SCOM Performance/Availability metrics on VMware guests - query

Hi,
I'm monitoring our VMware server infrastructure with SCOM 2012. I'm looking for a bit of clarification regarding the metrics which SCOM produces versus those provided by vCenter.
Basically this has stemmed from seeing alerts such as the Available Megabytes of Memory monitor, which alerted because SCOM reported the agent had less than 100 MB available, yet the vCenter memory reports show the guest well above the threshold, so it shouldn't have alerted. When discussing this with our server team, who rely on the vCenter metrics, it's causing them to lose faith in any metrics or alerts SCOM gathers - the various CPU/memory performance graphs from SCOM are not consistent with the vCenter ones.
Can anyone advise whether the SCOM metrics are (more) reliable than the native vCenter ones as indicators of health, and provide evidence for arguing the case either way?
Any help much appreciated - and Merry Christmas :-)

Hi,
I think you need to check the memory usage on the guest manually and compare the results. In your production environment, test and compare both sources, then decide which one is more reliable and suitable for your environment.
Niki Han
TechNet Community Support

Similar Messages

  • Availability Metric definition

    What metric type/data do I need to define to have Oracle use it for its "availability metric"? I've defined a simple target, but OEM can't find its availability status. It reports "Availability metric not found for target ...."
    Do I have to name it something specific, or is this an OEM configuration issue?

    emcli from command prompt - and no, I haven't confused it with emctl ;)
    [oracle@oem ~]$ emcli
    Summary of commands:
      argfile        -- execute emcli verb where verb and arguments are contained in a file
      help             -- get help using emcli
      setup        -- setup emcli to work with an EM management server (OMS)
      sync             -- synchronize the emcli client with an OMS
      Blackout Verbs
        create_blackout        -- create a blackout
        delete_blackout        -- delete a blackout
        get_blackout_details  -- get detailed info for a blackout
        get_blackout_reasons  -- list all blackout reasons
        get_blackout_targets  -- list targets for a blackout
        get_blackouts        -- list blackouts
        stop_blackout        -- stop a blackout
      Cloning Verbs
        clone_as_home         -- clone an Application Server Oracle home.
        clone_crs_home        -- clone a Oracle Clusterware Oracle home.
        clone_database_home   -- clone an Oracle home database.
        extend_as_home        -- extend an Application Server Oracle home.
        extend_crs_home       -- extend a Oracle Clusterware Oracle home.
        extend_rac_home       -- extend a RAC Oracle home.
      Credential Verbs
        set_credential        -- set preferred credentials for given users
        update_password        -- update passwords for a given target
      Execute Command Verbs
        execute_hostcmd        -- execute a host command
        execute_sql        -- execute a sql command
      Group Verbs
        create_group        -- create a group
        delete_group        -- delete a group
        get_group_members        -- list the members in a group
        get_groups             -- list all groups
        modify_group        -- modify a group
      Job Verbs
        delete_job             -- delete a specified job
        get_jobs             -- get a list of existing jobs
        retry_job             -- re-start a previously failed job execution
        stop_job             -- stop a specified job
        submit_job             -- submit a job
      Notification Verbs
        subscribeto_rule        -- subscribe user to rule with email notification
      Provisioning Verbs
        provision             -- provision a hardware server.
      Redundancy Group Verbs
        create_red_group        -- create a group
        modify_red_group        -- modify a redundancy group
      Services Verbs
        add_beacon          -- adds a beacon to monitoring beacons of service
        apply_template_tests  -- Apply an XML Service Template to a Target
        assign_test_to_target -- assigns a test-type to a specified target-type and version
        change_service_system_assoc   -- changes the system for a particular service
        create_aggregate_service        -- create an aggregate service
        create_service        -- creates a Service of given type
        delete_metric_promotion -- deletes a metric promotion on a service
        delete_test        -- deletes a Service Test
        disable_test        -- disables a Service Test monitoring
        enable_test        -- enables a Service Test monitoring
        extract_template_tests        -- Extract a Service Template as XML
        get_aggregate_service_info        -- get timezone region and availability evaluation function
        get_aggregate_service_members -- get sub-service members of the aggregate service
        modify_aggregate_service        -- modify an aggregate service
        remove_beacon          -- removes a beacon from monitoring beacons of service
        remove_service_system_assoc   -- removes the System for a Service of given type
        set_availability          -- sets availability type of service
        set_key_beacons_tests -- sets the key beacons and tests of a service
        set_metric_promotion -- promotes a system/test based metric to performance/usage metric of a service
        set_properties          -- sets properties on a test or test, beacon level
        sync_beacon        -- syncronize a beacon which is monitoring the target (reloads all collections to beacon)
      System Verbs
        create_system        -- create a system
        delete_system        -- delete a system
        get_system_members        -- list the members in a system
        modify_system        -- modify a system
      Target Data Verbs
        add_target             -- add a target to the repository
        delete_target        -- delete a specified target
        get_targets        -- get status and blackout info for targets
        modify_target        -- modify a target instance definition
        relocate_targets        -- relocate targets from one agent to another
      User Administration Verbs
        create_role        -- create a new role
        create_user        -- create a new user
        delete_role        -- delete an existing role
        delete_user        -- delete an existing user
        modify_role        -- modify an existing role
        modify_user        -- modify an existing user
    And to show it doesn't exist:
    [oracle@oem ~]$ emcli add_mp_to_mpa
    Error: The command name "add_mp_to_mpa" is not a recognized command.
    Oracle home is set to OMS. Maybe that's the issue?
    Well, my issue with the guide is that I'm not totally sure how to create a report within OEM. I tried and it totally blew up on me. What would be really nice is an end-to-end example. It doesn't matter if it's "just" an OS command that's going to be interpreted; but the examples in the guide are incoherent and unrelated, and some are incomplete. That makes it hard to learn and understand, and the missing reference manual doesn't make it better.
    I know of the DTDs (why is Oracle still using DTDs and not Schemas?) - however, they don't explain COMPUTE_EXPR, for instance (i.e. the expression language), and there are a lot of holes in the explanation of the different attributes and contexts. That said, I've found the DTDs helpful through XML Spy, because it controls what I can type where - less error-prone that way. (I know JDeveloper should be able to do that too, but it's too much work setting JDeveloper up for this stuff, and I don't even think JDeveloper supports DTDs for managing XML files.)
    I'm glad I have the latest version, but as you can see I still seem to be quite lost ;) I've looked many times and I cannot find the section that talks about "availability" metrics.
    I understand we can't have examples for everything, but a cohesive example running through the guide would help, as would a reference manual that goes into detail on each attribute/element. If the DTD were complete there, it should be fairly easy to get an initial version going.

  • How to analyse the performance by using RSRT and by seeing the query results

    Hi,
    I want to see the performance of the query in each landscape. I have executed my query using the transaction RSRT. How can we analyse whether the query requires aggregates or not?
    I have taken the number of records in the cube. I also saw the number of records in the aggregates, but I did not get a clear picture.
    I selected the options Aggregates, Statistics and Do not use cache. The query got executed and it displays one report, but I am unable to analyse the performance.
    Can anyone please guide me with the steps? Which factors do we need to consider from the performance point of view?
    Points will be rewarded.
    Thanks in advance for all your help.
    Vamsi

    Hi,
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day > check the query execution time.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try:
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the records transferred to the front end versus the records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    3. The +/- signs are the valuation of the aggregate; for example, -3 is a valuation of the aggregate's design and usage. ++ means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work. This is done by implementing the BW Statistics Business Content: install it, feed it data, and analyse through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Performance of BW InfoCubes:
    Go to SE38 and run the program SAP_INFOCUBE_DESIGNS.
    It will show dimension vs. fact table sizes in percent.
    If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    You can also go to T-code DB20, which gives you performance-related information such as:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc.
    Thanks,
    JituK

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; it involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop/Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this? (See the sketch after the DECODE listing below.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
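    A rough illustration of what may be going on, using simplified, hypothetical SQL (the column list and the shortened DECODE are for illustration only). When the folder item is defined on the decoded label, the generated predicate wraps STATUS in an expression, so the optimizer cannot relate it to the STATUS column's statistics or to any index on STATUS (if one exists), and it may fall back to the full scan of GL_JE_BATCHES seen in the "After Condition" plan:
    -- Condition on the decoded label: the optimizer sees an expression, not the column.
    SELECT jb.name, jb.status
    FROM   gl.gl_je_batches jb
    WHERE  DECODE(jb.status, 'P', 'Posted', 'U', 'Unposted', jb.status) = 'Posted';
    -- Condition on the raw code: the predicate stays on the real column, so its
    -- statistics (and any index, if present) can be used when choosing a plan.
    SELECT jb.name, jb.status
    FROM   gl.gl_je_batches jb
    WHERE  jb.status = 'P';
    One possible workaround in Discoverer Administration is to expose the raw status code as a separate folder item and apply the condition to that item (JOURNAL_BATCH1.STATUS = 'P') rather than to the decoded text.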

  • How can I perform this kind of range join query using DPL?

    How can I perform this kind of range join query using DPL?
    SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
    In this pdf : http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
    It shows how to perform a "two equality-conditions query on a single primary database", just like SELECT * FROM tab WHERE col1 = A AND col2 = B, using the entity join class, but it does not give a solution for the range join query.

    I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
    Your query is very simple to implement. You should perform the first part of the query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
    This is called "filtering" because you're iterating through entities that you obtain from one index, and selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
    An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
    If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smallest number of results. The fewer entities read, the faster the query.
    Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense -- filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
    --mark

  • Implementing BitLocker on Windows 7 Ultimate in a VMWare Guest

    Wow... prior to this problem, printer issues were the most annoying. I have a 32-bit instance of Windows 7 Ultimate installed in a VMware guest. The host is a brand new business-class Core i7 laptop with 16 GB RAM. I am trying to implement BitLocker in the guest OS, but the first issue was that it couldn't find a TPM (Trusted Platform Module). Understandable, considering it is in a virtual world. So I found the group policy option to allow BitLocker to work without a TPM; check. BitLocker let me go to the next step. I saved the startup key to a USB flash drive (which it prompts me to do), and it appears it did what it needed to do because it wouldn't let me go to the next step otherwise. There is then a checkbox with an option to check the system before disk encryption begins. I check that because I need to know it works 100% before I...
    This topic first appeared in the Spiceworks Community

    Hi,
    Have you tried using the manage-bde command to unlock this partition?
    manage-bde -unlock Volume -pw *********
    Andy Altmann
    TechNet Community Support

  • Same Numbers file opens fine under host but blank under VMWare guest

    Am working toward a daily-use virtual machine as an operating environment.  Using VMware Fusion 4 and Lion 10.7.4 for both host and guest.  Host and guest each have 4 GB RAM allocated and gobs of disk space.
    Have iWork '09 installed under both host and guest.  Trying to use a Numbers worksheet that's stored both in iCloud and Dropbox - doesn't matter which, same results.  This is the same physical file accessed from both host and guest.  In the host, the worksheet opens fine.  In the guest, Numbers '09 starts fine, but no worksheet contents show - the tree shows on the left, but the worksheet shows blank.  Have tried pulling in a backup copy and re-uploading to Dropbox.  Same results.  Same behavior for Pages '09.
    Other applications (other than iWork) seem OK so far.
    Anyone have an idea what may be going on?
    Thanks!

    UPDATE:
    I have backed up to Lion 10.7.3 in a new VMware guest and reinstalled iWork '09 from the retail disk into the guest.  No change.  Pages, Numbers and Keynote workspaces all show empty on the guest screen (even though the same files open fine on the host side).
    I have downloaded the latest Parallels, installed it, created a Lion 10.7.4 guest, installed iWork '09 from the retail disk, and got the same results.  The host works fine; the guest loads but shows a blank screen.
    I had a note from one person who said they had a similar setup and the same s/w versions all around.  He says his works fine under the guest.  So far, the only difference I can see is that he's running the Numbers and Pages apps he downloaded from the App Store, whereas I'm using an iWork retail disk.  We're both using iWork '09 version 2.1.
    Really now, could THAT be the problem? 
    Ideas anyone?
    Thanks.

  • "cannot perform a DML operation inside a query" error when using table func

    Hello, please help me.
    I created the following table function. When I use it with "select * from table(customerRequest_list);"
    I receive the error "cannot perform a DML operation inside a query".
    Can you solve this problem?
    CREATE OR REPLACE FUNCTION customerRequest_list(
    p_sendingDate varchar2:=NULL,
    p_requestNumber varchar2:=NULL,
    p_branchCode varchar2:=NULL,
    p_bankCode varchar2:=NULL,
    p_numberOfchekbook varchar2:=NULL,
    p_customerAccountNumber varchar2:=NULL,
    p_customerName varchar2:=NULL,
    p_checkbookCode varchar2:=NULL,
    p_sendingBranchCode varchar2:=NULL,
    p_branchRequestNumber varchar2:=NULL)
    RETURN customerRequest_nt
    PIPELINED
    IS
    ob customerRequest_object:=customerRequest_object(
    NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
    condition varchar2(2000 char):=' WHERE 1=1 ';
    TYPE rectype IS RECORD(
    requestNumber VARCHAR2(32 char),
    branchRequestNumber VARCHAR2(32 char),
    branchCode VARCHAR2(50 char),
    bankCode VARCHAR2(50 char),
    sendingDate VARCHAR2(32 char),
    customerAccountNumber VARCHAR2(50 char),
    customerName VARCHAR2(200 char),
    checkbookCode VARCHAR2(50 char),
    numberOfchekbook NUMBER(2),
    sendingBranchCode VARCHAR2(50 char),
    numberOfIssued NUMBER(2)
    );
    rec rectype;
    dDate date;
    sDate varchar2(25 char);
    TYPE curtype IS REF CURSOR; --RETURN customerRequest%rowtype;
    cur curtype;
    my_branchRequestNumber VARCHAR2(32 char);
    my_branchCode VARCHAR2(50 char);
    my_bankCode VARCHAR2(50 char);
    my_sendingDate date;
    my_customerAccountNumber VARCHAR2(50 char);
    my_checkbookCode VARCHAR2(50 char);
    my_sendingBranchCode VARCHAR2(50 char);
    BEGIN
    IF NOT (regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}$')
    OR regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}[[:space:]]{1}[[:digit:]]{2}:[[:digit:]]{2}:[[:digit:]]{2}$')) THEN
    RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,5));
    ELSIF (p_sendingDate IS NOT NULL) THEN
    dDate:=TO_DATE(p_sendingDate,'YYYY/MM/DD hh24:mi:ss','nls_calendar=persian');
    dDate:=trunc(dDate);
    sDate:=TO_CHAR(dDate,'YYYY/MM/DD hh24:mi:ss');
    condition:=condition|| ' AND ' || 'sendingDate='||'TO_DATE('''||sDate||''',''YYYY/MM/DD hh24:mi:ss'''||')';
    END IF;
    IF (p_requestNumber IS NOT NULL) AND (cbdpkg.isspace(p_requestNumber)=0) THEN
    condition:=condition|| ' AND ' || ' requestNumber='||p_requestNumber;
    END IF;
    IF (p_bankCode IS NOT NULL) AND (cbdpkg.isspace(p_bankCode)=0) THEN
    condition:=condition|| ' AND ' || ' bankCode='''||p_bankCode||'''';
    END IF;
    IF (p_branchCode IS NOT NULL) AND (cbdpkg.isspace(p_branchCode)=0) THEN
    condition:=condition|| ' AND ' || ' branchCode='''||p_branchCode||'''';
    END IF;
    IF (p_numberOfchekbook IS NOT NULL) AND (cbdpkg.isspace(p_numberOfchekbook)=0) THEN
    condition:=condition|| ' AND ' || ' numberOfchekbook='''||p_numberOfchekbook||'''';
    END IF;
    IF (p_customerAccountNumber IS NOT NULL) AND (cbdpkg.isspace(p_customerAccountNumber)=0) THEN
    condition:=condition|| ' AND ' || ' customerAccountNumber='''||p_customerAccountNumber||'''';
    END IF;
    IF (p_customerName IS NOT NULL) AND (cbdpkg.isspace(p_customerName)=0) THEN
    condition:=condition|| ' AND ' || ' customerName like '''||'%'||p_customerName||'%'||'''';
    END IF;
    IF (p_checkbookCode IS NOT NULL) AND (cbdpkg.isspace(p_checkbookCode)=0) THEN
    condition:=condition|| ' AND ' || ' checkbookCode='''||p_checkbookCode||'''';
    END IF;
    IF (p_sendingBranchCode IS NOT NULL) AND (cbdpkg.isspace(p_sendingBranchCode)=0) THEN
    condition:=condition|| ' AND ' || ' sendingBranchCode='''||p_sendingBranchCode||'''';
    END IF;
    IF (p_branchRequestNumber IS NOT NULL) AND (cbdpkg.isspace(p_branchRequestNumber)=0) THEN
    condition:=condition|| ' AND ' || ' branchRequestNumber='''||p_branchRequestNumber||'''';
    END IF;
    dbms_output.put_line(condition);
    OPEN cur FOR 'SELECT branchRequestNumber,
    branchCode,
    bankCode,
    sendingDate,
    customerAccountNumber ,
    checkbookCode ,
    sendingBranchCode
    FROM customerRequest '|| condition ;
    LOOP
    FETCH cur INTO my_branchRequestNumber,
    my_branchCode,
    my_bankCode,
    my_sendingDate,
    my_customerAccountNumber ,
    my_checkbookCode ,
    my_sendingBranchCode;
    EXIT WHEN (cur%NOTFOUND) OR (cur%NOTFOUND IS NULL);
    BEGIN
    SELECT requestNumber,
    branchRequestNumber,
    branchCode,
    bankCode,
    TO_CHAR(sendingDate,'yyyy/mm/dd','nls_calendar=persian'),
    customerAccountNumber ,
    customerName,
    checkbookCode ,
    numberOfchekbook ,
    sendingBranchCode ,
    numberOfIssued INTO rec FROM customerRequest FOR UPDATE NOWAIT;
    --problem point is this
    EXCEPTION
    when no_data_found then
    null;
    END ;
    ob.requestNumber:=rec.requestNumber ;
    ob.branchRequestNumber:=rec.branchRequestNumber ;
    ob.branchCode:=rec.branchCode ;
    ob.bankCode:=rec.bankCode ;
    ob.sendingDate :=rec.sendingDate;
    ob.customerAccountNumber:=rec.customerAccountNumber ;
    ob.customerName :=rec.customerName;
    ob.checkbookCode :=rec.checkbookCode;
    ob.numberOfchekbook:=rec.numberOfchekbook ;
    ob.sendingBranchCode:=rec.sendingBranchCode ;
    ob.numberOfIssued:=rec.numberOfIssued ;
    PIPE ROW(ob);
    IF (cur%ROWCOUNT>500) THEN
    CLOSE cur;
    RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,4));
    EXIT;
    END IF;
    END LOOP;
    CLOSE cur;
    RETURN;
    END;

    Now what exactly would be the point of putting a SELECT FOR UPDATE in an autonomous transaction?
    I think the OP should start by considering why he has a function with an undesirable side effect in the first place.
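    A minimal sketch of the read-only alternative being hinted at here: if the pipelined function only needs to display the row, the SELECT ... INTO inside the loop can be done without FOR UPDATE NOWAIT, so no row locks (i.e. no DML) are taken and the function can safely be called from a query. The WHERE clause below is hypothetical - the original statement has no filter at all - so substitute whatever key actually identifies the row.
    -- Inside the fetch loop: plain SELECT ... INTO, no locking.
    SELECT requestNumber, branchRequestNumber, branchCode, bankCode,
           TO_CHAR(sendingDate,'yyyy/mm/dd','nls_calendar=persian'),
           customerAccountNumber, customerName, checkbookCode,
           numberOfchekbook, sendingBranchCode, numberOfIssued
    INTO   rec
    FROM   customerRequest
    WHERE  branchRequestNumber = my_branchRequestNumber;  -- hypothetical key filter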

  • Query.setRange() not available on standard javax.jdo.Query interface?

    Hi Kodo people, I'm trying out Kodo 3.2.0, most specifically for the
    spiffy new setRange() functionality. However, I'm somewhat confused
    because while the docs clearly indicate that the setRange() method is
    available on the standard javax.jdo.Query interface, it's not available
    on the javax.jdo.Query class that ships with Kodo 3.2.0. I have to cast
    my query to a KodoQuery in order for this to work. Is this expected?
    Thanks,
    -Mike
    Michael Allen
    Technical Lead
    PGP Corporation

    I think you missed the note at the beginning of the query chapter in the docs:
    Much of the functionality we discuss in this chapter is new to JDO 2. Though
    Kodo supports all of the features defined in the following sections, many JDO
    implementations may not. Additionally, because official JDO 2 jars are not yet
    available, you will have to cast your query objects to kodo.query.KodoQuery to
    access any JDO 2 APIs. The UML diagram above depicts these APIs in bold. For
    simplicity, casts have been left out of the example code throughout the chapter.

  • ORA-14551: cannot perform a DML operation inside a query

    I have a Java method which is deployed as an Oracle function. This Java method parses a huge XML file and populates the data into a set of database tables.
    I have to call this Oracle function from a Unix shell script using sqlplus. The value returned by this function will be used by the shell script to decide what to do next.
    I am calling the Oracle Java function as follows in the shell script:
    echo "SELECT XML_TABLES.RUN_XML_LOADER('$P1','$P2','$P3','$P4') FROM DUAL;\n" | sqlplus $DB_USER > $LOG
    This gives error - "ORA-14551: cannot perform a DML operation inside a query".
    If I have to add an AUTONOMOUS_TRANSACTION pragma to this Java function, where do I add it, considering that the definition of the function is in a Java class? Can we do it in the call spec?
    create or replace package XML_TABLES is
    function RUN_XML_LOADER(xmlFile IN VARCHAR2,
    xmlType IN VARCHAR2,
    outputDir IN VARCHAR2,
    logFileDir IN VARCHAR2) RETURN VARCHAR2 AS
    LANGUAGE JAVA NAME 'XmlLoader.run
    (java.lang.String, java.lang.String, java.lang.String, java.lang.String)
    return java.lang.String';
    end XML_TABLES;
    If not, is there any other way to achieve this?
    Thanks in advance.
    Sunitha.

    "If I have to add an AUTONOMOUS_TRANSACTION pragma to this Java function..." - you'd have to write a PL/SQL function that calls the Java stored procedure. But I would caution you about using that pragma: it introduces tremendous complexity into the processing.
    As I see it, you only need the function to return a result code, so why not use a procedure with an OUT parameter?
    Cheers, APC
    Of course Yoann's suggestion of using an anonymous block would work too.
    Message was edited by:
    APC
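    A minimal sketch of the anonymous-block approach (the bind variable name and size are illustrative, not from the original post): have the shell script pipe this into sqlplus in place of the SELECT ... FROM DUAL. Because the function is then invoked from PL/SQL rather than from a query, it is free to perform DML, and the script can still pick the returned value out of the spooled log.
    -- Text the script would send to sqlplus instead of the SELECT statement.
    VARIABLE result VARCHAR2(4000)
    BEGIN
      :result := XML_TABLES.RUN_XML_LOADER('$P1', '$P2', '$P3', '$P4');
    END;
    /
    PRINT result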

  • VMWare Guests can't bridge in to Wifi

    Network setup: WLC4402, 1141 APs.  DHCP is required on all SSIDs. 
    A co-worker has a setup where his laptop runs VMware and is attached to Wi-Fi.  On his guest virtual machines, he notices that they work fine in NAT mode, but when he tries to bridge them onto the Wi-Fi network, DHCP requests time out.  I duplicated his configuration with VMware Player and got the same behavior.
    When we plug in to wired connections, the guests are able to bridge onto the network just fine.
    Any ideas on this?  I'm thinking perhaps the DHCP-required option may be preventing the VMware guests from bridging.

    Forgot to mention we're running 6.0.202.0 on the 4402.  Haven't bothered upgrading to 7.0 as I don't think there are any new features.
    Sounds like changing the APs to H-REAP mode is the best fix.  Our network is already configured for trunking so it shouldn't be that big of an issue.

  • I've installed Photoshop CS5 on a new laptop but it doesn't recognize my RAW files.  Additionally, repeated attempts to perform available updates have failed.  The laptop has Windows 8.1 64-bit.

    I've installed Photoshop CS5 on a new laptop but it doesn't recognize my RAW files.  Additionally, repeated attempts to perform available updates have failed.  The laptop has Windows 8.1 64-bit.

    I don't know who you talked to, but some of the support techs tend to look at CS5 as "no longer sold or supported" and usually don't want to mess with it.
    We are mostly user-volunteers and have time to help you out. Some of us get perks from Adobe, but we are not employees.
    I actually gave you the links to the two very important CS5 updates, but you will want to peruse this general list: Adobe - Photoshop : For Windows
    and get the Bridge CS5 updates as well: Adobe - Bridge : For Windows

  • Deploy Server 2012 to VMWare Guest using Paravirtual SCSI

    Hi,
    I am attempting to use boot media to boot a VMware guest OS in order to image it with Server 2012.
    I have imported the following drivers from the x64 VMware Tools:
    VMware PVSCSI Controller
    VMware PCI Ethernet Adapter
    vmxnet3 Ethernet Adapter
    I have added the drivers to the x64 boot image in SCCM 2012 R2, then went ahead and created the boot media.
    When the VMware guest boots up the boot image, I can manually configure an IP address and successfully retrieve the assigned task sequence (based on a manually imported computer object/MAC address).
    Long story short... I realized that the Imaging process is failing at the Partition/Format section of the Task Sequence.
    Upon further investigation I can clearly see that the "C:\ Drive" is not being detected.
    What am I missing? Are there any suggestions out there, or has someone else had similar issues in their environment?

    Hi,
    The smsts.log explicitly reflects all activity, so if there is a specific point in time you are curious about, you can easily verify it by checking the log.
    Press F8 and use the Diskpart command to format the partition.
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Getting error SQL Error : ORA-14551: cannot perform a DML operation inside a query

    Hi gurus,
    Your help is greatly appreciated.
    I am making some changes to a function in an existing package. Introducing the new check below, I am updating one of the tables based on an IF condition:
           IF  numALLOWED_COUNT >= numLAST_COUNT_ADDED+1  THEN
                     blnGDS_Allowed :=True;
                      varSTMT := 'UPDATE PROD.TMS_PROCESS_COUNTER ';
                      varSTMT := varSTMT ||' SET last_count_added = last_count_added+1';
                      varSTMT := varSTMT ||' WHERE process_name = ''DAILY_GDS_COUNT''';
                      varSTMT := varSTMT ||' AND COUNTER_IND = ''750FD130''';
                     PROC_LOG('Update Tms_Process_counter varSTMT --' || varSTMT);
                     IF INSERT_BATCH(99,varSTMT) > 0 THEN
                        NULL;
                     END IF;
    Function for INSERT_BATCH:
    FUNCTION INSERT_BATCH(numTABLE_ID IN NUMBER, varSQL_STATEMENT IN VARCHAR2) RETURN NUMBER IS
    varINSERT_BATCH_STMT  VARCHAR2(32767)     := NULL;
    varADD_REC_TYPE       BATCH_TABLES.ADD_REC_TYPE%TYPE;
    BEGIN
        PROC_LOG( 'INSIDE INSERT_BATCH IRC : ' || varSQL_STATEMENT );  --IRC 9/20 UC
        INSERT INTO BATCH_STATEMENT(QUEUE_ID,TABLE_ID,STATEMENT,QUEUE_SEQUENCE_ID)
        VALUES (numQUEUE_ID,numTABLE_ID,varSQL_STATEMENT,1);
    RETURN 1;
    EXCEPTION WHEN OTHERS THEN
        PROC_LOG('Failed in INSERT_BATCH');
        PROC_LOG('SQL Error : ' || SUBSTR(SQLERRM,1,1000));
        RETURN -1;
    END INSERT_BATCH;
    desc PROD.BATCH_STATEMENT
      QUEUE_ID           NUMBER(15)                 NOT NULL
      TABLE_ID           NUMBER(2)                  NOT NULL
      STATEMENT          VARCHAR2(4000 BYTE)        NOT NULL
      QUEUE_SEQUENCE_ID  NUMBER(5)                  NOT NULL
    Somehow, when it calls INSERT_BATCH, it gives me the error below in the logs:
    04:01:41 - Update Tms_Process_counter varSTMT --UPDATE PROD.TMS_PROCESS_COUNTER  SET last_count_added = last_count_added+1 WHERE process_name = 'DAILY_GDS_COUNT' AND COUNTER_IND = '750FD130'
    04:01:41 - INSIDE INSERT_BATCH IRC : UPDATE PROD.TMS_PROCESS_COUNTER  SET last_count_added = last_count_added+1 WHERE process_name = 'DAILY_GDS_COUNT' AND COUNTER_IND = '750FD130'
    04:01:41 - Failed in INSERT_BATCH
    04:01:41 - SQL Error : ORA-14551: cannot perform a DML operation inside a query

    Some how when its calling the insert_batch , its giving me the error in the logs as below:
    04:01:41 - SQL Error : ORA-14551: cannot perform a DML operation inside a query
    Yes - and the exception is telling you EXACTLY what the problem is. You have a query
    IF INSERT_BATCH(99,varSTMT) > 0 THEN
    And you are performing a DML operation inside that query:
    INSERT INTO BATCH_STATEMENT(QUEUE_ID,TABLE_ID,STATEMENT,QUEUE_SEQUENCE_ID)
        VALUES (numQUEUE_ID,numTABLE_ID,varSQL_STATEMENT,1);
    Like the exception says: you can't do that.
    You need to call the function using PL/SQL and capture the return value into a variable. Then test that variable:
    myVar := INSERT_BATCH(99,varSTMT);
    IF myVar > 0 THEN
      NULL;   -- original IF body
    END IF;

  • Performance monitoring metrics required to identify memory and I/O issues

    Hello All,
    Can someone come up with ideas on the metrics required to capture memory or I/O issues in SQL Server?
    I would like to know the exact metrics to look at when my clients report memory or I/O performance issues.
    thank you
    hemadribabu
    hemadri

    Hi Hemadribabu,
    According to your description, you want to know the performance monitoring metrics for memory and I/O. For memory, there are six top SQL monitor metrics for analysis: Avg. CPU Queue Length, Avg. Disk Queue Length, Memory Pages/Sec, Latch Wait Time, Buffer Page Life Expectancy and Average Lock Wait Time. These metrics provide ways to quickly understand the general performance characteristics of your system (a sketch of reading some of them from inside SQL Server follows this reply). For more information, you can review the following articles.
    https://www.simple-talk.com/sql/sql-tools/top-six-sql-monitor-metrics-for-analysis/
    http://www.manageengine.com/products/applications_manager/sql/memory-monitoring.html
    Regarding I/O issues, there are several factors that affect I/O, such as overloaded disks, poorly defined queries, poorly performing disks and so on. For more information about disk I/O, you can review the following blog.
    http://blog.scoutapp.com/articles/2011/02/10/understanding-disk-i-o-when-should-you-be-worried
    Regards,
    Sofiya Li
    Sofiya Li
    TechNet Community Support
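    As a starting point, a hedged T-SQL sketch (not from the reply above) for reading a few of the memory-related counters just mentioned directly from SQL Server. Counter names are as exposed by sys.dm_os_performance_counters; Memory Pages/Sec and the disk queue lengths are operating-system counters that come from Performance Monitor rather than this DMV, and the latch/lock averages are ratio counters best trended over time rather than read once.
    -- Snapshot of buffer, latch and lock counters from inside SQL Server.
    SELECT [object_name], counter_name, instance_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name IN ('Page life expectancy',
                            'Average Latch Wait Time (ms)',
                            'Average Wait Time (ms)');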

Maybe you are looking for

  • Adding Journal Entry through Incoming Payments,

    Hi Friend,               I need help.... I need to create a journal entry using an incoming payment. I have different types of deduction accounts. All deductions need to display in the journal entry and also be reflected in the invoice. Then only invoice going

  • iPad mini Retina cannot share hotspot via Bluetooth with iPhone 5

    iPad mini Retina cannot share a hotspot via Bluetooth with iPhone 5, but when my iPhone 5 shares a hotspot via Bluetooth with the iPad, I can share.

  • SharePoint lists.asmx web service error in InfoPath and SOAPUI

    We have a SharePoint 2013 farm with web applications using Claims Authentication. I'm trying to create an InfoPath 2013 form where I want to add lists.asmx (https://servername/_vti_bin/lists.asmx) web service to receive/submit data, but I'm getting b

  • Firefox will not go to mail after log in to hotmail.co.uk

    Message: Internal server error - read, followed by a long reference. I have uninstalled and reinstalled Firefox without clearing the problem. It had worked for 2+ years without a problem. == URL of affected sites == http://hotmail.co.uk == User Agent == Mozilla

  • Multiple layouts - each patch a different layout

    Hello, I am looking for a way to change the layout for each patch individually, instead of just having one layout per concert. I'm working on a one man show, so let's say for one song I am singing, so I want lots of parameters for the voice plugins.