Virtual machine performance degradation

The performance of my Virtual Machine seems to degrade overnight.
After installation, I connect to the Administration Services Console and whenever I click on a menu item, the response is very fast.
I switch the virtual machine off, and the next day, it runs very slowly. I have to wait a long time to open the Administration Services Console, and when I click on a menu item, I have to wait a long time for a response.
This keeps getting worse, until I have to reinstall the virtual machine and it all starts over again.
I wonder if it has something to do with the way I switch off the virtual machine. Should I save the state, turn it off, or shut down the guest operating system? I do not need to save the state of the virtual machine, or go back to a particular point in time.
Has anyone come across this problem yet?
Best regards
Juan Algaba

The help files are good at explaining how to interpret each tab; however, there is also a use case section that may guide you in a good direction: http://pubs.vmware.com/vrealizeoperationsmanager-6/topic/com.vmware.vcom.core.doc/GUID-78A53AA0-EA64-4C4C-ACFF-D2E3C03A6070.html

Similar Messages

  • Virtual machine performance (vROps & vCOps)

    Hello Team,
    I'm very new to vCOps and vROps.
    I have a question regarding virtual machine performance; I request you to answer, please!
    Question: while checking virtual machine performance, which VM resource sub-factors (CPU, memory and disk) do I need to check? I found a lot of sub-categories inside each VM resource, and I selected a few (like workload, CPU usage, disk I/O, etc.). I found spikes in the resources, but I still could not come to one final conclusion, and I am not sure how to give correct advice to the requester. Please help me with this: how do I deal with this kind of situation?
    Thanks in advance ...
    Sandy

    The help files are good at explaining how to interpret each tab; however, there is also a use case section that may guide you in a good direction: http://pubs.vmware.com/vrealizeoperationsmanager-6/topic/com.vmware.vcom.core.doc/GUID-78A53AA0-EA64-4C4C-ACFF-D2E3C03A6070.html

  • Virtual machine performance data

    Hi,
    I have a cloud service in our lab environment. In it, I have one virtual machine and one web role. When I access that virtual machine from the Azure console, I see the dashboard and monitor tabs. On the monitor tab there are graphs for CPU, memory, disk and network related metrics (attached screenshot1). I want to know:
    1) Where are these data stored, and how are they displayed in the graphs?
    2) I am using the below link (REST API) to collect these details, but I did not get the last five minutes of data:
    http://convective.wordpress.com/2014/06/22/using-azure-monitoring-service-with-azure-virtual-machines/

    Hi;
    Thank you for your post.
    The data is stored in a counter (an internal database table). The display works very similarly to what you would see in an OS resource monitor.
    The data is refreshed every 5 minutes, hence the lag you might have noticed.
    Warm Regards
    Prasant

  • How to improve the scalability of an application hosted in a Windows Azure virtual machine

    Hi,
    We are going to build and host an application in a VM in the Azure environment, where the server (an application in IIS) could receive a huge number of requests. How can we handle the scalability of the application? Does Azure have any predefined support?
    With Regards,
    Selvam.M

    Azure offers both internal and external load balancing capabilities.
    It offers load balancing between virtual machines in the same load-balanced set.
    Check here:
    http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-load-balance/
    You can create multiple virtual machines hosting your IIS websites, then use the service's load-balanced IP.
    For virtual machine performance, you can benefit from Azure's auto-scale capabilities. Autoscale will automatically turn on virtual machines when certain performance thresholds are crossed (memory, CPU usage) and turn them off when it's calm again.
    Look here:
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-scale/
    Only Standard virtual machines offer these capabilities; Basic VMs do not.
    Regards, Samir Farhat, Infrastructure and Virtualization Consultant || Virtualization, Cloud, Azure || Follow and ask here: https://buildwindows.wordpress.com
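    For illustration, a minimal sketch of adding a VM to a load-balanced set with the classic (Service Management) Azure PowerShell module of that era; "myservice", "web1" and the endpoint/set names are placeholders:

    # Add a load-balanced HTTP endpoint to an existing VM; repeat for each VM
    # in the set, using the same -LBSetName so they share the public IP
    Get-AzureVM -ServiceName "myservice" -Name "web1" |
        Add-AzureEndpoint -Name "http" -Protocol "tcp" -LocalPort 80 -PublicPort 80 `
            -LBSetName "weblb" -ProbeProtocol "http" -ProbePort 80 |
        Update-AzureVM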

  • VMware Fusion Performance: Bootcamp Partition or Virtual Machine?

    I'd like to run ArcGIS 9.3 in Windows XP using VMware Fusion. Can anyone comment on the virtues/drawbacks of using a Boot Camp partition versus creating a VMware virtual machine?
    With a Boot Camp partition I can gradually increase the size of the partition as it becomes full, using Drive Genius, correct?
    What about performance?
    Thanks!

    Visit MacTech.com and read their two benchmark reviews of Parallels, VMware Fusion, and Boot Camp.
    You cannot "gradually increase the size" of a Boot Camp partition. To change the size you must first delete the existing partition then create a new, larger partition. Doing so will delete the entire Windows system, so be sure to back it up beforehand.

  • 'SERVERNAME' failed to perform the operation. The virtual machine is not in a valid state to perform the operation. (Virtual machine ID "GUID")

    DPM 2012 R2
    Server 2012 R2 and Server 2012 (R1), with both CSV volumes and Hyper-V hosts with non-clustered volumes.
    iSCSI attached storage (Nimble SAN).
    Every time a VM backup is made, this error message occurs:
    "'SERVERNAME' failed to perform the operation. The virtual machine is not in a valid state to perform the operation. (Virtual machine ID "GUID")"
    The backup seems to finish fine most of the time, but it also seems to be causing some VMs to end up in a hung/frozen state when this happens.

    Hi Nordland,
    So it seems that there is some non-backup operation trying to occur on the virtual machine while the backup is in progress. Does this VM have replication enabled? Is there a Hyper-V Replica server to which this machine gets replicated?
    Regards,
    Siddharth Jha
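
    As a quick way to check the replication question, a sketch assuming the Hyper-V PowerShell module on the host; "SERVERNAME" is a placeholder for the VM name:

    # Shows replication state, mode and frequency for the VM, if any is configured
    Get-VMReplication -VMName "SERVERNAME"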

  • VMM is unable to perform this operation without a connection to a Virtual Machine Manager management server.

    Hi,
    I'm running SCOM and VMM integration. For the most part everything is working. However, I get lots of alerts generated regarding my SCVMM server.
    This is the guide I used:
    https://technet.microsoft.com/en-ca/library/hh882396.aspx
    SCOM + VMM 2012 R2 UR4
    One of the errors (which appears to be the root issue):
    The PowerShell script failed with the below exception:
    System.Management.Automation.CmdletInvocationException: VMM is unable to perform this operation without a connection to a Virtual Machine Manager management server. (Error ID: 1615)
    Use the Get-VMMServer cmdlet or the -VMMServer parameter to connect to a Virtual Machine Manager management server. For more information, type at the command prompt: Get-Help Get-VMMServer -detailed.
    At line:109 char:12
    + $vmm = Get-SCVMMServer $VMMServer -Credential $cred;
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate)
         at System.Management.Automation.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext)
         at System.Management.Automation.Interpreter.ActionCallInstruction`6.Run(InterpretedFrame frame)
         at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
         at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
    Script Name: GetStorageSubsystemPerfScript
    One or more workflows were affected by this.
    Workflow name: Microsoft.SystemCenter.Advisor.StorageSubsystem.Performance.PerfCollection
    Instance name: SCVMM01.Domain.com
    Instance ID: {B5175EAC-D64D-2553-D567-F19B5C864BBD}
    Management group: OMGRP82

    Can you connect from PowerShell? get-vmmserver <VMMServerComputerName>
    If this doesn't work, it's likely that your VMM server isn't working. Check the Application log to see if there are errors related to the VMM service.
    You can also collect traces as described here and attach them to this thread.
    Are you using a domain administrator account? (This is the default allowed after SCVMM is installed, until other users are added.)
    Also try restarting the VMM server or the VMM services once.
    This can occur if the SQL database owner is indicated as NULL. Try adding the SQL SA account as db owner.
    Also refer to this thread:
    https://social.technet.microsoft.com/Forums/en-US/a4199cb2-ccbf-4bb0-96c1-7a640143f81b/error-id-1615-vmm-is-unable-to-perform-this-operation?forum=virtualmachingmgrhyperv
    Thanks, S K Agrawal
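
    For what it's worth, a minimal sketch of that first check from a PowerShell prompt; the module name is as shipped with VMM 2012, and "scvmm01.domain.com" is a placeholder for your management server:

    # Load the VMM cmdlets and test the connection to the management server
    Import-Module virtualmachinemanager
    Get-SCVMMServer -ComputerName "scvmm01.domain.com"

    If this fails with the same error 1615, the problem is likely with the VMM server itself rather than the SCOM integration.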

  • Can a virtual machine system make a negative impact on database performance?

    Can a virtual machine system make a negative impact on database performance?
    I want to set up a virtual machine system on my server and then install an Oracle 10g database on it. But I am not sure whether a virtual machine system can have any negative impact on database performance.
    Thank you

    The virtual machine software vendor must have certified and provided some performance figures, like "running Oracle on our VM would have a performance hit of n%". VMware has such a figure published (8% if I recall correctly).
    Besides, the load on the physical server, apart from the virtual machine running, would also have an impact on performance.
    Other factors would be:
    1. How quickly the host OS can cater to the guest OS's RAM requirements (it would be good if you could pre-allocate RAM to the VM).
    2. With the assumption that you will be running an OS within an OS, the files on the host OS forming the VM should be pre-allocated in terms of space, avoiding runtime extensions.

  • Why does increasing logical processors in Hyper-V for a virtual machine increase performance?

    To my understanding, virtual machines (assuming relative weight is even) get an equal share of the processor. When the VM is given some processing time, it shouldn't matter whether the VM sees a single core or multiple cores, since the processing comes out of an array of processors on the Hyper-V host regardless. However, when I change the "logical processor" setting from 1 to 4 in Hyper-V for a particular VM, I see a huge performance increase.
    Specs on my current setup are approximately: the Hyper-V host has 32 GB RAM, 24 logical processors (wrong word?), and a few TB of space.
    VMs are allocated 6 GB RAM, 1 or 4 cores, and a few hundred GB of space, and run 2008 R2.
    I've experienced similar things on past Hyper-V setups.

    Virtual machines in every hypervisor that I know of are able to use additional virtual CPUs on which to schedule additional concurrent threads of execution.
    It's exactly that layer of abstraction between physical machine and virtual machine that makes it not work the way you describe. The VM is not aware of how many cores the physical machine has. The VM does not "see" the physical CPUs (or cores) on the physical machine. The hypervisor gives the VM however many virtual CPUs, and the guest OS uses those virtual CPUs to schedule additional concurrent threads... The total number of virtual CPUs the hypervisor hands out to the virtual machines can even exceed the number of physical CPUs/cores in the machine.
    Said another way, a virtual machine, when assigned a single vCPU, schedules its threads as if it only had one CPU. It doesn't matter how many cores are in the underlying physical machine. (Though it is worth noting that the physical machine may schedule that one VM thread on one physical core for one thread quantum, or time slice, and then run it on a different physical core the next time it's scheduled to run. The virtual machine has no idea any of that is happening, though. All it knows is that it can only schedule one thread at a time, one after the other, because it only has one virtual CPU.)
    And let's be very clear about our terms here. You assign vCPUs, or virtual CPUs, to VMs, not "cores". Cores (by which I assume you mean physical processing units that share a single physical socket) do not equal vCPUs. There is a layer of abstraction between them. If a VM only has 1 vCPU assigned to it, it can only schedule one thread to run at a time. That is why your VM runs faster with 2-4 virtual CPUs assigned to it: because it is now able to schedule more than one thread to run concurrently.
    However, there is definitely a law of diminishing returns here, as an excessive number of virtual CPUs incurs a higher and higher overhead cost in things like synchronization, etc.
    There are slight differences between how the Hyper-V and VMware hypervisors schedule virtual machine threads for execution, and they differ in their approach to physical resource "oversubscription," but this is a good general concept to start with.
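
    For reference, a minimal sketch of changing the vCPU count with the Hyper-V PowerShell module; "MyVM" is a placeholder, and the VM must be off before its processor count can be changed:

    Stop-VM -Name "MyVM"
    # Assign 4 virtual CPUs; the guest can then schedule 4 threads concurrently
    Set-VMProcessor -VMName "MyVM" -Count 4
    Start-VM -Name "MyVM"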

  • How is the performance of a Mac Pro if I use it as a host for Windows and Linux virtual machines?

    How is the performance of a Mac Pro if I use it as a host for Windows and Linux virtual machines?
    I am planning to buy a high-performance PC to run my Windows and Linux servers as virtual machines for my testing purposes.
    Initially I planned to build my own computer with the recommended configuration, but considering space constraints and cooling factors, I think a Mac Pro could be a choice. I need some input on whether a Mac Pro (Intel Xeon E5, 12 GB RAM) is good for running virtual machines.

    You could even run Windows natively and still run your VM servers.
    I have seen reports on MacRumors and elsewhere of people running Windows natively as well as in VMs (you can also do testing and run Mavericks in a VM under Mavericks).
    You get the fast internal PCIe SSD, plus 6 or 8 cores, and 32-64 GB RAM. Of course, for $5,000 for an 8-core, some Thunderbolt storage and 32 GB/64 GB RAM, you can buy some serious hardware.

  • WMIService not returning virtual machines in the list after performing a Hyper-V VM export/import

    Hello,
    Win 8.1, VBscript
    After moving 3 servers to a new PC, my scripts don't work because WMIService is not returning the virtual machines in the list from the query. The following script shows the problem:
    Option Explicit

    WScript.Echo "vmStatus"

    Dim WMIService
    Dim VMList
    Dim VM

    'Get an instance of the 'virtualization' WMI service on the local computer
    Set WMIService = GetObject("winmgmts:\\.\root\virtualization\v2")

    'Get all the Msvm_ComputerSystem objects (the host plus each VM)
    Set VMList = WMIService.ExecQuery("SELECT * FROM Msvm_ComputerSystem")

    WScript.Echo "count "
    WScript.Echo VMList.Count

    For Each VM In VMList
        WScript.Echo "========================================"
        WScript.Echo "VM Caption: " & VM.Caption
        WScript.Echo "VM Name: " & VM.ElementName
        WScript.Echo "VM GUID: " & VM.Name
        WScript.Echo "VM State: " & VM.EnabledState
    Next
    This is the output:
    vmStatus
    count
    1
    ========================================
    VM Caption: Hosting Computer System
    VM Name: WBCDEVIDEPC2
    VM GUID: WBCDEVIDEPC2
    VM State: 2
    On the previous machine there would be 4 machines in the list: the host plus 3 VMs.
    Any ideas why this might be happening? Clearly winmgmt is running, or the host machine wouldn't have been returned. I've compared as many things as I can with the previous PC (also Win 8.1) and can't find any differences. I'm assuming there is some basic thing about the new PC that isn't set up correctly, but I can't figure it out.
    The VMs that were moved to the new hosting machine are in fact working correctly. There's a domain controller, a TFS server and a build machine. The domain is functioning and clients can reach the TFS just fine.
    Thanks.
    Best Regards,
    Alan

    Hi jrv,
    Got it, you're right, it's intuitive, and cool. Thanks for talking me into it. It'll take time to get good, but I don't need to be good right now.
    While I was searching around (just before your last post), I found this site and, as you say, it seems pretty straightforward.
    So when I type get-vm in my PowerShell, I get nothing back. That's why I was a little slow on the uptake; I thought I was doing something wrong.
    My VMs are working just fine. The domain is up, TFS is working, but my VMs are simply not being reported through get-vm.
    Where should I turn next?
    Best Regards,
    Alan
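
    One thing worth ruling out, as a hedged suggestion (the thread never confirms the cause): both the virtualization WMI namespace and get-vm typically return an empty list when the session lacks Hyper-V administrator rights, so try the same query from an elevated PowerShell prompt:

    # From an elevated prompt on the Hyper-V host
    Import-Module Hyper-V
    Get-VM | Select-Object Name, State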

  • Performance degradation encountered while running BOE in a clustered setup

    Problem statement:
    We have a clustered BOE setup in production with 2 CMS servers (named boe01 and boe02). The Mantenix application (a standard J2EE application in a clustered setup) points to these BOE services, hosted on virtual machines, to generate reports. As soon as the BOE services on both boe01 and boe02 are up and running, performance degradation is observed, i.e. response times vary from 7 sec to 30 sec.
    The same setup works fine when the BOE services on boe02 are turned off, i.e. only boe01 is up and running. No drastic variation is noticed.
    BOE details: SAP BusinessObjects environment XI R2 SP3 running on Windows 2003 servers (virtual machines).
    Possible problem areas as per our analysis:
    1) Node 2 virtual machine issue:
    This is currently part of the production infrastructure, so problem-assessment testing is not possible.
    2) BOE configuration issue:
    A comparison report to check the build between BOE 01 and BOE 02: the support team has confirmed no major installation differences apart from a minor operating system setting difference. The question is whether there is some configuration/setting we are missing.
    3) Possible BOE cluster issue:
    Tests in a staging environment (with a similar clustered BOE setup) have proved inconclusive.
    We require your help with:
    - Root cause analysis for this problem.
    - Any troubleshooting action henceforth.
    Another observation from our WebLogic support engineers for the above setup, which may or may not be related to the problem, is mentioned below.
    When the services on BOE_2 are shut down and we try to fetch a particular report from BOE_1 (which is running), the following WARNING/ERROR comes up:
    07/09/2011 10:22:26 AM EST> <WARN> <com.crystaldecisions.celib.trace.d.if(Unknown Source)> - getUnmanagedService(): svc=BlockingReportSourceRepository,spec=aps<BOE_1> ,cluster:@BOE_OLTP, kind:cacheserver, name:<BOE_2>.cacheserver.cacheserver, queryString:null, m_replaceable:true,uri=osca:iiop://<BOE_1>;SI_SESSIONID=299466JqxiPSPUTef8huXO
    com.crystaldecisions.thirdparty.org.omg.CORBA.TRANSIENT: attempt to establish connection failed: java.net.ConnectException: Connection timed out: connect  minor code: 0x4f4f0001  completed: No
         at com.crystaldecisions.thirdparty.com.ooc.OCI.IIOP.Connector_impl.connect(Connector_impl.java:150)
         at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.createTransport(GIOPClient.java:233)
         at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClientWorkersPool.next(GIOPClientWorkersPool.java:122)
         at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.getWorker(GIOPClient.java:105)
         at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.startDowncall(GIOPClient.java:409)
         at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshalBase(Downcall.java:181)
         at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshal(Downcall.java:298)
         at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.preMarshal(DowncallStub.java:250)
         at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.setupRequest(DowncallStub.java:530)
         at com.crystaldecisions.thirdparty.com.ooc.CORBA.Delegate.request(Delegate.java:556)
         at com.crystaldecisions.thirdparty.org.omg.CORBA.portable.ObjectImpl._request(ObjectImpl.java:118)
         at com.crystaldecisions.enterprise.ocaframework.idl.ImplServ._OSCAFactoryStub.getServices(_OSCAFactoryStub.java:806)
         at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.do(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getUnmanagedService(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.AbstractStubHelper.getService(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(Unknown Source)
         at com.crystaldecisions.sdk.occa.managedreports.ps.internal.a$a.getService(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
         at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
    We see the above warning come 2 or 3 times before the request is processed, and then we see the report. We have checked our configs for the cluster but didn't find anything concrete.
    Is this normal behavior of the software, or can we optimize it?
    Any assistance that you can provide would be great

    Rahul,
    I have exactly the same problem running BO 3.1 SP3 in a 2-machine cluster on AIX. Exact same full install on both machines. When I take down one of the machines, the performance is much better.
    An example of the problem: when I run the command ./ccm.sh -display -username administrator -password xxx on either box while they are both up and running, I sometimes receive a timeout error (over 15 mins).
    If I run SQL*Plus directly on the boxes to the CMS DB, the response is instant. Tnsping of course shows no problems.
    When I bring down one of the machines and run ./ccm.sh -display again, it brings back results in less than a minute...
    I am baffled as to the problem, so I was wondering if you found anything from your end.
    Cheers
    Chris

  • Remote Desktop Services Role on a Virtual Machine (VM) Requirements

    Does MS recommend installing the RDS role on hardware or on a virtual machine? I have a use case with about 35 people who will be using Internet Explorer and possibly running an additional piece of software. I'm having trouble determining whether the RDS role on a virtual machine will be able to sustain the load of so many users. Should the same performance metrics used in hardware selection be applied to virtual machines? We are using VMware on pretty powerful Dell hardware, which is also hosting 120 existing VMs as of now. So my question is: would a VM with 4 CPUs, 8 GB of RAM, an 80 GB virtual disk and a 1 Gbps NIC be able to handle the job?

    Hi,
    Thank you for posting in the Windows Server Forum.
    There is no particular requirement to run RDS on a physical or a virtual machine. But if you want to install the RD Virtualization Host role, you need to ensure the Hyper-V role is installed, because "when the RD Virtualization Host role service is installed, Server Manager checks to see if Hyper-V is installed. If Hyper-V is not installed, Server Manager will install it". And the Hyper-V role cannot be installed on a virtual machine, so for that you need a physical machine. All the other RDS roles can be installed on a virtual machine as well.
    Install the Remote Desktop Virtualization Host Role Service
    In addition, please check the articles below.
    1. Remote Desktop Services: Server and client requirements
    2. RDS Hardware Sizing and Capacity Planning Guidance.
    Hope it helps!
    Thanks,
    Dharmesh

  • Performance degradation with addition of unicasting option

    We have been using the multicast protocol for setting up the data grid between the application nodes, with the VM arguments
    -Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}
    As a certain node in the application was expected to be in a different subnet, where multicast was not feasible, we opted for well-known addressing, with the following additional VM arguments set on the server nodes (all in the same subnet)
    -Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}
    and the following on the remote client node, pointing at one of the server nodes
    -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}
    But this deteriorated performance drastically, both in pushing data into the cache and in getting events via a map listener.
    From the Coherence logging statements it doesn't seem that multicast is being used, at least within the server nodes (which are in the same subnet).
    Is it feasible to have both unicast and multicast coexist? How do we verify whether it is already set up?
    Is performance degradation with well-known addressing a limitation, and to be expected?

    Hi Mahesh,
    From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case, you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration, please provide additional details.
    Thanks,
    Mark
    Oracle Coherence
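
    For reference, a minimal sketch of such an override file (tangosol-coherence-override.xml); the addresses and port are placeholders, and the same file is deployed to every node:

    <coherence>
      <cluster-config>
        <unicast-listener>
          <well-known-addresses>
            <!-- A small, fixed subset of the server nodes; all nodes share this list -->
            <socket-address id="1">
              <address>192.168.1.101</address>
              <port>8088</port>
            </socket-address>
            <socket-address id="2">
              <address>192.168.1.102</address>
              <port>8088</port>
            </socket-address>
          </well-known-addresses>
        </unicast-listener>
      </cluster-config>
    </coherence>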

  • Performance degradation with -g compiler option

    Hello
    Our measurements of a simple program compiled with and without the -g option show a big performance difference.
    Machine:
    SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
    Compiler:
    CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
    #include "time.h"
    #include <iostream>
    int main(int  argc, char ** argv)
       for (int i = 0 ; i < 60000; i++)
           int *mass = new int[60000];
           for (int j=0; j < 10000; j++) {
               mass[j] = j;
           delete []mass;
       return 0;
    }Compilation and execution with -g:
    CC -g -o test_malloc_deb.x test_malloc.c
    ptime test_malloc_deb.x
    real 10.682
    user 10.388
    sys 0.023
    Without -g:
    CC -o test_malloc.x test_malloc.c
    ptime test_malloc.x
    real 2.446
    user 2.378
    sys 0.018
    As you can see, the performance degradation with "-g" is about 4x.
    Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
    This gives us the possibility of opening customer core files using the non-stripped executable.
    But our tests show that stripping does not restore the performance of an executable compiled without '-g'.
    So we are losing performance by using this compilation method.
    Is this expected behavior of the compiler?
    Is there any way to have the -g option "on" and not lose performance?

    In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation: you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
    If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
    If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
    If you are using C++, then -g will in SS12 switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get inlining and debug.
    HTH,
    Darryl.
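
    For what it's worth, a sketch of the combined approach Darryl suggests, using the file names from the original test: build optimised with debug info, keep an unstripped copy for opening customer core files, and ship the stripped binary.

    CC -O -g -o test_malloc.x test_malloc.c
    cp test_malloc.x test_malloc.x.debug    # unstripped copy, kept for core-file analysis
    strip test_malloc.x                     # stripped binary for shipment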
