Performance of D14 Server

I just started an instance of a D14 Azure server (100+ GB of RAM and 16 cores).
I started a standard test I use for hardware. I'm sorry to have to say this, but my desktop (16 GB of RAM, 8 cores) is performing faster.
Is the VM real in the sense that I actually have 100 GB and 16 cores, or am I actually sharing them with other people?

Hi,
I understand that you are not happy with the performance of the D14 VM.
Could you tell me how you are measuring the performance of the VM so that the issue can be narrowed down?
Looking forward to your reply.
Regards,
Sowmya

Similar Messages

  • How to increase the performance of  Weblogic server 7.0?

    How to increase the performance of WebLogic Server 7.0?
    Also, how do I avoid typing the server login and password every time I start the
    webserver?

    It depends on what is not running fast enough for you.
    As for avoiding typing the server login and password every time you start the webserver: in the startWebLogic shell script (.cmd or .sh) add:
    set WLS_USER=weblogic
    set WLS_PW=password
    (Replace "password" with whatever your password is.)
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "winston" <[email protected]> wrote in message
    news:3fe42d33$[email protected]..
    >

  • Heat affects performance of my server

    Hello,
    I have a Windows Server 2012 machine with 4 GB of RAM in a location that is not a datacenter,
    i.e. there is no cooling. Does this affect the performance of my server?

    Are you asking if a warmer temperature will affect the speed at which the processor runs?
    In general, the answer is no. However, you should talk to your hardware vendor to see if they have implemented any thermal controls. Modern server processors have the capability to run at different speeds, and this can be changed via software.
    The faster they run, the warmer the CPUs get. Your vendor may implement logic that says that if the system reaches a certain temperature, it will force the CPU to run at a lower speed. You need to ask your server vendor whether they have implemented anything like this.
    .:|:.:|:. tim

  • How to Improve the Performance of SQL Server and/or the hardware it resides on?

    There's a particular stored procedure I call from my ASP.NET 4.0 Web Forms app that generates the data for a report.  Using SQL Server Management Studio, I did some benchmarking today and found some interesting results:
    FYI SQL Server Express 2014 and the same DB reside on both computers involved with the test:
    My laptop is a 3 year old i7 computer with 8GB of RAM.  It's fine but one would no longer consider it a "speed demon" compared to what's available today.  The query consistently took 30 - 33 seconds.
    My client's server has an Intel Xeon 5670 Processor and 12GB of RAM.  That seems like pretty good specs.  However, the query consistently took between 120 - 135 seconds to complete ... about 4 times what my laptop did!
    I was very surprised by how slow the server was.  Considering that it's also set to host IIS to run my web app, this is a major concern for me.   
    If you were in my shoes, what would be the top 3 - 5 things you'd recommend looking at on the server and/or SQL Server to try to boost its performance?
    Robert

    What else runs on the server besides IIS and SQL Server? Is it used for anything other than the database and IIS?
    Is IIS causing a lot of I/O or CPU usage?
    Is there a maximum memory limit set on SQL Server? There SHOULD be, and since you're running IIS too you need to keep enough memory free for it.
    How is the memory pressure? Check the PLE counter and post the results:
    SELECT [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
    Check the error log and the Event Viewer; there may be something relevant there.
    Check the indexes for fragmentation and see whether the statistics are up to date (and enable trace flag 2371 if you have large tables with more than 1 million rows).
    Is there an antivirus present on the server? Have you added the SQL processes/services/directories as exceptions?
    There are a lot of unknowns; at a minimum you should run Profiler and post the results to see what goes on while you're getting slow responses.
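    A minimal T-SQL sketch of the memory-cap, fragmentation, and statistics checks mentioned above (the 30% threshold is just a common rule of thumb, not a value from this thread):
    -- Check the configured memory cap (the default 2147483647 means "unlimited"):
    SELECT name, value_in_use
    FROM   sys.configurations
    WHERE  name = N'max server memory (MB)';
    -- List indexes with noticeable fragmentation and show when their statistics
    -- were last updated:
    SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
            i.name                              AS index_name,
            ips.avg_fragmentation_in_percent,
            STATS_DATE(i.object_id, i.index_id) AS stats_last_updated
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    WHERE   ips.avg_fragmentation_in_percent > 30   -- common rebuild/reorganize threshold
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    Run it in the context of the application database; anything it flags is a candidate for a reorganize/rebuild or an UPDATE STATISTICS.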
    "If there's nothing wrong with me, maybe there's something wrong with the universe!"

  • Performance in the Server with the created report - Crystal Report 2008

    Hi
          We have developed a complex report and deployed it on the server. It takes a long time to process, and once the report has been created, expanding a group or navigating to a different page causes the same delay. Sometimes it takes more than a minute, which really kills the performance. My client has accepted the delay while the report is being generated, but they question why it takes so much time just to display the already created report.
      I tried keeping the created report in Session and rebinding it on Page_Load, and I also stored a copy on the server and rebound it whenever a postback happens. Both methods behave similarly, with no improvement in performance. Is there any way to eliminate this delay? (We have an idea: store the created report on the client machine and, whenever a postback happens, bind it from the client machine instead of hitting the server again; is this achievable?)
    Any help would be appreciated.
    Thanks

    Hi
    What version of CR are you using?
         Crystal Reports 2008; I think it's in the subject of this thread itself. (Version 12.0.0.683)
    What if any CR SP have you applied?
        No
    What version of .NET are you using?
      .NET 3.5 using C# (.NET Framework 3.5)
    What database are you using?
        SQL Server 2008
    What connection type are you using?
       We have created a Stored Procedure and connected the SP to the crystal report.
    What are you comparing the performance to? E.g.; is it faster somewhere else? How much faster?
        No, we didn't compare it anywhere, but my question is about avoiding the delay when navigating pages or selecting an item in the group tree. The report contains 100+ pages, and the client accepts a delay while the report is being created, but not during navigation or selection. Why does the already created report take so much time (even minutes) to render on navigation or selection, and how can we avoid that delay?
    Define "complex report ".
    Nothing complex in the report; we have used only a few formulas (fewer than 5), but charts are included for each group (we have 6 groups in the report).
    Thanks

  • Performance Problem Database Server on Solaris 10 5/08 (Update 5)  v890 Box

    Hello,
    I am having performance problems on a Solaris 10 5/08 (Update 5) production server. Below I have included detailed information about the system and some “iostat” and “vmstat” reports:
    Sun Fire V890
    Ram – 32 GB
    Physical CPU – 8 (Logical 16)
    Application – Oracle 10G
    Raid 1
    1) iostat report
    extended device statistics tty
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout
    md100 0.3 7.0 2.4 7.0 0.0 0.2 38.5 3 4 0 0
    2/md3100 205.0 27.4 23099.6 472.2 0.0 2.5 10.9 0 79
    2/md3200 225.3 27.3 24993.8 577.3 0.0 2.6 10.5 0 81
    2/md3300 295.8 25.5 31538.0 412.5 0.0 3.1 9.8 0 84
    2/md3400 198.3 25.6 21032.9 423.7 0.0 2.2 9.8 0 65
    2/md3500 55.0 313.0 2397.5 2425.9 0.0 1.2 3.2 0 93
    2/md3600 0.1 19.2 0.8 1440.0 0.0 0.1 5.4 0 10
    2/md3700 3.1 0.8 48.8 35.7 0.0 0.0 9.3 0 4
    2/md3800 300.4 0.7 30651.9 69.4 0.0 2.6 8.7 0 92
    2/md3900 359.9 0.7 35876.8 56.6 0.0 3.3 9.1 0 95
    2/md4100 0.6 512.1 9.6 1815.8 0.0 1.2 2.4 0 70
    2/md4200 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    2) vmstat report
    kthr memory page disk faults cpu
    r b w swap free re mf pi po fr de sr 2m 2m 2m 2m in sy cs us sy id
    1 20 0 39126264 11482648 2570 5039 16336 14 14 0 1 41 38 47 61 71072 41785 30463 51 14 35
    0 16 0 40580912 9157288 273 2366 0 8 7 0 0 198 310 251 192 49207 47702 29823 39 8 53
    0 16 0 40588992 9163784 187 1392 0 8 8 0 0 251 278 212 163 47310 44840 27821 31 7 62
    0 12 0 40597208 9171640 624 3168 0 2 2 0 0 298 329 209 245 49150 44768 26264 29 8 62
    0 13 0 40577400 9158032 425 4452 0 9 8 0 0 177 181 331 292 49149 42545 25544 31 8 61
    0 14 0 40576680 9156328 868 6027 0 8 7 0 0 161 234 259 327 48726 41187 26184 26 9 65
    0 13 0 40567952 9151360 1067 7302 0 3 3 0 0 254 386 256 160 50388 45422 26596 31 10 60
    0 13 0 40565160 9150880 838 6582 0 9 8 0 0 257 289 236 281 49697 45190 26925 31 10 60
    0 12 0 40568616 9153128 640 4880 0 11 10 0 0 334 206 214 214 48738 43552 26431 27 9 65
    0 12 0 40581696 9163248 799 5895 0 2 2 0 0 426 273 138 226 47831 41873 26301 30 9 61
    0 11 0 40572096 9157896 1087 7138 1 10 9 0 0 337 163 220 305 53124 55371 27933 45 11 44
    0 18 0 40520032 9123424 868 5946 0 10 9 0 0 222 218 170 249 51322 49556 27867 40 10 50
    0 17 0 40528544 9130112 481 3257 0 1 1 0 0 276 269 145 316 56103 44359 27645 39 9 51
    0 15 0 40521776 9126208 490 3174 0 10 8 0 0 240 305 226 222 55839 43464 27003 42 10 48
    0 15 0 40491176 9101072 769 4149 0 8 8 0 0 297 362 150 317 59718 55624 34333 43 11 46
    0 17 0 40603696 9183224 785 4364 0 2 2 0 0 314 234 281 238 62990 67554 39122 43 12 45
    0 19 0 40622592 9215816 711 5308 1 12 12 0 0 390 167 252 283 65340 60514 30525 45 12 44
    0 17 0 40662248 9276136 767 5113 0 10 8 0 0 218 280 298 221 63734 53314 31029 43 11 45
    3) SAR CPU Utilization report
    12:30:51 %usr %sys %wio %idle
    12:31:01 41 9 0 50
    12:31:11 43 11 0 46
    12:31:21 42 11 0 47
    12:31:31 44 12 0 44
    12:31:41 42 11 0 47
    Average 42 11 0 47
    Does anybody have any comments regarding these reports?
    Thanks for your help,
    Srikanta Sanpui

    A suggestion: if you use the code tags with your output it will be a great deal easier to read.
    extended device statistics tty
    device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout
    md100 0.3 7.0 2.4 7.0 0.0 0.2 38.5 3 4 0 0
    2/md3100 205.0 27.4 23099.6 472.2 0.0 2.5 10.9 0 79
    2/md3200 225.3 27.3 24993.8 577.3 0.0 2.6 10.5 0 81
    2/md3300 295.8 25.5 31538.0 412.5 0.0 3.1 9.8 0 84
    2/md3400 198.3 25.6 21032.9 423.7 0.0 2.2 9.8 0 65
    2/md3500 55.0 313.0 2397.5 2425.9 0.0 1.2 3.2 0 93
    2/md3600 0.1 19.2 0.8 1440.0 0.0 0.1 5.4 0 10
    2/md3700 3.1 0.8 48.8 35.7 0.0 0.0 9.3 0 4
    2/md3800 300.4 0.7 30651.9 69.4 0.0 2.6 8.7 0 92
    2/md3900 359.9 0.7 35876.8 56.6 0.0 3.3 9.1 0 95
    2/md4100 0.6 512.1 9.6 1815.8 0.0 1.2 2.4 0 70
    2/md4200 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0

  • Improving CLR performance in SQL Server (redux)

    I have been spending a lot of time trying to eke out the maximum performance from a C# CLR UDF. I have already set IsDeterministic and IsPrecise to true, as well as SystemDataAccessKind.None and DataAccessKind.None.
    I am now experimenting with the overhead of transferring to CLR.  I created a simple CLR UDF that just returns the input value, e.g.,
    [Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlString MyUDF(SqlString data)
    {
        return data;   // identity function: just returns its input
    }
    Defined as:
    CREATE FUNCTION dbo.MyUDF(@data nvarchar(4000)) RETURNS nvarchar(4000) WITH EXECUTE AS CALLER
    AS EXTERNAL NAME [MyAssembly].[UserDefinedFunctions].[MyUDF];
    I then use the UDF in a View on the Primary Key (nvarchar) of a table with about 6M rows.
    I know there is a small overhead going through a View versus a Table. However, when I query through the table, it is about 2000% faster than querying through the View with the CLR UDF.  E.g., 3 seconds for the table and 60 seconds for the view! I checked
    the Query Plans for each and they are both using Parallelization.
    I have to assume that all the overhead is in the transition to CLR.  Is that much overhead to be expected?  Is there any way to improve that?
    Incidentally, this is a followup to this question:
    http://stackoverflow.com/questions/24722708/sql-server-clr-udf-parallelism-redux

    Assuming that a way is found to reduce this apparent overhead, what is the intended operation within the function? I ask because the advantages of SqlChars over SqlString might be moot if you will need to operate on the full string all
    at once as opposed to reading it as a stream of characters.
    Also, with regard to why the CLR UDF is so much faster than the T-SQL version: some of it certainly could be the ability to participate in a parallel plan, but a change was also made in SQL Server 2012 that improved the performance of deterministic CLR functions:
    Behavior Changes to Database Engine Features in SQL Server 2012
          Constant Folding for CLR User-Defined Functions and Methods
          In SQL Server 2012, the following user-defined CLR objects are now foldable:
    Deterministic scalar-valued CLR user-defined functions.
    Deterministic methods of CLR user-defined types.
          This improvement seeks to enhance performance when these functions or methods are called more than once with the same arguments.
    Also, 60 seconds down to 3 seconds is a 95% improvement, not 2000%.  Or you could say that the operation is 20x faster without the UDF.
    Now, outside of that, I recall seeing in another forum that someone was converting their string to VARBINARY using SqlBinary / SqlBytes and then returning VARBINARY and converting it back in T-SQL. Might be worth a test.
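    A rough T-SQL sketch of that test, assuming a hypothetical dbo.MyUDF_Bin variant of the UDF declared with VARBINARY(8000) in and out (the table and column names below are placeholders too):
    -- Round-trip the nvarchar key through varbinary so the CLR boundary is
    -- crossed with binary data, then convert back to nvarchar in T-SQL.
    SELECT CAST(dbo.MyUDF_Bin(CAST(t.KeyColumn AS VARBINARY(8000))) AS NVARCHAR(4000)) AS result
    FROM   dbo.MyTable AS t;
    Timing this against the existing NVARCHAR version would show whether the string marshalling is really where the time goes.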

  • Performance tuning content server

    Hi,
    I would like to profile my UCM-based application. I am using Oracle Content Server 10g and have custom Java components for my business functionality. I would like to profile this application to identify performance bottlenecks. Is there any profiling tool on the market that can be plugged into / integrated with Oracle Content Server (UCM 10g)?
    What are the general tips for performance tuning custom components?
    Thanks in advance
    Siva

    Some information can be found in the following document on Metalink :
    Use VisualVM Tools to Troubleshoot UCM and Observe Performance Problems Occurring in Java Virtual Machine (Doc ID 950621.1)

  • Performance of the server

    Hi SDNers,
    I am facing a performance issue.
    My repository has 2.15 million records. In June the memory usage was 25%.
    Now in August it is consuming 39% of memory.
    Can anybody tell me how I can analyse and reduce the memory usage of the server?
    Thanks
    Ravi

    Hi Ravi,
    Can you please check this document for the same:
    https://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/10c4cd5f-6893-2a10-a2b0-f9cb3cd38a6f&overridelayout=true
    Regards,
    ---Satish

  • How to solve bad performance in vi server

    I'm using VI Server in a recursive programming structure in LabVIEW 7.1.
    I get very bad performance results, so I made a test comparing the speed of the various VI Server call types against a native call.
    My tests are:
    normal (native) calling
    VI Server calling by Call By Reference node
    VI Server calling by Invoke node
    For each VI Server calling type I tried:
    calling with a unique reference
    calling with different references
    calling with different, preloaded references
    The only case where I found an acceptable loss of performance (24% compared with normal calling) was VI Server calling by Call By Reference node with a unique reference.
    Unfortunately this kind of calling catches some kinds of recursion and forbids them, and it always waits for the called VI to finish executing.
    All the other kinds of calling have very bad performance: from 90% to 99.9% less than normal calling (10 to 1000 times longer execution time).
    In addition there is a problem with Invoke node calling: if you execute an Invoke node with "wait until done" set to true, the mouse pointer flickers with the busy icon. That happens in all possible configurations.
    I would like to know if someone has a solution to this performance problem and whether LabVIEW 8.0 fixes these problems.
    Best regards,
    ing. Luca Benvegnù

    Hi Luca,
    I've done some research for you, and what I can say is that there are a number of requests about "VI Server performance". There are some things that are generally suggested in this situation:
    1) Be "minimalist" about the use of images, graphs, etc., and avoid large data structures: wiring large structures into/out of sub-VIs can take time, so pass the smallest amount of data.
    2) Avoid repeated calls. Calling the sub-VIs frequently, in a loop for example, creates call overhead that could be an issue. For the same reason, put a wait function with a specified time in the loop so that it runs only as often as strictly required.
    However, regarding your question about the problem being fixed in LabVIEW 8.0, I suggest you download the evaluation version of LabVIEW 8.2 and try for yourself whether performance is better.
    I hope I've been of some help.
    carlo>  

  • Performance of BI Server joining multiple of Essbase Cubes

    Hi Guys
    I would be very grateful if you could give me some direction on the following issue: I have multiple cubes, and a user wants to create reports using information from more than one cube. I have two choices:
    The first choice is to use Oracle BI Server and join the cubes.
    The second choice is to create separate cubes depending on user requirements.
    As you know, the second method is not as elegant as the first, but I am worried about the performance of the first choice because the aggregation now has to be done in the BI Server rather than in the cube (which is what the cube is good at). The reports are created dynamically. Any ideas, please?
    Regards
    Chandra

    Depending on the requirements, you could use FR (Financial Reporting) with multiple hidden grids that are joined in a visible grid, or Web Analysis with multiple grids, or OBIEE+, or Excel with multiple retrievals, or a third-party tool like Doceca to bring it together.

  • The Performance of Developer Server ??

    Hi:
    In August, Rajs asked: is WebForms only for intranet use and NOT the Internet?
    Now I urgently need answers and details for this question; the situation is the same at my work. Rajs wrote:
    I have deployed a Developer 6.0 application on the web
    (Internet). I am using Application Server 4.0.7.1 on Windows NT
    SP4. Of course and also JInitiator.
    On a 128 MBPs Internet line The time to access www.yahoo.com was
    around 5 seconds. The time taken to download the Employee
    Details Applet was around 6 minutes. This is certainly not
    acceptable.
    I want to prove that Developer WebForms is also deployable and
    can be used on the Internet as well as the Intranet. I was sadly
    disappointed.
    I have not tweaked any parameters on the Application Server, so it is possible I am missing something. Can anybody please comment on this?
    Regards
    Rajs

    Accessing external databases with RFC-based middleware is OK, but you usually get the best performance by accessing the external database directly from within the SAP ABAP server using the "DB multiconnect" feature. This allows ABAP programs to read from and write to databases other than the primary database. ABAP knowledge is of course required.
    SAP has several products that use this feature and that may already fit your needs, e.g. BI (also known as BW) and SCM (also known as APO).

  • Optimizing tempdb performance in SQL Server 2012

    SQL Server 2008 R2 BOL contains a topic on optimizing tempdb: SQL Server 2008 R2 Books Online > Database Engine > Development > Designing and Implementing Structured Storage > Databases > Optimizing Databases > Optimizing tempdb Performance.
    I have been unable to locate a comparable topic in BOL for SQL Server 2012.  Is this type of performance tuning for tempdb no longer necessary in SQL Server 2012?

    Is this type of performance tuning for tempdb no longer necessary in SQL Server 2012?
    That assumption is incorrect; this tuning is still needed. It looks like the topic is simply missing from the SQL Server 2012 BOL. I will alert the author and report back if needed. In the meantime, you can follow the same guidance for 2012 as well.
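    For reference, a minimal sketch of the usual tempdb recommendations (multiple, equally sized, presized data files); the logical names, sizes, and path below are placeholders, so adjust them to your workload and drive layout:
    -- Presize the existing tempdb data file and add a second, equally sized one
    -- so allocation contention is spread across files (one file per logical core,
    -- capped at about 8, is a common starting point).
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2,
                  FILENAME = N'T:\TempDB\tempdev2.ndf',   -- placeholder path
                  SIZE = 4096MB, FILEGROWTH = 512MB);
    Note that tempdb is recreated from these file definitions at every service restart.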
    Balmukund Lakhani
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Mac performance with Exchange Server 2010 SP1

    Hi All,
    I have an issue with Mac OS X 10.7.2 performance when setting up a new email connection to Microsoft Exchange Server.
    1) Use Apple Mail to create a new email account for an Exchange account using Autodiscover. Note how long it takes. In my case, it takes almost two full minutes or more until it comes back and is fully set up (and starts synchronizing the server folders).
    2) Now use Apple Mail to create a new email account MANUALLY: just don't fill in the password, and keep hitting Continue until it shows the POP settings. Select EXCHANGE and then fill in the appropriate details (mail.mydmain.com, check SSL enabled, etc.). Setting up an account this way takes only a few seconds.
    3) Use Microsoft Outlook 2011 for Mac to create a new email account for Exchange using Autodiscover. It takes a few seconds to complete.
    I can summarize that:
    - Using Apple Mail to create a new email account for an Exchange account using Autodiscover: slow
    - Using Apple Mail to create a new email account manually: normal
    - Using Microsoft Outlook 2011 for Mac to create a new email account for Exchange using Autodiscover: normal
    In any case, do you have any solution to fix this issue?

    Hi,
    I'd like to know how you set up "send on behalf": did you grant delegation permission from Outlook, or grant the "send as" right from Exchange Server?
    Please try using Get-ADPermission to verify whether the user has the "send as" right.
    Manage Send As Permissions for a Mailbox
    http://technet.microsoft.com/en-us/library/bb676368.aspx
    Note: after you grant the "send as" right, please try restarting the Microsoft Information Store service.
    By the way, do you receive any NDR or error information when you send on behalf of others?
    Xiu

  • Write performance in Directory Server 5.0

    Hi,
    is it possible to generate around 350 updates / second with IDS 5.0 ?
    I haven't chosen any hardware yet, because I can't find anything
    on how to size a Directory Server for write performance.
    Does anyone have experience with write performance and how it scales
    with more CPU / RAM?
    Thanks,
         Sascha
    Sascha Hemmerling eMail:
    [email protected]
    Dweerkamp 13
    24247 Mielkendorf                         Tele: +49-4347-713258

    Were you trying to create a new index and then reindex the database? If so, did you check the free space on your database filesystem? It mentions a space problem for the database after reindexing.
