Quickie Benchmark: Modelsim vs. Vivado Simulator

Guys,
In the hope of finding a faster simulator, I ran an unscientific benchmark on a portion of my design.
I ran for 100,000 clock cycles, which is enough to get a few hundred result samples. Both simulators were run in interactive mode with the wave window open, in their standard configurations without speed optimizations. My design is FFT-core heavy. Input and output are via textio.
Run Times:
Vivado Simulator = 3 minutes 0 seconds
Modelsim = 4 minutes 40 seconds
Load times (startup) were about the same for both simulators.
My conclusion is that I cannot reduce simulation times dramatically by just switching from Vivado Simulator to Modelsim.
 

I have some more info on the long simulation time for FFT heavy simulations.
Early in the development process I wanted to be able to display memory contents while debugging.  By default Vivado Simulator does not provide visibility into memories.  You have to set a property in order to make memory contents visible.
set_property -name {xsim.elaborate.debug_level} -value {all} -objects [current_fileset -simset]
I suspected that turning on this property was slowing down my simulation so I ran the same simulation with it enabled and with it commented out.
Simulation time in my small experiment was reduced from 40 seconds to 8 seconds by commenting out this tcl command, a factor of 5 improvement.  I don't know if this improvement scales to long simulations but I suspect so.
Beware of this setting when running long simulations.
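For reference, the property can be toggled from the Tcl console. Full visibility is the command above; to go back for long runs you set the property to the lower debug level (I believe the default value is named "typical", but check your Vivado version):

```tcl
# Full debug visibility: memory contents viewable, but simulation is slower
set_property -name {xsim.elaborate.debug_level} -value {all} -objects [current_fileset -simset]

# Revert to the default level for long simulations (default value name assumed to be "typical")
set_property -name {xsim.elaborate.debug_level} -value {typical} -objects [current_fileset -simset]
```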
 

Similar Messages

  • Vivado simulator is hanging

    Hi,
    I have begun to learn VHDL because in the near future I will program an FPGA. I started with ISE. From a tutorial, I got this program: 
    library IEEE;
    use IEEE.STD_LOGIC_1164.ALL;
    use IEEE.NUMERIC_STD.ALL;
    entity TestLED is
        Port ( clk : in  STD_LOGIC;
               led : out  STD_LOGIC);
    end TestLED;
    architecture Behavioral of TestLED is
    signal c:integer:=0;
    signal x:STD_LOGIC:='0';
    begin
        process begin
            wait until rising_edge(clk);
            if(c<24999998) then c <= c+1;
            else
                c <= 0;
                x <= not x;
            end if;
        end process;
        led <= x;       
    end Behavioral;
    ISE created this test bench:
    LIBRARY ieee;
    USE ieee.std_logic_1164.ALL;
    ENTITY tb_TestLED IS
    END tb_TestLED;
    ARCHITECTURE behavior OF tb_TestLED IS
        -- Component Declaration for the Unit Under Test (UUT)
        COMPONENT TestLED
        PORT(
             clk : IN  std_logic;
             led : OUT  std_logic
             );
        END COMPONENT;
       --Inputs
       signal clk : std_logic := '0';
         --Outputs
       signal led : std_logic;
       -- Clock period definitions
       constant clk_period : time := 10 ns;
    BEGIN
        -- Instantiate the Unit Under Test (UUT)
       uut: TestLED PORT MAP (
              clk => clk,
          led => led
       );
       -- Clock process definitions
       clk_process :process
       begin
            clk <= '0';
            wait for clk_period/2;
            clk <= '1';
            wait for clk_period/2;
       end process;
       -- Stimulus process
       stim_proc: process
       begin        
          -- hold reset state for 100 ns.
          wait for 100 ns;    
          wait for clk_period*10;
          -- insert stimulus here
          wait;
       end process;
    END;
    With the ISE simulator and a simulation time of 1 second, I quickly got my results. With Vivado 2015.2 on 64-bit Windows, I wasn't able to create a test bench automatically, so I added the file that ISE had created. With the Vivado simulator, even a few hundred milliseconds of simulated time took a huge amount of real time, and during that time my CPU usage was often at 0%. When I cancelled the simulation, it did not stop, and after a while I got a black window, so I had to shut down Vivado with the task manager. I never waited long enough for Vivado to finish a 1-second simulation. Is something wrong with my programs, with Vivado, or with the configuration of Vivado?
    Many greetings,
    Andreas

    Hi Bharath,
    I think you misunderstood me. Running ISE and pressing "Run All", the simulation runs indefinitely. Doing the same in Vivado, the simulation stops after some hundreds of milliseconds without any interaction from my side (CPU usage goes to 0%, the nanoseconds stop increasing, "Cancel" shows at the console as if it had been called, but the simulation does not end). I also found some strange behaviour: if I press "Zoom Fit", the simulation continues. If I press something other than "Zoom Fit", I may get a black window and have to shut the application down with the task manager. So I have come to the conclusion that there may be a bug in Vivado related to the graphics engine, perhaps for particular graphics cards (my adapter: AMD Radeon HD 7770), or there is something wrong with my system that I still haven't been able to find.
    Many greetings,
    Andreas

  • Cannot launch vivado simulator 2015.1: behav/compile.bat' script "Please check that the file has the correct 'read/write/execute' permissions"

    Hi,
    I'm trying to run a verilog simulation using the vivado simulator 2015.1 on Windows 7.
    I get the following error when I attempt to launch simulation:    
    ERROR: [USF-XSim-62] 'compile' step failed with error(s) while executing 'D:/projects/axi/axi_test_system/axi_test_system.sim/sim_1/behav/compile.bat' script. Please check that the file has the correct 'read/write/execute' permissions and the Tcl console output for any other possible errors or warnings.
    The tcl console repeats the same message, "Please check that the file has the correct 'read/write/execute' permissions"
    I cannot find any problem with the permissions. I believe that Windows will always execute a .bat file. Within the same project, I can run elaboration, synthesis and implementation without problems. 
    Any idea why the simulation compile script won't run?
    Thanks,
    Ed

    Hi,
    Thanks very much for your detailed reply. These were the right questions based upon what I told you.   
    However, I took the code home last night and ran it on my webpack 2014.2 release.   It still failed, but I got completely different error messages.   These messages correctly pointed me to an undeclared signal in my testbench. Once fixed, the compile worked and the simulator launched. 
    This morning, I fixed the signal name in my 2015.1 setup, and it also compiled and launched correctly. 
    So, the problem wasn't actually related to file permissions.  It seems like the 2015.1 error message may be broken compared to 2014.2.  
    I was running the Vivado GUI, clicking on "Simulate > Run Behavioral Simulation"
    Thanks again for your help. 
    Regards,
    Ed  
      

  • Simulation error : size mismatch in mixed language port association with VIVADO simulator

    Hi,
    I have instantiated a VHDL module in a verilog top file. When I tried to simulate the verilog top, I received the following error:
    ERROR: Size mismatch in mixed language port association, vhdl port vid_data
    (Simulation tool: VIVADO simulator. VIVADO ver: 2015.1)
    // Following is the instantiation of  VHDL module in verilog top file
    VPS  VPS_inst (
         .clk (VPS_clk),
         .reset_n(~user_reset),
         .vid_active_video(data_valid),
         .vid_data(data_to_mem)
    );
    The port 'vid_data' is declared in the VHDL module as std_logic_vector (15 downto 0)
    "vid_data   : out std_logic_vector(15 downto 0)"
    'data_to_mem' is declared in the verilog top file as "wire  [15:0]   data_to_mem".
    No size mismatch actually exists, but I am getting the above error in simulation.
    I have searched for similar threads, but nothing was useful.
    Does anyone know how to solve this?
    Thanks and Regards
    Raisa
     

    You might also get this error if you mis-spelled "data_to_mem" such that the declaration did not match the instantiation port map.  For example:
    wire  [15:0] data__to_mem;  // double underscore before "to"
    VPS  VPS_inst (
         .clk (VPS_clk),
         .reset_n(~user_reset),
         .vid_active_video(data_valid),
         .vid_data(data_to_mem)  // only one underscore before "to"
    );
    In Verilog this is not an error unless you disable automatic net inference: Verilog happily creates a single 1-bit wire named data_to_mem, and you end up attaching a 1-bit wire to a 16-bit port. That too is valid in Verilog, but it is not allowed for connections to VHDL.
    I typically avoid this sort of error by placing:
    `default_nettype none
    at the top of each Verilog file, and
    `default_nettype wire
    at the bottom of each Verilog file.  This prevents the automatic creation of wires when you mis-spell or forget to declare nets.
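    As a sketch of that pattern (the module and signal names here are made up for illustration):

```verilog
`default_nettype none   // from here on, undeclared identifiers are compile errors

module example (
    output wire [15:0] dout
);
    wire [15:0] data_to_mem;      // must now be declared explicitly

    // A typo such as "data__to_mem" below would fail to compile instead of
    // silently inferring a new 1-bit wire.
    assign data_to_mem = 16'h1234;
    assign dout        = data_to_mem;
endmodule

`default_nettype wire   // restore the default for files compiled after this one
```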

  • Anyone using an SSD under Arch? Please post a quick benchmark

    I'm considering an SSD for my Linux root and /home.  If you're using an SSD under Arch, can you please run a quick benchmark for me and post the results in this thread?  You can use hdparm to do a quick read test.  Here is an example using my HDD.
    # hdparm -Tt /dev/sda
    /dev/sda:
    Timing cached reads: 14500 MB in 2.00 seconds = 7258.88 MB/sec
    Timing buffered disk reads: 368 MB in 3.01 seconds = 122.16 MB/sec
    Beyond that, have you noticed any substantial speed ups over an HDD, like launching programs, booting the system, etc.

    Keep in mind that hdparm does a sequential read test, which is hardly typical of operating-system workloads, so IDE-mode flash devices may look better in it than they really are.
    Typical performance from IDE-mode flash devices is around 45 MB/s, though in practice hdparm may show a bit less. Two IDE-mode flash drives in RAID 0 give me just under 90 MB/s. My use is quick-access storage, mostly read-only. When first installed, 90 MB/s was possible, but within a few weeks it dropped to the mid-80s even with read-only use.
    Performance lessens over time if the units are used as SSDs; that is normal wear and tear. Many articles cover the subject and the methods used to minimize the effects.
    Much is happening with SSDs, and the expectation is that in a few years they will be in wide use. That remains to be seen!
    You may be aware of all this, so excuse me if so!

  • Can we have a flush cache button in Vivado simulator PLEASE

    I'm regularly hitting a problem where the files Vivado seems to be working on are not the files edited externally,
       no matter how long I leave Vivado to catch up.
    I just had one where I went for lunch for 30 minutes and came back to the same symptoms.
        The way to 'fix' this, once you notice it, is to exit Vivado, delete the cache and sim folders, and restart Vivado.
    What I'm doing is a lot of testing / what-if work in the Vivado simulator:
    lots of changing a few bits of code, then running the simulator to see the changes in the GUI and in the files written out;
       a lot of what-if stuff.
    So yes, I'm pressing the "relaunch simulator" button a lot.
    Files are edited outside of Vivado; in fact the source VHDL files are not on the Vivado machine, and the editing is done from another machine that the source is stored on.
    The symptoms are: you make a change to the RTL and press relaunch, and the output of the sim looks like it has not changed, when you expected it to have.
    You make a few more significant changes and relaunch, and again the output has not changed;
       Vivado is using the files it has 'in memory'.
    If I quit, clear the cache, and relaunch, I find that the simulator reports an error.
        This last time I had missed a ';' off the end of a line in the VHDL.
    Obvious, BUT
       while I was just relaunching, Vivado was NOT flagging that up; it carried on and gave us a new GUI, but using the old RTL.
    It just seems that the cache gets out of sync, yet the Vivado simulator appears to compile all files every time it relaunches.
       Confusing.
     

    Yes, I think this makes sense. I will check if we can file an enhancement request for this.
    Thanks for bringing this to us.
    Regards
    Sikta

  • Aperture 3: A quick benchmark for buying decision

    I'm in the unlucky situation that my workhorse machine for Aperture is still a well-equipped PowerMac Quad G5. I can't use Aperture 3 on this machine, as Aperture 3 is Intel-only.
    I've read several posts on boards discussing Aperture 3 saying that Apple did right in abandoning PPC code, because the G5 would simply not be powerful enough to run Aperture 3 efficiently.
    So I did a small test: I installed the Aperture 3 trial on my MacBook Pro, which is officially supported by Aperture 3, and exported 34 heavily edited RAW pictures to 50% downscaled JPEGs.
    Here is the result:
    Quad G5 / Aperture 2: 2:14 Min
    MacBook Pro 2.4 GHz early 2008 / Aperture 3: 3:14 Min
    So what is the conclusion?
    1.) A Quad G5 would have been more than powerful enough to run Aperture 3 if Apple had made it a universal app. Sadly they decided against it, and I have to live with that decision.
    2.) Using the MacBook Pro with Aperture 3 is no option for me, and not only due to the limited display size and hard disc size/speed. According to this benchmark, it would be a serious performance downgrade compared with the G5 Quad/Aperture 2 combo I use now.
    3.) A 2.66 GHz Quad-Core Mac Pro (the cheapest Quad G5 successor) would run Aperture 3 and would probably cut the time for my benchmark in half (given 4 cores vs. 2 on the MBP). That would mean roughly 1:40 min to complete the test. Yes, faster than 2:14 min, but frankly not by that much.
    If I spend a lot of money to replace my Quad G5 with an Aperture 3-compatible desktop Mac, I expect it to be noticeably faster than my current machine - or I'll stick with the G5 and Aperture 2 for another year.
    So my question:
    Is there someone with a fast Mac Pro and/or an i5 or i7 iMac who still has access to a Quad G5 or a 2.4 GHz MacBook Pro, and who could run a similar benchmark to compare with?
    Peter

    It shouldn't be a surprise that, years after switching to Intel, Apple now develops software for Intel machines. It doesn't develop software for my old Apple //e with a 6502 chip, either. I've moved on, and so has Apple.
    William, no offense, but people often speak as if PPC on the Mac platform had been superseded by Intel CPUs almost a decade ago, so that it has to be expected by now that the architecture isn't supported any more.
    The truth is:
    - It wasn't until August 2006 that the crown for the "fastest Mac ever" passed from the Quad G5 to the Mac Pro.
    - And it wasn't until March 2007, with the release of Adobe CS3, that the Quad G5 lost the crown as the fastest machine for running Photoshop, InDesign etc.
    So until March 2007 the Quad G5 was the best performing machine for creative professionals.
    In August 2009 Apple dropped PPC support with the release of Snow Leopard and, as we now know, for all future Pro apps as well. No surprise that other major software vendors like Adobe quickly followed Apple's route and abandoned PPC support too, as we see in the Lightroom 3 beta and the next version of CS.
    So it took just about 2 years for the Quad G5 to go from the best performing machine for creative professionals to a machine totally unable to run current software for creative professionals.
    While I fully understand that Apple doesn't want to waste resources coding for a platform they no longer sell, the often-praised longevity of Mac systems compared to Windows systems is really a joke in this special case. My 6-year-old Windows box is better supported by current software than my now 5-year-old Quad G5.
    Ok, let's stop here with the discussion (whining) about the drop of PPC support. Apple's decision is made and it is final, so it is time to move forward.
    My lesson from this story: I'll never buy legacy hardware from Apple again. They are just "too innovative" to make good use of legacy hardware over its usual lifetime.
    Peter

  • How to show waveform in Vivado simulation window? The waveform is absent for some signals.

    Hi,
    I have the following verilog code:
    `timescale 1ns / 1ps
    module top();
    reg clk = 1;
    always
    #1.25 clk <= ~clk;
    wire temp;
    //wire temp2; // added later
    assign temp = clk;
    //assign temp2 = clk; //added later
    endmodule
     When I simulate this, I see "CLK" as expected, but "temp" just has the value 1 and no waveform. When I uncomment the lines marked "//added later", I can see the "CLK" and "temp2" waveforms as expected, but "temp" is still stuck at 1 with no waveform.
    Then I commented out the lines marked "//added later" again, and again there was no waveform for "temp". Then I renamed "temp" to "temp2" and simulated; this time I could see the "CLK" and "temp2" waveforms toggling as expected in the simulation window.
    It could be that I'm just not seeing the waveforms. For example, in the first scenario I'm just seeing a "1" and no waveform. There are also some other signals that don't show a waveform when added to the wave window. How can I enable the waveform?
    Thanks,
     

    I tried using a separate module for clock generation as you suggested. In my top-level module I instantiate two clocks, "CLK_20MHz" and "CLK_25MHz". When I simulate, I can see the waveforms of both clocks. Then, from the clock module, I add the register "CLK"; however, I see no waveform for "CLK". But when I add "$monitor("%d,\t%b",$time, CLK);" inside the clock module, I can see in the log that "CLK" is actually toggling as expected. I don't know why the simulation window doesn't show the waveform. How can I show/enable it?

  • Vivado simulation - block memory module failure

    Hi,
    I'm simulating a project with my own IPs. one of the IPs has a block memory generator (8.2).
    The simulation stops after 35 ns and the Tcl console shows this message:
    Block Memory Generator module TOP030815.design_1_i.golgol_0.U0.blk_mem_gen_GOLAY_inst.inst.native_mem_module.blk_mem_gen_v8_2_inst is using a behavioral model for simulation which will not precisely model memory collision behavior.
    Failure: ERROR:add_1 must be in range [-1,DEPTH-1]
    Time: 35 ns Iteration: 2
    $finish called at time : 35 ns : File "../../../project_1.srcs/sources_1/ipshared/ornim.medical/golgol_v1_0/a6138b30/hdl/golgol_v1_0.vhd" Line 93
    xsim: Time (s): cpu = 00:00:05 ; elapsed = 00:00:34 . Memory (MB): peak = 980.820 ; gain = 37.723
    INFO: [USF-XSim-96] XSim completed. Design snapshot 'TOP030815_behav' loaded.
    What is the problem? 
    Thanks,
    Danna

    Did you use the AXI4 interface for the block memory generator IP?
    This kind of failure is typically seen when the master and slave AXI signals are not initialized, e.g. when the slave TVALID and TDATA are in a 'U' or 'X' state.
    First examine the testbench (or the AXI bus drivers) and ensure they are all initialized.
    For instance,
      -- Data slave channel signals
      signal s_axis_data_tvalid              : std_logic := '0';  -- payload is valid
      signal s_axis_data_tready              : std_logic := '1';  -- slave is ready
      signal s_axis_data_tdata               : std_logic_vector(15 downto 0) := (others => '0');  -- data payload
      -- Data master channel signals
      signal m_axis_data_tvalid              : std_logic := '0';  -- payload is valid
      signal m_axis_data_tdata               : std_logic_vector(23 downto 0) := (others => '0');  -- data payload

  • Vivado 2015.2: Simulation and synthesis reverse bit order of std_logic_vector in logical operators

    In both simulation and synthesis the logical operators on std_logic_vector bit-reverse the operands in the result, at least in the case where the result of the expression is passed to a function.
    I suspect this issue applies to other operators as well, though I have only tested the problem with the logical operators. I also suspect the issue exists in other situations where a function is not involved. 
    This is incompatible with both ModelSim simulation and XST synthesis and it breaks a lot of our code.
    The attached xsim7.vhd example shows the issue. The xsim7.tcl script will run the Vivado simulation. Under Windows, the xsim7.bat will run the whole thing. The xsim7.xpr project file allows synthesis under Vivado, and you can look at the schematic to see the issue in the synthesized netlist.
    The source module also simulates under ModelSim and synthesizes under XST, and these show what I believe to be correct behavior.
    Can someone please verify this error and either file a CR or tell me to file an SR?
    Ian Lewis
    www.mstarlabs.com

    Hello Bharath,
    When I said "what was index 'LOW becomes index 'HIGH" I meant the 'LOW of the source operands and the 'HIGH of the result, independent of the actual index range. I would have no problem with any 'HIGH and 'LOW on the result of the operator as long as the direction matched the left source operand, though personally I would prefer the same 'HIGH and 'LOW as that of the left operand.
    Looking at the IEEE implementation of logical "or" on std_logic_vector from package std_logic_1164: https://standards.ieee.org/downloads/1076/1076.2-1996/std_logic_1164-body.vhdl:
    -- or
    FUNCTION "or" ( l,r : std_logic_vector ) RETURN std_logic_vector IS
      ALIAS lv : std_logic_vector ( 1 TO l'LENGTH ) IS l;
      ALIAS rv : std_logic_vector ( 1 TO r'LENGTH ) IS r;
      VARIABLE result : std_logic_vector ( 1 TO l'LENGTH );
    BEGIN
      IF ( l'LENGTH /= r'LENGTH ) THEN
        ASSERT FALSE
          REPORT "arguments of overloaded 'or' operator are not of the same length"
          SEVERITY FAILURE;
      ELSE
        FOR i IN result'RANGE LOOP
          result(i) := or_table (lv(i), rv(i));
        END LOOP;
      END IF;
      RETURN result;
    END "or";
    the operator does exactly what Vivado seems to be doing: alias the "downto" source vectors to a "to" range of 1 to 'LENGTH and then return a vector with range 1 to 'LENGTH. This both bit-reverses the downto indexes (the alias on the source operands does that) and changes the range direction to "to" with a 'LOW value of 1.
    This gives us the 'LEFT 1 and 'RIGHT 4 we see.
    So far, I have found no definition from IEEE of what logical operators, such as "or", are supposed to do on std_logic_vector except as defined by this piece of code for "or" and the other operators' associated bodies from the std_logic_1164 package.
    What this code does seems like a horrible decision about how to implement the logical operators on std_logic_vector with respect to range direction, but it is compatible with what Vivado does, and incompatible with what XST does. (I had never investigated this issue before because what ModelSim and XST did made perfect sense to me.)
    That the assignment  (Result := s)  works as expected, makes sense. The assignment of the "to" range s to the "downto" range Result maintains the 'LEFT relationship. That is, Result'LEFT (index 3) receives s'LEFT (index 1). So, the bits are reversed a second time.
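    The index behavior under discussion can be observed with a short sketch (entity, function, and signal names here are made up; which range the function sees depends on which interpretation the simulator implements, which is exactly the point of this thread):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity range_demo is
end entity;

architecture sim of range_demo is
  -- Hypothetical helper: reports the range of whatever vector it receives.
  function show_range(s : std_logic_vector) return boolean is
  begin
    report "LEFT=" & integer'image(s'left) &
           " RIGHT=" & integer'image(s'right);
    return true;
  end function;

  signal a, b : std_logic_vector(3 downto 0) := "0011";
begin
  process
  begin
    -- Per the std_logic_1164 body quoted above, (a or b) may arrive
    -- inside show_range with range 1 to 4 rather than 3 downto 0.
    assert show_range(a or b);
    wait;
  end process;
end architecture;
```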
    That ISIM does not match what XST does seems like a defect no matter how you look at it. Your simulation can never match your synthesis if you care about the range direction inside a function (and I suspect in other places too). This deserves an SR or CR if you are still doing any work on those two tools. I think you may not be.
    Are you able to find out whether Vivado was changed from XST on purpose with respect to the behavior of logical operators?
    If this is "as designed" then I have to start working on updating our code to live with it. But, if this is something that happened by accident, and you will change Vivado to match XST, then I probably want to wait. Updating our code to live with this behavior is going to be a pretty big job.
    Thank you for your help,
    Ian

  • [USF-XSim-62] 'elaborate' step failed with error(s) at vivado 2015.2 Behavioural Simulation

     [USF-XSim-62] 'elaborate' step failed with error(s). Please check the Tcl console output or 'C:/xx/axi_pci/axi_pci.sim/sim_1/synth/func/elaborate.log' file for more information.

    As stated above, please provide more information if you need any input from our end. Since this is the Vivado simulator, you can also check the Messages tab to see what the error is. Most of the time, if elaboration fails, some modules were missing, the component/entity binding failed, or the compilation order is not correct. You can also check compile.log for any warnings. All the logs can be found in the sim_1/behav folder.
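    As a quick way to surface the first failure from those logs, a grep works; the log written below is just a stand-in so the snippet is self-contained, and in a real project the path would be something like <project>.sim/sim_1/behav/elaborate.log:

```shell
# Write a stand-in log (substitute your project's elaborate.log / compile.log)
log=$(mktemp)
printf 'INFO: compiling module top\nERROR: module foo not found\n' > "$log"

# Show only the ERROR lines
grep -i '^error' "$log"
# → ERROR: module foo not found
```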

  • VMWare Fusion and Parallels Desktop Benchmark Comparison

    This is a quickie benchmark of VMWare Fusion and Parallels Desktop using Super PI, PC Mark 05, and Passmark.
    VMWare Fusion 36932
    Parallels Desktop 3094 Beta 2
    Notes:
    Both virtual machines were allocated with large 10+ GB virtual disks and 640MB of RAM. The VMWare CPU was configured with two processors. The Parallels CPU was configured with 1 (two is not available). VMWare reported the CPU as 1 physical, 2 logical processors running at 2.66 GHz while Parallels reported 1 physical, 1 logical processor running at 9.6 GHz (the combined speed of all four cores on the Mac Pro). The max observed CPU utilization in activity monitor when running under VMWare was 200% and max under Parallels was 173%.
    I chose not to compare 1 VMWare CPU vs. 1 Parallels CPU. While Parallels does not support SMP or multithreaded processes on multiple processors the CPU utilization on the Mac went well above 1 core (173%). For this comparison, I wanted to see results of max processing based on what the two vendors have delivered, as opposed to benchmarking the underlying "virtual or hypervisor cpu" on a 1:1 basis. This explains why VMWare was 2x faster than Parallels on some CPU tests.
    Both of these products are beta. VMWare is running in debug mode (can not be turned off in this beta).
    Caveat emptor on these stats. This was an unscientific exercise to satisfy my curiosity. Some of the extraordinary differences are highlighted with <--.
    Platform:
    Mac Pro 2.66 GHz, 2GB RAM, Nvidia 7300GT
    Disk 1 - OS X, 73GB Raptor
    Disk 2 - dedicated disk where each virtual machine image was created separate from the OS or any OS-related virtual memory files.
    VMWare and Parallels guest OS: Windows XP Professional, SP 2
    Comparison Benchmark
    VMWare Fusion 36932 and Parallels Desktop 3094 Beta 2
    Super PI Parallels VMWare
    512K 8s 9s
    1M 20s 21s
    4M 1m 57s 2m 03s
    PC Mark 05 Parallels VMWare
    CPU Test Suite N/A N/A
    Memory Test Suite N/A N/A
    Graphics Test Suite N/A N/A
    HDD Test Suite N/A N/A
    HDD - XP Startup 5.0 MB/s 19.54 MB/s <--
    Physics and 3D Test failed Test failed
    Transparent Windows Test failed 69.99 Windows/s
    3D - Pixel Shader Test failed Test failed
    Web Page Rendering 3.58 Pages/s 2.34 Pages/s
    File Decrypt 71.73 MB/s 67.05 MB/s
    Graphics Memory - 64 Lines 179.92 FPS 111.73 FPS
    HDD - General Usage 4.82 MB/s 42.01 MB/s <--
    Multithread Test 1 / Audio Comp N/A N/A
    Multithread Test 1 / Video Encoding Test failed Test failed
    Multithread Test 2 / Text Edit 152.85 Pages/s 138.48 Pages/s
    Multithread Test 2 / Image DeComp 5.91 MPixels/s 35.4 MPixels/s <--
    Multithread Test 3 / File Comp 3.22 MB/s 6.03 MB/s
    Multithread Test 3 / File Encrypt 19.0 MB/s 33.26 MB/s <--
    Multithread Test 3 / HDD - Virus Scan 27.91 MB/s 25.49 MB/s
    Multithread Test 3 / Mem Lat - Rnd 16MB 5.34 MAcc/s 6.63 MAcc/s
    File Comp N/A N/A
    File DeComp N/A N/A
    File Encrypt N/A N/A
    File Decrypt N/A N/A
    Image DeComp N/A N/A
    Audio Comp N/A N/A
    Multithread Test 1 / File Comp N/A N/A
    Multithread Test 1 / File Encrypt N/A N/A
    Multithread Test 2 / File DeComp N/A N/A
    Multithread Test 2 / File Decrypt N/A N/A
    Multithread Test 2 / Audio DeComp N/A N/A
    Multithread Test 2 / Image DeComp N/A N/A
    Memory Read - 16 MB N/A N/A
    Memory Read - 8 MB N/A N/A
    Memory Read - 192 kB N/A N/A
    Memory Read - 4 kB N/A N/A
    Memory Write - 16 MB N/A N/A
    Memory Write - 8 MB N/A N/A
    Memory Write - 192 kB N/A N/A
    Memory Write - 4 kB N/A N/A
    Memory Copy - 16 MB N/A N/A
    Memory Copy - 8 MB N/A N/A
    Memory Copy - 192 kB N/A N/A
    Memory Copy - 4 kB N/A N/A
    Memory Lat - Rnd 16 MB N/A N/A
    Memory Lat - Rnd 8 MB N/A N/A
    Memory Lat - Rnd 192 kB N/A N/A
    Memory Lat - Rnd 4 kB N/A N/A
    Transparent Windows N/A N/A
    Graphics Memory - 64 Lines N/A N/A
    Graphics Memory - 128 Lines N/A N/A
    WMV Video Playback N/A N/A
    3D - Fill Rate Multi Texturing N/A N/A
    3D - Polygon Throughput Multiple Lights N/A N/A
    3D - Pixel Shader N/A N/A
    3D - Vertex Shader N/A N/A
    HDD - XP Startup N/A N/A
    HDD - Application Loading N/A N/A
    HDD - General Usage N/A N/A
    HDD - Virus Scan N/A N/A
    HDD - File Write N/A N/A
    Processor Intel Core 2 9653 MHz Processor Unknown 2661 MHz
    Physical / Logical CPUs "1 Physical, 1 Logical" "1 Physical, 2 Logical"
    MultiCore 1 Processor Core Multicore 2 Processor Cores
    HyperThreading N/A N/A
    Graphics Card Generic VGA Generic VGA
    Graphics Driver Parallels Video Driver VMWare SVGA II
    Co-operative adapters No No
    DirectX Version 9.0c 9.0c
    System Memory 640 MB 640MB
    Motherboard Manufacturer N/A Intel Corporation
    Motherboard Model N/A 440BX Desktop Reference Platform
    Operating System Microsoft Windows XP Microsoft Windows XP
    Passmark Parallels VMWare
    CPU - Integer Math (MOPS) 112.35 230.31 <--
    CPU - Floating Point Math (MOPS) 280.46 588.33 <--
    CPU - Find Prime Numbers (OPS) 446.37 676.99 <--
    CPU - SSE/3DNow! (MMPS) 2118.56 4737.13 <--
    CPU - Comp (KB/s) 2994.16 5952.34 <--
    CPU - Encrypt (MB/s) 18.09 36.27 <--
    CPU - Image Rotation (IRPS) 598.21 1184.41 <--
    CPU - String Sorting (TPS) 2118.81 3672.59 <--
    Graphics 2D - Lines (TPS) 220.71 25.15 <--
    Graphics 2D - Rectangles (TPS) 189.74 61.8 <--
    Graphics 2D - Shapes (TPS) 39.54 13.71 <--
    Graphics 2D - Fonts and Text (OPS) 190.39 75.88 <--
    Graphics 2D - GUI (OPS) 439.77 63.72 <--
    Memory - Allocate Small Block (MB/s) 2533.83 2526.21
    Memory - Read Cached (MB/s) 1960.5 1906.27
    Memory - Read Uncached (MB/s) 1871.79 1826.08
    Memory - Write (MB/s) 1687.81 1545.43
    Memory - Large RAM (OPS) 60.99 46.37
    Disk - Sequential Read (MB/s) 102.11 76.45 <--
    Disk - Sequential Write (MB/s) 58.33 50.9
    Disk - Rnd Seek + RW (MB/s) 51.4 40.4
    CPU Mark 711.08 1432.72 <--
    2D Graphics Mark 743.31 176.5 <--
    Memory Mark 599.94 580.38
    Disk Mark 766.11 606.7
    PassMark Rating 557.27 637.35

    Thanks for posting these numbers - it's an interesting comparison.
    I would expect the final VMWare Fusion performance numbers to be quite a bit better than those of Parallels - they have almost a decade more experience in this arena than the Parallels folks, and a much larger development team to boot.
    Once VMWare Fusion is released to the public, I think that you'll see a clearer distinction between the two products. VMWare will continue to appeal to the professional customer, with a more robust feature set and corporate-friendly features (and a correspondingly higher price tag); Parallels will fall more into the consumer/VirtualPC-replacement market. It will be interesting to see how Parallels will be affected when (and if) VMWare player is ported to OS X.
    Interesting about the Parallels performance stats on a native partition - looks like almost enough reason to avoid the bootcamp partition approach altogether. Sharing a native windows installation with a VM in parallels is a pretty scary situation in any case, as the two environments have entirely different hardware configurations. Do-able, but there is some black magic involved (if you want to see an example of what I mean, try to move a windows installation from one machine to another w/different hardware sometime - it ain't pretty); I wouldn't try this in a production scheme unless I had REALLY good backups.

  • DB2 UDB Compression vs performance improvements benchmarks

    Hi
    Are there any benchmark results on the effect of DB2 UDB data compression? In theory it should improve performance even after accounting for the compression overhead, but I have not seen any real-life results on this.
    If there are any, particularly for the BI 7.0, please let me know.
    Thanks
    Raj

    Hello,
    >> http://www-306.ibm.com/software/success/cssdb.nsf/CS/STRD-6Z2KE3?OpenDocument&Site=corp&cty=en_us
    First I want to say that I have no experience with or deep knowledge of DB2, but when I read statements like these, I am not surprised by some of the management discussions/decisions:
    The move from our existing version of DB2 to DB2 9 was very simple, so we didn’t need any external help. Because DB2 9 is backwards-compatible, there was no need to upgrade our SAP software
    With the STMM, we can tune the buffer pool automatically, which saves considerable time for our database administrators – reducing their workload by around 10 per cent.”
    Our database is now 43 per cent smaller than before, and some of the largest tables have been reduced by up to 70 per cent
    With DB2 9, our two-person IT team can handle database administration on top of all their other work, even without much specialist knowledge
    Please correct me if I am wrong and DB2 really can handle all of this:
    1) Is the upgrade to DB2 9 really a big step or anything hard to handle? I don't think so...
    2) Are memory settings really so time-consuming in DB2 that an SAP administrator spends 4 hours per week on them (assuming a 40-hour week)? I don't think so...
    3) I have read some documents about the compression feature and it really does seem great, but maybe a reorg alone would also yield roughly 10-20%? These aspects are not mentioned anywhere.
    4) DB administration without special DB2 know-how? I find that hard to believe... at least if you want a fast and stable environment.
    I am also very interested in real performance facts and data on compression in DB2, but I keep finding only "manager" statements rather than real data.
    What I am looking for is a benchmark with defined load simulations and analysis, like this one from Oracle:
    http://regmedia.co.uk/2007/05/20/ibmpower6.pdf
    Does anyone have some links to that kind of benchmarks?
    Regards
    Stefan

  • Why does Modelsim not display some signals?

    Hi,
    I use Xilinx ISE with a small VHDL design. When ISE hands off to Modelsim for behavioural simulation (using the default do {hcic_tb.fdo} command), I find that the waveform window does not display the following two signals:
      SIGNAL filter_out_addr                  : std_logic; -- boolean  
      SIGNAL filter_out_done                  : std_logic; -- boolean
    while SIGNAL filter_out_rdenb                 : std_logic; -- boolean
    does show in the waveform window. Why are some signals displayed and others not?
    Thanks
    ............ testbench file:
    SIGNAL filter_out_rdenb                 : std_logic; -- boolean  
    SIGNAL filter_out_addr                  : std_logic; -- boolean  
    SIGNAL filter_out_done                  : std_logic; -- boolean
    la: filter_out_rdenb <= ce_out;
      filter_out_procedure (    
       clk       => clk,    
       reset     => reset,    
       rdenb     => filter_out_rdenb,    
       addr      => filter_out_addr,    
       done      => filter_out_done);

    It seems these two signals do display after I turn off optimization by passing vsim:
    -novopt
    My new question: is it acceptable in a testbench to probe a test point with a signal like this? Why do these signals get optimized away? I want to make sure my future test signals are displayed without turning optimization off entirely. Is that possible?
    Thanks
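    Since this comes up often, here is a hedged sketch of the two usual ways to keep testbench signals visible in Modelsim/Questa. The design unit name work.hcic_tb is only an assumption taken from the .fdo name above, and -novopt is deprecated or removed in recent Questa releases, so check your version's manual:

    ```tcl
    # Option 1: disable optimization entirely (older Modelsim versions)
    vsim -novopt work.hcic_tb

    # Option 2 (preferred): keep optimization but preserve debug access.
    # +acc retains visibility into optimized objects; optional specifiers
    # restrict it, e.g. n = nets, p = ports, r = registers/signals.
    vopt +acc work.hcic_tb -o hcic_tb_opt
    vsim hcic_tb_opt
    ```

    With +acc applied, signals that drive nothing observable (like pure test points) should no longer be pruned from the wave window, while the rest of the design stays optimized.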

  • Simulation engine failed to start: A valid license was not found for simulation

    Hi,
    I'd like to run xsim by clicking "Run Simulation" in Vivado 2015.2. However, in the Tcl console I see this error:
    INFO: [Common 17-186] '/home/rob/xilinx-projects/bright_proj/bright_proj.sim/sim_1/behav/xsim.dir/ProgNetwork_behav/webtalk/usage_statistics_ext_xsim.xml' has been successfully sent to Xilinx on Mon Jul 13 16:01:02 2015. For additional details about this file, please refer to the WebTalk help file at /home/rob/sw/xilinx/xilinx-build/Vivado/2015.2/doc/webtalk_introduction.html.
    INFO: [Common 17-206] Exiting Webtalk at Mon Jul 13 16:01:02 2015...
    run_program: Time (s): cpu = 00:00:08 ; elapsed = 00:00:10 . Memory (MB): peak = 6669.582 ; gain = 0.000 ; free physical = 763 ; free virtual = 18006
    INFO: [USF-XSim-4] XSim::Simulate design
    INFO: [USF-XSim-61] Executing 'SIMULATE' step in '/home/rob/xilinx-projects/bright_proj/bright_proj.sim/sim_1/behav'
    INFO: [USF-XSim-98] *** Running xsim
    with args "ProgNetwork_behav -key {Behavioral:sim_1:Functional:ProgNetwork} -tclbatch {ProgNetwork.tcl} -log {simulate.log}"
    INFO: [USF-XSim-8] Loading simulator feature
    Vivado Simulator 2015.2
    ERROR: [Simtcl 6-50] Simulation engine failed to start: A valid license was not found for simulation. Please run the Vivado License Manager for assistance in determining which features and devices are licensed for your system.
    Please see the Tcl Console or the Messages for details.
    However, if you look at the attached image, you'll see that my installed licenses include the license name "Simulation".
    Why is Vivado complaining about "Simulation engine failed to start: A valid license was not found for simulation" ?
    Thank you,
    Rob
     

    Hi Rob,
    The reason for the license error is a hostid mismatch, i.e. the hostid of the machine is different from the hostid in the license file.
    I observe that the license file was generated with hostid 52540019ee7e. You can cross-check your machine's hostid by running the command: lmutil lmhostid
    You can rehost the license file using the steps in the section "Rehost or change the license server host for a license key file" in the following user guide: http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_5/irn.pdf
    Thanks,
    Vinay
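    For anyone hitting the same error, a quick way to check for this kind of hostid mismatch is to compare what the FlexLM tools compute for the machine against what is baked into the license file. The license file path below is an assumption; substitute the location of your actual .lic file:

    ```shell
    # Hostid as the license tools see this machine
    lmutil lmhostid

    # Hostid(s) the license file was generated for
    grep -i hostid ~/.Xilinx/Xilinx.lic
    ```

    If the two values differ, rehosting the license file as described in the user guide above is the fix.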
