Benchmark ADC1, DAC1, G5 (OS 10.4.6) and Yamaha S30 synth with Edirol UM-1EX

I really need help from anyone out there; any help would be greatly appreciated. I connected the ADC1 optical out to the G5 optical in, the G5 optical out to the DAC1 optical in, and the Yamaha S30 synth's MIDI I/O to the G5's USB port via an Edirol UM-1EX MIDI-USB converter. At first MIDI worked, but on the audio path (mic pre to ADC1 to G5, and G5 to DAC1) I got a repeating sound, like a long delay, and now even MIDI has stopped working. (All the G5s I mention above are one and the same computer.) I also ran the Logic Audio MIDI Setup Assistant together with the driver for the Edirol UM-1EX.
Please note, UM-1 and UM-1EX use the same driver.
THANKS.
G5, Mac OS X (10.4.6)

Similar Messages

  • FASTER THROUGHPUT ON FULL TABLE SCAN

    Product: ORACLE SERVER
    Date written: 1995-04-10
    Subject: Faster throughput on full table scans
    db_file_multiblock_read_count only affects the performance of full table scans.
    Oracle has a maximum I/O size of 64KB, hence db_block_size *
    db_file_multiblock_read_count must be less than or equal to 64KB.
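    That constraint is simple arithmetic; a minimal sketch (the 64KB cap is the one stated in this post, and varies by platform and Oracle version):

```python
# Largest db_file_multiblock_read_count that fits under a 64KB I/O cap,
# per the constraint above: db_block_size * count <= 64KB.
# (Illustrative arithmetic only, using the limit quoted in this post.)
MAX_IO_BYTES = 64 * 1024

def max_multiblock_read_count(db_block_size):
    return MAX_IO_BYTES // db_block_size

for block_size in (2048, 4096, 8192, 16384):
    print(block_size, "->", max_multiblock_read_count(block_size))
```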
    If your query is really doing an index range scan then the performance
    of full scans is irrelevant. In order to improve the performance of this
    type of query it is important to reduce the number of blocks that
    the 'interesting' part of the index is contained within.
    Obviously the db_blocksize has the most impact here.
    Historically Informix has not been able to modify their database block size,
    and has had a fixed 2KB block.
    On most Unix platforms Oracle can use up to 8KBytes.
    (Some eg: Sequent allow 16KB).
    This means that, for the same size of B-tree index, Oracle with
    an 8KB block size can read its contents in a quarter of the time that
    Informix with a 2KB block could.
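    The quarter-time claim is just the ratio of blocks that have to be read; a quick sketch, using a hypothetical 10MB index:

```python
import math

def index_blocks(index_bytes, block_size):
    # Blocks needed to hold the index (ignores per-block overhead;
    # illustrative only).
    return math.ceil(index_bytes / block_size)

index_size = 10 * 1024 * 1024  # hypothetical 10MB B-tree index
ratio = index_blocks(index_size, 2048) // index_blocks(index_size, 8192)
print(ratio, "times the block reads with 2KB blocks")
```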
    You should also consider whether the PCTFREE value used for your index is
    appropriate. If it is too large then you will be wasting space
    in each index block. (It's too large if you are not going to get any
    entry-size extension or any new rows for existing
    index values. NB: this is usually only a real consideration for large indexes; 10,000 entries is small.)
    db_file_simultaneous_writes has no direct relevance to index re-balancing.
    (PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
    HP/Allbase for the database server application that HP uses internally to
    monitor and control its tape drive manufacturing lines. They chose
    Oracle because:
      - We outperformed Informix.
      - Sybase was too slow AND too unreliable.
      - Unify was short on functionality and SLOW.
      - HP/Allbase couldn't match the availability requirements and wasn't as functional.
    Informix had problems demonstrating the ability to do hot backups without
    severely affecting the system throughput.
    HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
    different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
    configurations.
    NNB: It's always worth throwing in a simulated system failure whilst the
    benchmark is in progress. Informix has a history of not coping gracefully.
    That is they usually need some manual intervention to perform the database
    recovery.)
    I have a prospective client who is running a stripped-down, souped-up version of
    Informix with no catalytic converter. One of their queries boils down to an
    index range scan on 10,000 records. How can I achieve better throughput
    on a single-drive, single-CPU machine (HP-UX) without using raw devices?
    I had heard that rebuilding the database with a block size greater than
    the OS block size would yield better performance. I also tried changing
    db_file_multiblock_read_count to 32 without much improvement.
    Adjusting db_writers to two did not help either.
    Also, will adjusting db_file_simultaneous_writes help with
    the maintenance of an index during rebalancing operations?

    2) If CBO, how are the stats collected? Daily (tables with less than millions of rows) and weekly (all tables).
    There's no need to collect stats so frequently unless it's absolutely necessary, e.g. you have massive updates on tables daily or weekly.
    It will help if you can post your sample explain plan and query.

  • Can someone POST Plug-In count TIGER vs LEOPARD?

    I know I have read conflicting posts, but I have never seen anything concrete from, say, Barefeats or Macworld regarding AUDIO benchmarks.
    I would appreciate reading track (virtual instrument) and plug-in counts with TIGER vs LEOPARD.
    Thanks.

    According to Apple's system requirements, I need to upgrade to either Tiger or Leopard to get Logic Studio to run on my G4. Does anyone with a similar machine have advice on whether 10.4 or 10.5 will be my best bet? -Thanks

  • Single Core MBP

    Let's see, I have resorted to installing the developer tools and going "core solo" to get rid of my CPU hiss. Naturally, now I'm wondering how drastic the performance deficits are. Anyone have any idea?

    I guess you could get hold of one of the various benchmark programs that people run to test performance, and then run them with both single and dual cores and see the difference.
    In the real world, it's entirely your perception that matters. If you feel that performance on a single core is adequate for your needs and makes you feel comfortable due to lack of hiss then that's all that matters. If you are doing a processor intensive task (e.g. encoding video, working in iMovie, etc.) then you could just consciously enable the second core at that time to get more power, the hiss won't be there at such times due to high processor activity. Once the task is finished, you can disable one core again.
    I personally think it is regrettable that users pay good money for a "pro" dual core machine and then have to resort to crippling the full potential of that machine in order to work effectively and peacefully on it. Alas, we all have to do what it takes to achieve peace of mind. Enjoy your silent single core computing experience!

  • Problem with my MSI 290X Gaming 4G: performance in the Unigine Valley benchmark

    First off, I'm Dutch (Netherlands), so sorry for my possibly bad English.
    My system is:
    Asus P8Z77-V Deluxe motherboard
    Intel 3770K CPU
    16GB Corsair 1866 RAM
    2x 256GB OCZ SSDs
    MSI 290X Gaming
    Win7 64-bit Ultimate
    When I came home with my card, I removed my old 7970, replaced it with the new card, and put up some programs like GPU-Z, CPU-Z, RealTemp, etc.
    Then I ran the Unigine Valley benchmark with the MSI OC tool at the 1040 option, with my CPU overclocked to 4.2GHz.
    Unigine Valley v1.0 score: 61.1 fps, score 2558, min 30.3 fps / max 112.8 fps.
    Custom settings at 1920x1080, ultra, 8x AA, windowed.
    That seems an OK result.
    Now to the problem I have.
    Next (dumb me, hehe), I tried to overclock my CPU with AI Suite II from ASUS in extreme mode, which stress-tests until a stable clock is found.
    I got a bluescreen crash, and after a few attempts, a stable 4.3GHz.
    Then I ran a new benchmark and my score was a lot lower: no matter what I try, my 290X doesn't pass 60 fps any more, and the score dropped to 2100.
    3DMark Fire Strike gave a first-time score of 9655 and, after my disaster with the failed CPU attempt, 9200.
    My card is stuck at 60 fps max; it won't go any higher.
    GPU-Z also shows PCI-E 3.0 x16 @ x1 1.1, and when I activate the render test I only see it change to x16 1.1; I never see Gen 3.0.
    CPU-Z's motherboard info also only shows a PCI Express link at x1 and x16.
    My question is how I can solve this problem; it seems my motherboard doesn't recognize my video card properly anymore.
    Is it also possible that the CPU overclock attempt broke something?
    I also tried changing AUTO to GEN3 in the BIOS, but then I get a black screen when I boot up and it stays black; I have to switch video cards so I can see my boot screen again and go into the BIOS.
    I'm at a loss here; I hope one of you knows a solution or what my problem is.
    I hope I've supplied enough info; I'm a bit of a newbie at this, sorry about that.
    Thanks in advance.

    >>Clear CMOS<< of your board and retry.

  • After Effects Multi-Core Benchmarks

    I have been doing some testing, trying to figure out how fast After Effects renders and how to help it render faster. So far I have been very disappointed with the results. No matter how much money we spend buying the fastest systems we can, I can't seem to get much of a speed increase. We have 8 computers with 8 cores each now, but I can't seem to get After Effects to use the extra cores, even when I have 20GB of RAM and enable multiple frames with 2GB per frame. I see it load all the extra copies in Task Manager, but when I render, each time one core has "some" usage and the other 7 are always around 10-15% usage.
    So I wanted to try a simple benchmark that everyone could run and post their results for. I made a default NTSC DV composition, 30 seconds long, and just rendered it. NOTHING, just blank frames of nothing. How fast can AE output data like this? I tried tests with multiple frames enabled and disabled, and output to TIFF files (no compression) or the Microsoft DV 48kHz preset, both with the default BEST setting.
    Now, I understand that After Effects and Premiere have two completely different rendering methods, but it is still worth pointing out that Premiere will output 30 seconds of blank video, or actual real DV footage, to a DV AVI file in about 3-4 seconds. So why does the same machine take 10 times longer to render from After Effects? I know that in Premiere I can simply drop in a DV AVI file and export to MPEG-2, and I can watch all 8 cores almost max out as it renders about 6x faster than realtime.
    How can I do something in After Effects to see my 8 cores max out? Please give any tips or tricks to speed up After Effects. We must use Vista 64 as we have a 30TB Fibre Channel array.
    Dell Laptop M6300 - Core 2 Extreme X9000 @ 2.8GHz (2 cores)
    Adobe CS4 - Windows XP 64-bit - 8GB RAM
    Multiple OFF:  TIFF = 1:24   DV = 1:24
    Multiple ON:   TIFF = 1:32   DV = 1:30
    Dell Precision 690 - Dual Quad Core Xeon E5320 @ 1.86GHz (8 cores)
    Adobe CS4 - Windows Vista 64-bit - 4GB RAM - Matrox Axio LE
    Multiple OFF:  TIFF = :47    DV = :43
    Multiple ON:   TIFF = :56    DV = :52
    Dell Precision T7400 - Dual Quad Core Xeon X5482 @ 3.2GHz (8 cores)
    Adobe CS3 - Windows XP 32-bit - 4GB RAM - Matrox Axio LE
    Multiple OFF:  TIFF = :30    DV = :30
    Multiple ON:   TIFF = :31    DV = :30
    Dell Precision T7400 - Dual Quad Core Xeon X5482 @ 3.2GHz (8 cores)
    Adobe CS4 - Windows Vista 64-bit - 20GB RAM - Matrox Axio LE
    Multiple OFF:  TIFF = :30    DV = :31
    Multiple ON:   TIFF = :35    DV = :35

    Well, we can toss around reasons for AE not using a processor's full potential on a comp, but all I know is that all of the truly multithreaded, multi-processor enabled applications I use are much better at using resources to their fullest than AE or, for that matter, most of the programs in the MC.
    When I run those programs my system is pushed to the limit, which is why I bought a quad core system in the first place. Mental ray, Fusion, 3D Coat, ZBrush... the list is long of programs that have no problem using all my cores for 90%-100% of operations.
    In the end it just adds up to the fact that Adobe owns a large corner of the market and, since there is no competition, sees no reason NOT to be 5-10 years behind the curve when it comes to resource management in their software.
    Making matters worse is how much of the user base is oblivious to the technological changes in processors over the last five years. These people don't know that all but one of their cores sit idle most of the time, and they buy the corporate speak put out by Adobe about "...how complex everything is, so you don't understand...". Sorry, I may not be a programmer or a processor engineer for Intel or AMD, but I know when a program is using resources or not, and I know quite a few of the things Adobe has said are "...just too complicated to do..." are really covers for lackluster R&D. Either your programmers need to get up to speed, or Adobe needs to actually do the right thing and set more money aside for development. I'm betting it's the latter.
    Softimage 7.x is fully multithreaded and 64-bit (yes, all the way through, not just with mr). This is a complicated program, and the development team is probably 1/10th the size of the one working on PS. So why, after all these years, are we still waiting for even a half-baked attempt at such things on the Adobe front?
    The way AE handles RAM compared to programs like Fusion and the like is pathetic.
    Don't get me wrong: I love the program for motion graphics and simple comp work but, again, the resource management in AE feels like I'm back in OS 8.
    -Gideon

  • Performance enhanced with recent os/firmware update: Benchmarks

    Hey, I benchmarked my system using Xbench before and after the updates, under the same conditions, and got a 25% improvement! My original score was 54.36; the new one is 67.53. I don't know much about Xbench or benchmarking, but the user-interface test went from 23 to 47, which I'm guessing shows they are making the OS faster and more able to handle the new Intel processors. Has anyone else run Xbench before and after, or does anyone just want to post what they're getting for their benchmarking scores? Xbench is free and can be found at www.xbench.com
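    For what it's worth, the overall improvement implied by those two scores can be computed directly (it works out slightly under 25%):

```python
# Percentage improvement from the Xbench scores quoted above.
before, after = 54.36, 67.53
improvement = (after - before) / before * 100
print(f"{improvement:.1f}% faster")  # about 24%
```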

    They will never issue a rollback to the previous version of the Android OS.

  • How to mesure/benchmark performance of a new database on new server?

    Hi there
    I have two oracle servers with following (same) details:
    RHEL 5.8 64-bit
    Oracle 10gR2 - 10.2.0.5.8
    ASM 10gR2 - 10.2.0.5.8
    Server A: RAM 32GB, 8 CPUs @ 3.00GHz
    Server B: RAM 128GB, vCPUs 16 cores
    Server A (the physical server) already has a database, A. Server B (on VMware; yes, my client is moving all Oracle servers to VMware for whatever reason) is a new installation, with a new database B using the exact same init params as database A. I used expdp to export the data from database A and impdp to import it into database B.
    As per the hardware team, the new server's hardware is better than the old server's. I did a very basic test to check whether the new DB performs better than the one on the physical server. Here are the results:
    I ran a simple query to create a new table. The original table (say, table_a) contains 1.7+ million rows and size is 2.2GB.
    create table test1
    as
    select * from table_a;
    It took 3:28 min on database B, while it took only 1:55 min on database A, so the new database B seems (apparently) to be performing worse. Then I looked at the explain plan (not sure if it means much, because it's a very simple query), and here it is from both databases:
    Database A (physical server)
    Plan
    SELECT STATEMENT ALL_ROWS
    Cost: 14,052  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    1 TABLE ACCESS FULL TABLE table_a
    Cost: 14,052  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    Database B (virtual server)
    Plan
    SELECT STATEMENT ALL_ROWS
    Cost: 59,844  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    1 TABLE ACCESS FULL TABLE table_a
    Cost: 59,844  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    Questions:
    1. Why is the cost different? Should I "compute statistics" on database B (the virtual server)?
    2. How do I investigate further and find the reason for the time difference?
    3. What other benchmark tests can I run to make sure that I have the right database configuration?
    Not sure if this is enough info; if not, please let me know what else I should provide.
    The team I have to hand this server over to is refusing to accept it, saying that it is slower than the existing one.
    Please advise!
    Best regards

    Wow... I am really thankful for everyone's input; this is very much appreciated!
    I will try what you have all suggested. In the meantime, I ran some simple tests on both databases, and here are the results:
    (All times hh:mm:ss.00.)
    Database A
    Operation                              1st run      2nd run      3rd run
    Create table t1 (1.7 million rows)     00:01:55.78  00:01:56.27  00:01:56.71
    Create index on two columns on t1      00:02:12.59  00:02:11.54  00:02:12.36
    Create table t2 (500,000 rows)         00:00:03.06  00:00:02.89  00:00:03.14
    Create index on two columns on t2      00:00:01.99  00:00:01.09  00:00:01.13
    Delete from t1 (500,000 rows)          00:01:25.56  00:01:18.39  00:01:22.97
    Insert into t1 (500,000 rows)          00:00:10.37  00:00:10.20  00:00:10.22
    Drop table t2                          00:00:00.15  00:00:00.17  00:00:00.15
    Drop table t1                          00:00:05.12  00:00:04.87  00:00:04.88
    Database B (VM)
    Operation                              1st run      2nd run      3rd run
    Create table t1 (1.7 million rows)     00:00:25.83  00:00:24.67  00:00:44.06
    Create index on two columns on t1      00:03:54.60  00:03:05.81  00:03:12.91
    Create table t2 (500,000 rows)         00:00:00.67  00:00:00.62  00:00:00.97
    Create index on two columns on t2      00:00:01.43  00:00:01.10  00:00:01.62
    Delete from t1 (500,000 rows)          00:00:29.56  00:00:31.76  00:00:39.35
    Insert into t1 (500,000 rows)          00:00:09.75  00:00:08.59  00:00:08.90
    Drop table t2                          00:00:00.05  00:00:00.04  00:00:00.03
    Drop table t1                          00:00:01.10  00:00:00.59  00:00:00.61
    Now, the database on Server B (VMware) seems to be outperforming the one on Server A, except for the "Create index on two columns on t1" operation.
    Any clues why index creation is consistently taking longer on database B (on the VM) as compared to database A (on the physical server)?
    @jgarry: I am not in a position to try SLOB (no doubt a good tool with a strong reputation) because it requires creating a new DB, which I cannot do on the existing server. I did try HammerDB, but unfortunately it crashed on each attempt to test the load.
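    The repeated-run approach in the tables above can be sketched generically; this is an illustrative harness (not the actual scripts behind those numbers), timing any workload three times the way the 1st/2nd/3rd-run rows do:

```python
import time

def time_runs(workload, runs=3):
    # Wall-clock each run, as in the 1st/2nd/3rd-run columns above.
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        durations.append(time.perf_counter() - start)
    return durations

# Trivial stand-in workload; a real test would issue the DDL/DML.
for seconds in time_runs(lambda: sum(range(1000000))):
    print(f"{seconds:.4f}s")
```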

  • Please help explain SAP Benchmark Terminology

    Hello,
    I am looking for help in understanding the terminology of some benchmarks for SAP applications. I have read and re-read several pages on their website, including http://www.sap.com/solutions/benchmark/index.epx, but am still not 100% clear. I would greatly appreciate it if someone familiar with SAP would answer the following questions:
    Q1. SD Standard Benchmark  - is it correct that this is not a single benchmark, but actually a generic term referring to all the SAP benchmarks including benchmarks for: Power, ERP, Supply Chain, Banking, Utilities, NetWeaver, Enterprise Portal, Customer Relationship Management, Product Lifecycle Management, and Retail?  If not, what is it?
    Q2. SAPS - is it correct that this is a unit of measurement and not a benchmark?  It looks like the results of the Sales and Distribution (SD) Benchmark are reported in SAPS.
    Q2b. I have seen cases where people refer to the "SAPS" benchmark.  If SAPS is a unit of measurement, is it safe to say that SAPS is another way of referring to the Sales and Distribution (SD) benchmark?  If yes, is it the 2-tier or 3-tier version?
    Q3. I have also seen cases where people refer to the "SAP" benchmark.  What are they most likely referring to?
    Q4. Please confirm that the following all refer to the same benchmark: SAP-SD Two-Tier, SAP SD 2-tier, SD2 SAPS, and SAP SD2
    Q5. Similarly, please confirm that the following all refer to the same benchmark: SAP-SD Three-Tier, SAP SD 3-tier, SD3 SAPS, and SAP SD3
    Q6. Finally, out of all of the SAP benchmarks, which one benchmark, if any, is most widely used?
    Thanks so much for your help!!!!

    Apparently, I have to repackage ALL my classes via the "export to JAR" link in NWDS. Using WinZip is somehow screwing up the JAR.

  • Certification, Customer Performance Benchmarks & Lidar Technical Sessions At Oracle Spatial Summit

    Here is a spotlight on some training sessions that may be of interest, offered at LI/Oracle Spatial Summit in DC, May 19-21.  www.locationintelligence.net/dc/agenda . 
    Preparing for the Oracle Spatial Certification Exam
    Steve Pierce, Think Huddle & Albert Godfrind, Oracle
    Learn valuable strategies and review technical topics with the experts who developed the exam – and achieve your Oracle Spatial Specialist Certification with the most efficient effort. This session will enable you to master difficult topics (such as GeoRaster, 3D/LIDAR support, topology) quickly through clear examples and demos. Sample questions and exam topic breakdown will be covered. Individual certifications can also apply to requirements for organizations seeking Oracle PartnerNetwork Specialized status.
    Offered as both a Monday technical workshop (preregistration required), and Wednesday overview session.
    Content in this session is only available at the Oracle Spatial Summit.
    The performance debate is over: Spatial 12c Performance / Customer Benchmark Track
    Hear the results of customer benchmarks testing the performance of the 12c release of Spatial and Graph – with results up to 300 times faster. In this track, Nick Salem of Neustar and Steve Pierce of Think Huddle will share benchmarks with actual performance results already realized.
    Customers can now address the largest geospatial workloads and see performance increases of 50 to 300 times for common vector analysis operations. With just a small set of configuration changes – and no changes to application code – applications can realize these significant performance improvements. You’ll also learn tips for accelerating performance and running benchmarks with your applications on your systems.
    Effectively Utilize LIDAR Data In Your Business Processes
    Daniel Geringer, Oracle
    Many organizations collect large amounts of LIDAR, or point cloud, data for more precise asset management. The ROI of the high costs associated with this type of data acquisition is frequently compromised by underutilization of the data. This session focuses on ways to leverage Oracle Engineered Systems to highly compress and seamlessly store LIDAR data, and to effectively search it spatially in its compressed form to enhance your business processes. Topics covered include loading, compressing, pyramiding, searching, and generating derivative products such as DEMs, TINs, and contours.
    Many other technical sessions and tracks will cover spatial technologies with depth and breadth.
    Customers including Garmin, Burger King, US Census Bureau, US DOJ, and more will also present use cases using MapViewer & Spatial in location intelligence/BI, transportation, land management and more.
    We invite you to join the community there.  For more information about topics, sessions and experts at Oracle Spatial Summit 2014, visit http://www.locationintelligence.net/dc/agenda .  This training event is held in conjunction with Directions' Location Intelligence - bringing together leaders in the LI ecosystem.
    For a 10% registration discount, become a member of the Spatial SIG, LinkedIn (http://www.linkedin.com/groups/Oracle-Spatial-Graph-1848520?gid=1848520&trk=skills    ) 
    or Google+ Spatial & Graph groups (https://plus.google.com/communities/108078829007193480508 ).  Details posted there.

  • Difference in storage benchmark results: iometer vs. SQLIO

    Hey guys,
    Just wondering if anyone could explain a difference in benchmark numbers between iometer and SQLIO.
    To set this question up: 
    I'm trying to get baseline performance for a new SAN installation, and have been testing with SQLIO for a particular workload: Random 8kB to 32kB file access (10:1 read:write) against a fileshare containing millions of these small files. The numbers were
    looking pretty good, example: 8k-read-random, 8-threads, 8 outstanding requests, hwbuffer, 120sec resulted in 100k IOPS.
    A colleague suggested using iometer to confirm results, but when I configure iometer for same io profile and test params, I get lower figures. So for the example above, I get about 60k IOPS in iometer. 
    The physical disk performance counters match up during both the SQLIO and iometer tests, so I am confident that the results produced by each are valid. I'm looking at average request size (to confirm the size of each I/O request) and disk transfers per second, amongst others.
    I'm just not sure why there is such a disparity. To be specific, I can see 100k disk requests/sec in perfmon, 100k IOPS in the SQLIO results output, and 100k IOPS in the storage vendor's separate reporting tool. I've double-checked the configuration parameters between the two, and played with iometer values (like queue size or worker count) to see if I could coax greater numbers out of it. I haven't been able to yet.
    Who to believe? Is iometer less capable of pushing I/O, or is it the more accurate one? It's hard to see SQLIO not being "accurate", since Microsoft's teams use it themselves.
    Thanks!

    SQLIO initializes files with NUL bytes, so if you're testing on a 300GB file, you have a 300GB file with nothing in it. Many storage systems will recognize this and compress/dedupe such patterns, thereby inflating the numbers. Iometer will fill a 300GB file with randomized data (depending on your configuration). This is a more realistic baseline, since most of our data isn't a bunch of NUL bytes in a repeated pattern.
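    To illustrate the point (a hedged sketch, independent of either tool): filling a benchmark file with random, incompressible bytes keeps the array from compressing or deduping it away:

```python
import os

def write_random_file(path, size_bytes, chunk=1 << 20):
    # Random payload (unlike a NUL fill) defeats compression/dedupe,
    # so measured IOPS reflect real media work.
    remaining = size_bytes
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

write_random_file("testfile.bin", 4 * 1024 * 1024)  # small 4MB example
```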
    Iometer also uses a limited-size pattern, so on heavy synthetic tests it will also fall short. It looks like the new wave in Windows-based testing is DiskSpd. See:
    DiskSPD
    http://blogs.technet.com/b/josebda/archive/2014/10/13/diskspd-powershell-and-storage-performance-measuring-iops-throughput-and-latency-for-both-local-disks-and-smb-file-shares.aspx
    It DOES generate fully random I/O patterns on request.
    Good luck!
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • A query on a small benchmark test.

    I ran a small code snippet from a book I am reading and was surprised by the results. The code calls an empty method and also a method with a variable assignment as the body. I figured the empty method call should have been quicker, but it is consistently the other way round. Here is the code I ran:
    abstract class Benchmark {
        abstract void benchmark();

        public final long repeat(int count) {
            long start = System.nanoTime();
            for (int i = 0; i < count; i++)
                benchmark();
            return System.nanoTime() - start;
        }
    }

    class MethodBenchmark extends Benchmark {
        void benchmark() {
            int x = 2+2+2+2+2+2+2;
        }
        //899543099 as above
        //923825235 empty body
        //925611216 empty body
        //884589242 as above

        public static void main(String[] args) {
            int count = 100000000;
            long time = new MethodBenchmark().repeat(count);
            System.out.println(count + " methods in " +
                               time + " nanoseconds");
        }
    }
    What is the reason for this?
    By the way, this is not a "Java is broken" thread; I am sure there is a logical answer to this.

    I just came across this on Google.
    http://java.sun.com/docs/hotspot/HotSpotFAQ.html#benchmarking_method
    Here is an extract
    Code is generated into memory and executed from there. The way the code is laid out in memory makes a big difference in the way it executes. In this example on my machine, the loop that claims to call the method is better aligned and so runs faster than the loop that's trying to figure out how long it takes to run an empty loop, so I get negative numbers for methodTime-loopTime.
    Which would mean I was running the same code each time: the compiler would have ignored the body because it was dead, so I was essentially running an empty method body both times. But why the change?
    But it also says this
    The HotSpot compiler is smart enough not to generate code for dead variables.
    In the method above, the local variable is never used, so there's no reason to compute its value. So then the method body is empty again and when the code gets compiled (and inlined, because we removed enough code to make it small enough for inlining) it turns into an empty method again.
    I guess an inlined empty method in memory must be quicker than a method that was empty from the beginning.
    Message was edited by:
    helloWorld

  • Haswell E benchmarks

    Hello all. It is once again time for some benchmarks with the new release. I will also include the previous X79 4930K clocked at 4.4GHz and the dual 10-core Xeon as reference points. I will add results as I get them done. Please let me know if you have any questions.
    5960X @ 3.5GHz
    64GB DDR4 2400
    1TB Crucial M550 SSD
    780Ti
    AVCHD 4 Layer 30 Minute test
    Adobe CC2014
    3 Layer - 14:35 (Match Source selected)
    3 Layer - 28:57 (HDTV 1080P 23.976)
    4 Layer – 16:01 (Match Source selected)
    4 Layer - 31:37 (HDTV 1080P 23.976)
    Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media
    3 Layer - 2:08
    4 layer - 2:08
    Red 5K 98 Frame to DPX 5K 23.976 slow motion Frame Full Range (max bit depth) 30 seconds of media
    1 Layer - 2:12
    Red 6K to DPX 6K (max bit depth) 20 seconds of media
    1 Layer - 1:31
    Red 4K to H264 4K 30 seconds of media
    4 layer - :50 (Match Sequence H264)
    DNG 2.4K to H264 2.4K 26 seconds of media
    1 Layer - :15
    AE CC 2014
    Red 4K to AVI Lossless 4k 30 seconds of media
    1 Layer: 2:19
    5960X @ 4.5GHz
    64GB DDR4 2400
    1TB Crucial M550 SSD
    780Ti
    AVCHD 4 Layer 30 Minute test
    Adobe CC2014
    3 Layer - 11:36 (Match Source selected)
    3 Layer - 22:54 (HDTV 1080P 23.976)
    4 Layer – 12:48 (Match Source selected)
    4 Layer - 24:58 (HDTV 1080P 23.976)
    Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media
    3 Layer - 1:54
    4 layer - 1:58
    Red 5K 98 Frame to DPX 5K 24 Frame slow motion Frame Full Range (max bit depth) 30 seconds of media
    1 Layer - 1:58
    Red 5K 98 Frame to DNxHD 1080 23.978 36 OP1A Frame 30 seconds of media
    1 Layer - :12
    Red 5K 98 Frame to DNxHD 440X 1080P 60 frame OP1A Frame 30 seconds of media
    1 Layer - :14
    Red 6K to DPX 6K (max bit depth) 20 seconds of media
    1 Layer - 1:21
    Red 4K to H264 4K 30 seconds of media
    4 layer - :49 (Match Sequence H264)
    DNG 2.4K to H264 2.4K 26 seconds of media
    1 Layer - :13
    AE CC 2014
    Red 4K to AVI Lossless 4k 30 seconds of media
    1 Layer: 1:59
    The playback and export performance with CC 2014 is now relatively consistent. CPU threading was across all 16 threads, both on playback and on export. The GPU load consistently pushed up to 90-98% when the benchmark tests included GPU-accelerated plugins and scaling of multiple layers. The overall efficiency is far better, which is why I didn't put notes after each test.
    The 8-core clocked at both 3.5GHz and 4.5GHz played back 4K, 5K 98-frame (both 24- and 60-frame playback), and even 6K at full resolution without dropping frames. The 5K playback was smooth regardless of slow-motion or full-motion preview setup. The increased bandwidth and speed of the RAM is definitely having an impact there. RAM usage was as high as 30GB in Premiere during testing, but AE went well over 46GB on export. GPU RAM usage pushed 2.5GB on the 3GB card with 4K+ media in Premiere, but normally used around 1GB for 1080.
    I also included some requested DNxHD OP1A exports from 5K media as a comparison of media length to encoding time for offline work. I will be testing the 6-core 5930K after I do some testing with the RAM at stock 2133.
    Eric
    ADK

    Reference benchmarks:
    4930K @ 4.4GHz
    64GB DDR3 1600
    1TB Crucial M550 SSD
    780Ti
    AVCHD 4 Layer 30 Minute test
    3 Layer - 16:33 (Match Source selected)
    3 Layer - 25:32 (HDTV 1080P 23.976)
    4 Layer - 28:04 (HDTV 1080P 23.976)
    4 Layer – 18:58 (Match Source selected)
    Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media
    3 Layer - 2:05
    4 layer - 2:06
    Realtime Playback 4K Full Res smooth without dropping frames
    CPU threaded well for export after clearing cache by switching from Hardware MPE to Software MPE and back again before linking to AME. Realtime Playback threaded ideally.
    Average GPU load 35%
    Red 6K to DPX 6K (max bit depth) 20 seconds of media
    1 Layer - 1:43
    Realtime Playback 6K Full Res smooth without dropping frames.
    CPU threaded well for export after clearing cache by switching from Hardware MPE to Software MPE and back again before linking to AME. Realtime Playback threaded ideally.
    Average GPU load 15%
    Red 4K to H264 4K 30 seconds of media
    4 layer - :52 (Match Sequence H264)
    CPU Threads very well on export. GPU load peaking at 99% consistently.
    DNG 2.4K to H264 2.4K 26 seconds of media
    1 Layer - :21
    CPU threads very well on export.
    Red 4K to Cineform 4K Film Scan 1 30 seconds of media
    4 Layer - 5:44
    CPU Threads poorly
    AE CC 2014
    Red 4K to AVI Lossless 4k 30 seconds of media
    1 Layer: 2:51
    CPU Threads very well on Export. Ram Preview used 45GB of ram 3x at full res
    2x Xeon E5 2690 V2 CPU's @ 3GHz
    128GB DDR3 1600
    1TB Crucial M550 SSD
    780Ti
    AVCHD 4 Layer 30 Minute test
    3 Layer - 29:29
    4 Layer – 31:21
    Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media
    3 Layer - 3:20
    4 layer - 3:20
    Realtime Playback 4K Full Res smooth without dropping frames.
    Poor CPU threading on export, but not on playback
    Average GPU load 50%
    Max GPU load 99%
    Red 6K to DPX 6K (max bit depth) 20 seconds of media
    1 Layer - 2:29
    Realtime Playback 6K Full Res smooth without dropping frames.
    Poor CPU threading on export, but not on playback
    Average GPU load 40%
    Max GPU load 99%
    2x 780Ti GPU's
    AVCHD 4 Layer 30 Minute test
    3 Layer - 29:59
    4 Layer – 32:05
    Red 4K to DPX 4096 x 2048 24p Full Range (max bit depth) 30 seconds of media
    3 Layer - 3:39
    4 layer - 3:30
    Red 6K to DPX 6K (max bit depth) 20 seconds of media
    1 Layer - 2:51
    Red 4K to Pro Res 4444 via Cinemartin PLIN Gold 30 seconds of media
    4 layer - 3:40, plus 1 to 2 min render time in Premiere
    2x Xeon E5 2690 V2 CPU's @ 3GHz
    128GB DDR3 1600
    1TB Crucial M550 SSD
    780Ti
    DNG 2.4K to H264 2.4K 26 seconds of media
    1 Layer - :19
    CPU threads very well on export.
    Red 4K to H264 4K 30 seconds of media
    4 layer - 1:14
    CPU Threads very well on export
    Red 4K to Cineform 4K Film Scan 1 30 seconds of media
    4 Layer - 5:55
    CPU Threads poorly
    AE CC 2014
    Red 4K to AVI Lossless 4k 30 seconds of media
    1 Layer: 2:32
    CPU Threads very well on Export. Ram Preview used 99GB of ram at full res
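    The timings above mix mm:ss notation ("12:48") with bare seconds (":12"). A small helper (hypothetical, just to make the "media timeframe to encoding time" comparison concrete) can convert them to seconds and compute a realtime factor; for example, 30 seconds of Red 5K media exporting to DNxHD 36 OP1A in :12 is a 2.5x-realtime encode:

```python
def parse_mmss(t):
    """Parse the thread's timing notation: ':12' -> 12 s, '1:58' -> 118 s, '24:58' -> 1498 s."""
    minutes, seconds = t.split(":")
    return int(minutes or 0) * 60 + int(seconds)

def realtime_factor(media_seconds, encode_time):
    """Seconds of source media encoded per second of wall-clock time."""
    return media_seconds / parse_mmss(encode_time)

# 30 s of Red 5K media to DNxHD 36 OP1A in :12 -> 2.5x realtime
print(realtime_factor(30, ":12"))
```

    A factor above 1.0 means the export ran faster than realtime; the 30-minute AVCHD tests can be compared the same way (1800 seconds of media against the mm:ss export times).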

  • Filesystems benchmarked: EXT3 vs EXT4 vs XFS vs BTRFS

    I wandered across this fine article this morning, and thought I would share it with the community.
    Quote:
    Let's start from the most obvious: the best balanced filesystem seems to be the mature, almost aging EXT3. This is natural, as it received the most cumulative improvements over a long period of time. It has very good sequential and random write speeds and reasonable read speed, factors that are of utmost importance in several different tasks. For example, if you plan to run a database server you are almost forced to use EXT3, as all other filesystems seem to have big problems with synchronized random write speed. Also, you can't go wrong with EXT3 if you use it on your workstation, as its performance is quite good across a great number of different jobs. Finally, EXT3 is more stable than the other filesystems, as most of its bugs have by now been worked out.
    However, this does not mean that EXT3 is the perfect FS: first, it lacks some important features such as delayed allocation and online compression. It also lacks native snapshot capability, but you can use LVM to overcome this. It is more fragmentation-prone than EXT4 and XFS, and it is very slow at creating/deleting large numbers of files, denoting not-so-good metadata handling. Moreover, it uses more CPU cycles than EXT4 and XFS, but with today's CPUs I don't think this is a great problem. If you can live with these minor faults, EXT3 is the right filesystem for you.
    Please don't just read that one paragraph though; they have ten pages' worth of detailed and varied benchmarks they used to form that opinion. And the article is dated from the middle of last month, so it's nice and recent.
    Interesting stuff. I thought that ext4 would do better (not that it did poorly, but relative to ext3), and that btrfs wouldn't be as slow as it currently seems, though as the tester commented, it's a very new filesystem. Maybe Arch should ship btrfs as an install option? Help these guys iron out the bugs!
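    For anyone who wants to reproduce a slice of those results on their own box, here is a rough sketch of the kind of test the article ran: fsync'd sequential writes timed in Python. This is my own quick hack, not the article's methodology; serious comparisons should use fio or bonnie++ with direct I/O and much larger files.

```python
import os
import tempfile
import time

def seq_write_mbps(size_mb=64, block_kb=64, directory=None):
    """Time fsync'd sequential writes and return throughput in MB/s.
    Small sizes mostly measure the page cache; use multi-GB files
    (or fio with direct=1) for numbers that reflect the disk itself."""
    block = os.urandom(block_kb * 1024)
    nblocks = size_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        t0 = time.perf_counter()
        for _ in range(nblocks):
            os.write(fd, block)
        os.fsync(fd)  # flush to stable storage before stopping the clock
        elapsed = time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(path)
    return size_mb / elapsed
```

    Point `directory` at a mount of each filesystem under test and compare the returned MB/s; random-write and metadata (create/delete) tests would need separate loops.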

    fukawi2 wrote:
    Misfit138 wrote: Dodge RAM 2500 Cummins Turbodiesel FTW.
    F650 FTW
    Well if you go there, then I have to pull out my Chevy Kodiak Pickup.
    Last edited by Misfit138 (2010-12-04 02:13:59)

  • Why is my FFT benchmark VI not seeing any multithreading gains?

    I am trying to utilize a multi-core CPU to speed up the computation of 32 FFTs by running them in four parallel threads, as shown in the code example image below. However, the performance difference between single and multi-threading is only ca. 10% even on a Core 2 Quad CPU.
    I already tried a few things such as placing the array split and merge functions, or the waveform graph, outside the timed section, but this has very little effect - the main delay still occurs with the FFT VIs. These VIs are already set to reentrant execution, but somehow still don't perform well in parallel. Why?
    Can someone demonstrate a better performance gain in a similar VI? I am using LabVIEW 7.1; using images instead of a VI in replies would be greatly appreciated!
    Thanks!
    Solved!
    Go to Solution.

    Hello,
    Thanks for your response. Actually, I found the solution. Rather than doing internal multi-threading, the Express VI did just the opposite: it internally broke the multithreading ability by including several sub-VIs which were not reentrant. That means the overall Spectral Analysis Express VI is not reentrant either, and will not properly accelerate on a multi-core CPU.
    My solution was to dig down into the Express VI until I found the most basic VI levels (DLL function calls etc.), which actually were fully re-entrant. By extracting these, and saving just this essential code as a new, fully reentrant sub-VI, I was able to unlock the full multi-core potential. My FFT benchmark VI now runs 5x faster, simply by replacing the Express VI with the stripped-down FFT VI of my own.
    As a courtesy, I am attaching my new, 5x faster Multi-Core FFT VI.
    It scales as follows on an Intel Core 2 Quad CPU:
    Labview Spectral Analysis Express VI (single or multiple instances): 1x Speed
    Multi-Core FFT VI (single instance): 2.3x to 2.4x Faster
    Multi-Core FFT VI (dual instance): 3.7x to 4.0x Faster
    Multi-Core FFT VI (quad instance): 4.8x to 6.1x Faster
    Multi-Core FFT VI (octo instance): 4.8x to 6.1x Faster (would probably need an 8-core to see benefit)
    Here are the internals of the stripped-down Multi-Core FFT VI:
    Attachments:
    Multi-Core FFT.vi ‏156 KB
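    The reentrancy fix above has a direct analogue outside LabVIEW: independent FFTs only scale when each parallel instance holds no shared, locked state. As a rough illustration (Python/NumPy rather than LabVIEW, and the actual speedup depends on the FFT implementation releasing the GIL while it computes), the benchmark's 32 transforms can be farmed out to a thread pool:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_ffts(signals, workers=4):
    """Run one FFT per task. NumPy's FFT does its heavy lifting in
    compiled code, so independent transforms can overlap on a
    multi-core CPU -- the same property the stripped-down reentrant
    sub-VI restores in the LabVIEW version."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(np.fft.fft, signals))

signals = [np.random.rand(4096) for _ in range(32)]  # 32 FFTs, as in the benchmark VI
spectra = parallel_ffts(signals)
```

    Correctness is easy to check: each parallel result must match the serial `np.fft.fft` of the same signal, regardless of how many workers ran.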
