Iterating performance: ArrayList, LinkedList, and HashMap for N = 1,000,000

I have seen some websites discussing the performance of ArrayList, LinkedList, and HashMap:
http://forum.java.sun.com/thread.jspa?threadID=442985
http://java.sun.com/developer/JDCTechTips/2002/tt0910.html
http://www.javaspecialists.co.za/archive/Issue111.html
http://joust.kano.net/weblog/archives/000066.html
If I understand it right, an ArrayList is generally faster for accessing a particular element in the collection, and I can't find many advantages of using a LinkedList.
My question is: if I only use a large collection with more than 1 million elements for iterating from beginning to end (i.e. in a for loop), is it faster to use a LinkedList or a custom linked list instead of an ArrayList (or HashMap)? After all, you can iterate "directly" through a (custom) linked list by following the node references, which is not possible with an ArrayList.
Edited by: 9squared on Nov 23, 2007 1:48 PM

Thanks for the help. I wrote some code and tested it:

import java.util.ArrayList;
import java.util.List;

public class TestTemp {
    public static void main(String[] args) {
        List<Node> a = new ArrayList<Node>();
        Node b = new Node("a");
        String[] c = new String[10000000];
        Node temp = b;
        // build the ArrayList, the custom linked list, and the plain array
        for (int i = 0; i < 10000000; i++) {
            a.add(new Node("a"));
            temp.next = new Node("b");
            temp = temp.next;
            c[i] = "c";
        }

        long tstart;

        // plain array
        tstart = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            c[i] = "cc";
            if (i % 200000 == 0)
                System.out.println("Array " + i + ": " + (System.currentTimeMillis() - tstart));
        }

        // custom linked list, followed node by node
        tstart = System.currentTimeMillis();
        temp = b;
        for (int i = 0; i < 10000000; i++) {
            temp.next.text = "bb";
            temp = temp.next;
            if (i % 200000 == 0)
                System.out.println("LinkedList " + i + ": " + (System.currentTimeMillis() - tstart));
        }

        // ArrayList, accessed by index
        tstart = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            a.get(i).text = "aa";
            if (i % 200000 == 0)
                System.out.println("ArrayList " + i + ": " + (System.currentTimeMillis() - tstart));
        }
    }
}

class Node {
    public String text;
    public Node next;

    public Node(String text) {
        this.text = text;
    }
}

Here are some results in milliseconds; indeed, just iterating doesn't take very long.
Elements     Linked     ArrayList     Array
200000     0     0     1
400000     5     13     5
600000     9     22     9
800000     14     32     12
1000000     20     42     16
1200000     25     52     19
1400000     31     63     23
1600000     37     72     26
1800000     42     82     30
2000000     47     92     33
2200000     51     101     37
2400000     56     112     40
2600000     60     123     44
2800000     65     134     47
3000000     69     143     51
3200000     73     152     55
3400000     78     162     59
3600000     84     175     63
3800000     103     185     67
4000000     108     195     70
4200000     113     207     74
4400000     117     216     78
4600000     122     225     81
4800000     127     237     85
5000000     131     247     88
5200000     136     256     92
5400000     142     266     97
5600000     147     275     101
5800000     153     286     107
6000000     159     298     113
6200000     162     307     117
6400000     167     317     121
6600000     171     326     125
6800000     175     335     128
7000000     180     346     132
7200000     184     358     136
7400000     188     368     139
7600000     193     377     143
7800000     197     388     147
8000000     201     397     150
8200000     207     410     154
8400000     212     423     157
8600000     217     432     162
8800000     222     442     167
9000000     227     452     171
9200000     231     462     175
9400000     236     473     178
9600000     242     483     182
9800000     249     495     185
10000000     257     505     189
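
For comparison, here is a minimal sketch (not part of the original test; names and sizes are illustrative) that times iteration with the enhanced for loop, which drives the list's Iterator. For an ArrayList this performs about the same as indexed access; for java.util.LinkedList it avoids the O(n) cost of each get(i).

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class IterateSketch {
    public static void main(String[] args) {
        int n = 1000000;
        List<String> arrayList = new ArrayList<String>();
        List<String> linkedList = new LinkedList<String>();
        for (int i = 0; i < n; i++) {
            arrayList.add("a");
            linkedList.add("a");
        }
        time("ArrayList (for-each)", arrayList);
        time("LinkedList (for-each)", linkedList);
    }

    // The enhanced for loop uses the list's Iterator, so both lists are
    // traversed in O(n) total; an indexed get(i) loop over a LinkedList would be O(n^2).
    static void time(String label, List<String> list) {
        long start = System.currentTimeMillis();
        int count = 0;
        for (String s : list) {
            if (s != null) count++;   // touch each element so the loop does real work
        }
        System.out.println(label + ": " + (System.currentTimeMillis() - start) + " ms, count=" + count);
    }
}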

Similar Messages

  • Known performance issue bugs and patches for R12.1.3

    Hi Team,
    We have upgraded Oracle Applications from 12.1.1 to 12.1.3.
    I want to apply the patches for known performance issues in R12.1.3.
    Please let me know of any details.
    Thanks,

    Are you currently facing any performance issues on 12.1.3?
    Tuning All Layers Of E-Business Suite – Performance Topics
    http://www.oracle.com/technetwork/apps-tech/collab2011-tuning-ebusiness-421966.pdf
    • Start with Best Practices : (note: 1121043.1)
    • SQL Tuning
    – Trace files
    – SQLT output (note: 215187.1)
    – Trace Analyzer (note: 224270.1)
    – AWR Report (note: 748642.1)
    – AWR SQL Report (awrsqrpt.sql)
    – 11g SQL Monitoring
    – SQL Tuning Advisor
    • PL/SQL Tuning
    – Product logs
    – PL/SQL Profiler (note: 808005.1)
    • Reports Tracing
    – note: 111311.1
    • Database Tuning
    – AWR Report (note: 748642.1)
    – ADDM report (note: 250655.1)
    – Automated Session History (ASH) Report
    – LTOM output (note: 352363.1)
    • Forms Tuning
    • Forms Tracing (note: 373548.1)
    • FRD Log (note: 445166.1)
    – Generic note: 438652.1
    • Middletier Tuning
    – JVM Logs
    – JVM Sizing (note: 362851.1)
    – JDBC Tuning (note: 278868.1)
    • OS
    – OSWatcher (note: 301137.1)

  • Performance of JSF and HashMap

    Using a Java profiler I've noticed that, with a large number of components on the page, the time spent in HashMap.put calls plays a significant role.
    Is there a way to control the initial capacity of hash maps in the Sun RI?
    Does the Sun RI try to set the initial capacity and load factor in the code?
    Per Java 5 API docs:
    An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the capacity is roughly doubled by calling the rehash method.
    It would be nice if the JSF RI could guess a more appropriate capacity and load factor to minimize the performance impact of extensive HashMap.put() calls.
    Thanks

    This is an example of what I was talking about in the other thread. I was saying that if you use an unusual page as the basis for your performance tests, you will end up optimizing for the extreme case at the expense of the normal case. If changes are made in this area care must be taken not to cater to the extreme at the expense of the usual.
    If the implementation starts with a higher capacity in the map initially, that means more memory is being used. If the capacity is optimized for pages with large numbers of components, the memory is wasted.
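
    To illustrate the quoted documentation, here is a minimal sketch (the entry count and names are assumptions, not taken from the RI) of presizing a HashMap so that a burst of put() calls never triggers a rehash:

    import java.util.HashMap;
    import java.util.Map;

    public class CapacitySketch {
        public static void main(String[] args) {
            // With the default load factor of 0.75, the table is resized once
            // size exceeds capacity * loadFactor, so size the map up front.
            int expectedEntries = 500;                                  // assumed workload
            int initialCapacity = (int) (expectedEntries / 0.75f) + 1;  // stays below the resize threshold
            Map<String, Object> attributes = new HashMap<String, Object>(initialCapacity, 0.75f);
            for (int i = 0; i < expectedEntries; i++) {
                attributes.put("component" + i, new Object());          // no rehash during these puts
            }
        }
    }

    The trade-off described above still applies: a larger initial capacity spends memory on every page to save time only on component-heavy ones.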

  • How to Manually Perform software Updates and tips for making Macbook better

    Hi. So recently my internet was really slow. I figured the problem out and now it is working well, although the YouTube site does not let me view entire videos.
    Well, I was told that since my MacBook is turned off at night, it can't perform software updates between 3am and 6am. So how can I do these manually? Or how can I change the update time to sometime in the evening?
    I heard these software updates are supposed to make the MacBook run and perform better. What other things can I do to accomplish that?
    (Layman's terms please, I'm new to the Mac world.)

    Select Software Update from the Apple menu. Or download the standalone updaters from Apple's download site - http://www.apple.com/support/downloads/. Or simply set the Check for Updates option in Software Update to whatever frequency you prefer. SU will notify you when updates are available.
    Why reward points? (Quoted from Discussions Terms of Use.)
    The reward system helps to increase community participation. When a community member gives you (or another member) a reward for providing helpful advice or a solution to their question, your accumulated points will increase your status level within the community.
    Members may reward you with 5 points if they deem that your reply is helpful and 10 points if you post a solution to their issue. Likewise, when you mark a reply as Helpful or Solved in your own created topic, you will be awarding the respondent with the same point values.

  • JAXB 2.0, XMLAdaptor and HashMap for customized mapping

    I have a requirement to implement a custom mapping of HashMap in JAXB. Basically I want to be able to use a HashMap structure to represent key/value pairs instead of the default List. So I need to be able to store (marshal) a HashMap structure into XML, and then be able to read (unmarshal) the XML back into a HashMap.
    I've discovered the XmlAdapter class and I think this is the way to do it:
    https://jaxb.dev.java.net/nonav/jaxb20-pr/api/javax/xml/bind/annotation/adapters/XmlAdapter.html
    However, my knowledge of generics and annotations in Java 5.0, and of XML schemas, is somewhat elementary, so I'm a bit stuck.
    If someone has a complete working example of the HashMap example they show in the API doc linked above, I would greatly appreciate it if you could post it here.
    Otherwise, perhaps you could answer a couple of questions for me.
    1) I tried using the XSD directly from the example in the API doc (Step 2).
         <xs:complexType name="myHashMapType">
           <xs:sequence>
             <xs:element name="entry" type="myHashMapEntryType"
                            maxOccurs="unbounded"/>
           </xs:sequence>
         </xs:complexType>
         <xs:complexType name="myHashMapEntryType">
           <xs:simpleContent>
             <xs:extension base="xs:string"/>
           </xs:simpleContent>
           <xs:attribute name="key" type="xs:int"/>
         </xs:complexType>

    xjc complains when I generate the classes and says that the 'xs:attribute name="key"' shouldn't be there. If I move it up into the 'xs:extension' node it works. But then I get strange XML when I create some key/value pairs and marshal them. The first XML node looks okay, but the rest of the list items start with '<xsi:entry ...', e.g.
    <entry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="myHashMapEntryType" key="key0">value0</entry>
    <xsi:entry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="myHashMapEntryType" key="key1">value1</xsi:entry>
    ...

    So my first question is: what is the proper XSD to represent a HashMap with XML like the following:
         <hashmap>
             <entry key="id123">this is a value</entry>
             <entry key="id312">this is another value</entry>
          </hashmap>

    2) Now for the HashMap part of it: you need to create a class that extends XmlAdapter<MyHashMapType, HashMap> with marshal and unmarshal methods. This is where I get a little fuzzy on exactly what these methods need to do. The example in the API doc doesn't show this.
    3) And finally; once I have all this in place, how do I actually marshal and unmarshal, presumably using the value types that I created instead of the value types that were auto generated by xjc? At least I think that's what I'm supposed to do.
    Thanks.

    Download the javaeetutorial5
    Under this path is a working example.
    \examples\jaxb\j2s-xmlAdapter-field
    I've tried following the bad example you cited, and even with a great deal of work it still fails to work.
    I think the example will help clarify how the marshal and unmarshal methods are to be implemented. What would be really nice is for whoever wrote the example you cited to finish the job, instead of expecting the reader of the documentation to figure it out.
    I fail to understand why JAXB does not directly support a collection as prevalent as HashMap. I also don't know why it can't support the Map interface, mapping to a HashMap by default.
    Best of luck to you,
    Robert
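
    For readers of this thread, here is a rough, untested sketch of the adapter approach discussed above. The class and field names are assumptions loosely following the API doc's example; the adapter still has to be registered on the map field with @XmlJavaTypeAdapter(MyHashMapAdapter.class).

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlValue;
    import javax.xml.bind.annotation.adapters.XmlAdapter;

    // XML form: <hashmap><entry key="id123">this is a value</entry>...</hashmap>
    class MyHashMapType {
        @XmlElement(name = "entry")
        public List<MyHashMapEntryType> entries = new ArrayList<MyHashMapEntryType>();
    }

    class MyHashMapEntryType {
        @XmlAttribute
        public String key;
        @XmlValue
        public String value;
    }

    // XmlAdapter<ValueType, BoundType>: converts between the XML-friendly
    // MyHashMapType and the HashMap you actually want to work with.
    class MyHashMapAdapter extends XmlAdapter<MyHashMapType, Map<String, String>> {
        @Override
        public MyHashMapType marshal(Map<String, String> map) {
            MyHashMapType xml = new MyHashMapType();
            for (Map.Entry<String, String> e : map.entrySet()) {
                MyHashMapEntryType entry = new MyHashMapEntryType();
                entry.key = e.getKey();
                entry.value = e.getValue();
                xml.entries.add(entry);
            }
            return xml;
        }

        @Override
        public Map<String, String> unmarshal(MyHashMapType xml) {
            Map<String, String> map = new HashMap<String, String>();
            for (MyHashMapEntryType entry : xml.entries) {
                map.put(entry.key, entry.value);
            }
            return map;
        }
    }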

  • LinkedList and ArrayList  Iterator speeds

    Is an ArrayList Iterator hugely faster than a LinkedList Iterator?
    For storing my 8000 data objects I have swapped from an ArrayList to a LinkedList, and all seems fine except that the Iterator I get is much slower (more than 10 times slower).
    I have scoured my code, but I have come to the conclusion that a LinkedList is just slow.
    I'd just like one of you to confirm my belief.

    How odd. According to Bruce Eckel's Thinking in Java [p. 505] (downloadable for free at http://www.mindview.net/Books/TIJ/ ), LinkedList should be faster than ArrayList for iteration, insertion, and removal, and only slower for getting by index. I haven't done any tests of my own on it, but it makes sense. I don't know why your LinkedList iterators would be slower.

    Yes, LinkedList will be faster at inserts/deletes and slower at indexed lookups. Keep in mind that when you say myLinkedList.get(7000), the list has to walk through 6999 nodes to retrieve the 7000th object. When you do the same thing on an ArrayList it just goes directly to that index and returns it.
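
    To make the distinction concrete, here is a small sketch (hypothetical sizes, not from the original poster's code) contrasting Iterator-based traversal with indexed get(i) on a LinkedList:

    import java.util.LinkedList;
    import java.util.List;

    public class LinkedListIterationSketch {
        public static void main(String[] args) {
            List<Integer> list = new LinkedList<Integer>();
            for (int i = 0; i < 8000; i++) {
                list.add(i);
            }

            // Iterator (via the enhanced for loop): one step per element, O(n) total.
            long start = System.currentTimeMillis();
            long sum = 0;
            for (int value : list) {
                sum += value;
            }
            System.out.println("iterator: " + (System.currentTimeMillis() - start) + " ms, sum=" + sum);

            // Indexed get(i): every call walks node by node from an end of the list, O(n^2) total.
            start = System.currentTimeMillis();
            sum = 0;
            for (int i = 0; i < list.size(); i++) {
                sum += list.get(i);
            }
            System.out.println("get(i):   " + (System.currentTimeMillis() - start) + " ms, sum=" + sum);
        }
    }

    If the "iterator" that turned out to be 10 times slower was really an indexed loop like the second one, that alone would explain the difference.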

  • Using a hashmap for caching -- performance problems

    Hello!
    1) DESCRIPTION OF MY PROBLEM:
    I was planning to speed up computations in my algorithm by using a caching mechanism based on a Java HashMap, but it does not work; in fact the performance decreased instead.
    My task is to compute conditional probabilities P(result | event), given an event (e.g. "rainy day") and a result of a measurement (e.g. "degree of humidity").
    2) SITUATION AS AN EXCERPT OF MY CODE:
    Here is an abstraction of my code:
    ====================================================================================================================================
    int numOfEvents = 343;
    // initialize the cache for precomputed probabilities
    HashMap<String,Map<String,Double>> precomputedProbabilities = new HashMap<String,Map<String,Double>>(numOfEvents);

    // Given a combination of an event and a result, test if the conditional probability has already been computed
    if (this.precomputedProbabilities.containsKey(eventID)) {
        if (this.precomputedProbabilities.get(eventID).containsKey(result)) {
            return this.precomputedProbabilities.get(eventID).get(result);
        }
    } else {
        // initialize a new hashmap to maintain the mappings for this event
        Map<String,Double> resultProbs4event = new HashMap<String,Double>();
        precomputedProbabilities.put(eventID, resultProbs4event);
    }
    // unless we could use the above short-cut via the cache, we have to really compute the
    // conditional probability for the specific combination of the event and result:
    // ... make the necessary computations to compute the variable "condProb" ...
    // store it in the map
    precomputedProbabilities.get(eventID).put(result, condProb);
    ====================================================================================================================================
    3) FINAL COMMENTS
    After introducing this cache mechanism I encountered a severe decrease in the performance of my algorithm. In total there are over 94 million combinations for which the conditional probabilities have to be computed, but there is a lot of redundancy in this set of feasible combinations; it can be brought down to only about 260,000 distinct combinations that have to be captured in the caching structure. Therefore I expected a significant performance increase.
    What am I doing wrong? Or is the overhead of a nested HashMap really so severe? The computation of the conditional probabilities only involves basic operations.
    Only for those who are interested in more details
    4) DEEPER CONSIDERATION OF THE PROCEDURE
    Each defined event stores a list of associated results. These results lists contain 7 items on average. To actually compute the conditional probability for a combination of an event and a result, I have to run through the results list of this event and perform a Java "equals" operation for each list item to compute the relative frequency of the result item at hand. So, without caching, I estimate on average:
    7 "equals" operations (to count the occurrences of the result item in a list of 7 items on average)
    plus
    1 double division (to compute a relative frequency)
    for 94 million combinations.
    Considering the computation for one combination (event, result), this means comparing the overhead of the lookup operations in the nested HashMap with the computational cost of 7 "equals" operations plus one double division.
    I would have expected the lookups to be less expensive.
    Best regards!
    Edited by: Coding_But_Still_Alive on Sep 10, 2008 7:01 AM

    Hello!
    Thank you very much! I have performed several optimization steps, but caching is still slower than not caching. This may be due to the fact that the eventIDs and results all share long common prefixes; I am not sure how this affects the computation of the hash values.
    // Attention: result and eventID are given as input of the method
    Map<String,Map<String,Double>> precomputedProbs = new HashMap<String,Map<String,Double>>(1200);

    HashMap<String,Double> results2Probs = (HashMap<String,Double>) this.precomputedProbs.get(eventID);
    if (results2Probs != null) {
        Double prob = results2Probs.get(result);
        if (prob != null) {
            return prob;
        }
    } else {
        // so far there are no conditional probs for the annotated results of this event,
        // so initialize a new hashmap to maintain the mappings
        results2Probs = new HashMap<String,Double>(2000);
        precomputedProbs.put(eventID, results2Probs);
    }
    // Later, in the computation of the conditional probability, use the initialized map to save
    // one "get" operation on "precomputedProbs": the variable results2Probs still holds a reference
    // to the inner HashMap<String,Double> entry of "precomputedProbs"
    results2Probs.put(result, condProb);

    And... because it was asked for, here is the computation of the conditional probability in detail:
    // Attention: result and eventID are given as input of the method
    // the computed conditional probability
    double condProb = -1.0;
    int occurrence_count = 0;
    ArrayList<String> resultsList = (ArrayList<String>) this.eventID2resultsList.get(eventID);
    if (resultsList != null) {
        // listSize is expected to be about 7 on average
        int listSize = resultsList.size();
        // sanity check
        if (listSize > 0) {
            // check if the given result is defined in the list of defined results
            if (this.definedResults.containsKey(result)) {
                // analyze the list for matching results
                for (int i = 0; i < listSize; i++) {
                    if (result.equals(resultsList.get(i))) {
                        occurrence_count++;
                    }
                }
                if (occurrence_count == 0) {
                    condProb = 0.0;
                } else {
                    condProb = ((double) occurrence_count) / ((double) listSize);
                }
                // store the conditional prob. for the specific event/result combination;
                // results2Probs still holds a reference to the inner HashMap<String,Double> entry of "precomputedProbs"
                results2Probs.put(result, condProb);
                return condProb;
            } else {
                // mark that the result is not part of the list of defined results
                return -1.0;
            }
        } else {
            throw new NullPointerException("Unexpected happening. Event " + eventID + " contains no result definitions.");
        }
    } else {
        throw new IllegalArgumentException("unknown event ID: " + eventID);
    }

    I have performed tests on a reduced input data set: I processed only 100,000 result instances instead of about 250,000. This means there are 343 * 100K = 34,300K, i.e. roughly 34 million, calls of my method per iteration of the algorithm. I performed 20 iterations. With caching it took 8 min 5 sec, without caching only 7 min 5 sec.
    I also tried to profile the lookup operations on the HashMaps, but they took less than a millisecond, the same as the operations for computing the conditional probability. So there was no additional insight from comparing the times at the millisecond level.
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:22 AM
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:24 AM
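
    As a side note that is not from the thread: one way to avoid the nested HashMap entirely is a single map keyed by an (eventID, result) pair whose hash is computed once per key. A minimal sketch:

    import java.util.HashMap;
    import java.util.Map;

    // Composite key for one (eventID, result) combination.
    final class EventResultKey {
        private final String eventId;
        private final String result;
        private final int hash;

        EventResultKey(String eventId, String result) {
            this.eventId = eventId;
            this.result = result;
            this.hash = 31 * eventId.hashCode() + result.hashCode();  // computed once per key object
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof EventResultKey)) return false;
            EventResultKey k = (EventResultKey) o;
            return eventId.equals(k.eventId) && result.equals(k.result);
        }

        @Override
        public int hashCode() {
            return hash;
        }
    }

    class ProbabilityCache {
        // About 260,000 distinct combinations were mentioned above, so presize accordingly.
        private final Map<EventResultKey, Double> cache = new HashMap<EventResultKey, Double>(400000);

        Double get(String eventId, String result) {
            return cache.get(new EventResultKey(eventId, result));
        }

        void put(String eventId, String result, double condProb) {
            cache.put(new EventResultKey(eventId, result), condProb);
        }
    }

    This replaces two map lookups per query with one, at the cost of allocating a small key object per lookup.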

  • Difference between ArrayList class and LinkedList class

    I would like to ask: what is the difference between the ArrayList class and the LinkedList class? If I use the ArrayList class, can I have a fixed size?
    Thank you for your answers.

    I often use LinkedList when I need a FIFO. When all you're doing is adding to the end of a list and removing the first element, it is definitely faster. Here's a quick SSCCE I wrote up to demonstrate:

    import java.util.*;

    public class ListTest {
        public static void main(String[] args) {
            LinkedList<Integer> linked = new LinkedList<Integer>();
            ArrayList<Integer> array = new ArrayList<Integer>();
            Integer[] values = new Integer[2500000];
            for (int i = 0; i < values.length; i++) {
                values[i] = new Integer(i);
            }
            Random rand = new Random();
            boolean[] randDir = new boolean[values.length];
            for (int i = 0; i < randDir.length; i++) {
                randDir[i] = rand.nextBoolean();
            }
            long startTime, endTime;

            // LinkedList as a FIFO: addLast() / removeFirst()
            startTime = System.currentTimeMillis();
            for (int i = 0; i < values.length; i++) {
                if (linked.size() > 0 && randDir[i]) {
                    linked.removeFirst();
                } else {
                    linked.addLast(values[i]);
                }
            }
            endTime = System.currentTimeMillis();
            System.out.println("linked:" + (endTime - startTime));
            linked.clear();

            // ArrayList as a FIFO: remove(0) shifts every remaining element
            startTime = System.currentTimeMillis();
            for (int i = 0; i < values.length; i++) {
                if (array.size() > 0 && randDir[i]) {
                    array.remove(0);
                } else {
                    array.add(values[i]);
                }
            }
            endTime = System.currentTimeMillis();
            System.out.println("array:" + (endTime - startTime));
            array.clear();
        }
    }
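
    On the fixed-size part of the original question: an ArrayList itself always grows on add(), but Arrays.asList gives you a fixed-size List view backed by an array. A minimal sketch:

    import java.util.Arrays;
    import java.util.List;

    public class FixedSizeSketch {
        public static void main(String[] args) {
            // Arrays.asList returns a fixed-size List backed by the array:
            // set() works, but add() and remove() throw UnsupportedOperationException.
            List<String> fixed = Arrays.asList(new String[5]);
            fixed.set(0, "first");
            System.out.println(fixed.size());   // always 5
            // fixed.add("x");                  // would throw UnsupportedOperationException
        }
    }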

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Components
    Details
    SQL Server 2014 servers
    Fujitsu RX300
    Server operating system
    Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version
    Microsoft SQL Server 2014 Enterprise Edition
    Processors per server
    2 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network
    8Gb FC with multipathing
    Storage controller
    AFF8080 EX
    Data ONTAP version
    Clustered Data ONTAP® 8.3.1
    Drive number and type
    48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    Value
    Analysis Results
    ROI
    65%
    Net present value (NPV)
    $950,000
    Payback period
    six months
    Total cost reduction
    More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration
    $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI)
    $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF


  • Performance of SELECT SINGLE & WRITE vs. SELECT * (FOR ALL ENTRIES) & READ & WRITE

    Hi Experts,
    I have a code review problem and we are in an argument.
    I need to know which of these two pieces of code performs best. I have tested both on 5, 1,000, 3,000, 100,000 and 180,000 records.
    But still, I would just like a second opinion from the experts.
    TYPES : BEGIN OF ty_account,
            saknr   TYPE   skat-saknr,
            END OF ty_account.
    DATA : g_txt50      TYPE skat-txt50.
    DATA : g_it_skat    TYPE TABLE OF skat,       g_wa_skat    LIKE LINE OF g_it_skat.
    DATA : g_it_account TYPE TABLE OF ty_account, g_wa_account LIKE LINE OF g_it_account.
    Code 1.
    SELECT saknr INTO TABLE g_it_account FROM skat.
    LOOP AT g_it_account INTO g_wa_account.
      SELECT SINGLE txt50 INTO g_txt50 FROM skat
        WHERE spras = 'E'
          AND ktopl = 'XXXX'
          AND saknr = g_wa_account-saknr.
      WRITE :/ g_wa_account-saknr, g_txt50.
      CLEAR : g_wa_account, g_txt50.
    ENDLOOP.
    Code 2.
    SELECT saknr INTO TABLE g_it_account FROM skat.
    SELECT * INTO TABLE g_it_skat FROM skat
      FOR ALL ENTRIES IN g_it_account
          WHERE spras = 'E'
            AND ktopl = 'XXXX'
            AND saknr = g_it_account-saknr.
    LOOP AT g_it_account INTO g_wa_account.
      READ TABLE g_it_skat INTO g_wa_skat WITH KEY saknr = g_wa_account-saknr.
      WRITE :/ g_wa_account-saknr, g_wa_skat-txt50.
      CLEAR : g_wa_account, g_wa_skat.
    ENDLOOP.
    Thanks & Regards,
    Dileep .C

    Hi Dileep,
    From both of your code versions I can see that you are selecting different sets of fields.
    In Code 1 you select SAKNR and then, for each SAKNR, you select TXT50 from the same table.
    In Code 2 you select all the fields from SKAT for all the values of SAKNR.
    I don't know what your exact requirement is.
    It would be better to declare a select-option on the selection screen and then fetch the required fields from SKAT for the SAKNR values entered on the screen.
    You only need the SAKNR and TXT50 fields, so declare a type with just those two fields.
    Points to remember:
    1. When using FOR ALL ENTRIES, always check that the FOR ALL ENTRIES table is not empty.
    2. You have to fetch all the key fields when using FOR ALL ENTRIES; you can compare key fields with a constant that is greater than the initial value.
    3. Before reading an internal table with READ TABLE, sort it by the field you are going to read on.
    try this:
    TYPES : BEGIN OF ty_account,
              saknr TYPE skat-saknr,
            END OF ty_account.
    TYPES : BEGIN OF ty_txt50,
              saknr TYPE skat-saknr,
              txt50 TYPE skat-txt50,
            END OF ty_txt50.
    DATA : i_account TYPE TABLE OF ty_account,
           w_account TYPE ty_account,
           i_txt50   TYPE TABLE OF ty_txt50,
           w_txt50   TYPE ty_txt50.

    SELECT saknr FROM skat INTO TABLE i_account.
    IF sy-subrc = 0.
      SORT i_account BY saknr.
      SELECT saknr txt50 FROM skat INTO TABLE i_txt50
        FOR ALL ENTRIES IN i_account
        WHERE saknr = i_account-saknr.
    * also list the other primary key fields here and compare them with
    * constants that are greater than their initial values, in the proper sequence
    ENDIF.

    Note: you have to fetch all the key fields into table i_txt50 and compare those fields with constants greater than their initial values, in the proper sequence.
    Now for the writing:

    SORT i_txt50 BY saknr.
    LOOP AT i_account INTO w_account.
      CLEAR w_txt50.
      READ TABLE i_txt50 INTO w_txt50 WITH KEY saknr = w_account-saknr.
      IF sy-subrc = 0.
        WRITE: / w_txt50-saknr, w_txt50-txt50.
      ENDIF.
      CLEAR w_account.
    ENDLOOP.
    Hope this clears your doubts.
    Thanks
    Lalit

  • How well does the Intel Iris Pro in the new 15" MacBook Pro perform with Photoshop and Lightroom?

    Does anyone know how well the Intel Iris Pro installed on new 15" MacBook Pros performs using Photoshop and Lightroom?  I have seen some differing opinions out there, and I would rather not shell out the extra cash for the Nvidia if I don't have to. I mostly do photo editing for business and personal use. I have not used the 3D function in Photoshop, but I would like to know that I could.

    You could download a trial and see how well it works before committing to a subscription. You get 30 days to decide.
    Photo editor | Download free Adobe Photoshop CC trial
    Photo editor app | Download free Adobe Photoshop Lightroom 5 trial
    Gene

  • HT201250 Is there a way to change the intervals at which Time Machine performs backups (e.g., weekly instead of the default of hourly for the past 24 hours, daily for the past month, and weekly for everything older)?

    Is there a way to change the intervals at which Time Machine performs backups, e.g., weekly instead of the default schedule of hourly for the past 24 hours, daily for the past month, and weekly for everything older?

    You can edit the interval in Console or install a Pref Pane
    called TimeMachineScheduler (Leopard or higher) free from:
    http://www.klieme.com/TimeMachineScheduler.html
    Good luck, Tom

  • A process for the performance monitoring, tuning and fixing issues

    Hello
    Any recommendations for a 10g process/procedure/methodology for performance monitoring, tuning and fixing issues for a team to follow?

    Ranker wrote:
    Hello
    Any recommendations for a 10g process/procedure/methodology for performance monitoring, tuning and fixing issues for a team to follow?

    1) Upgrade the DB to a supported version.
    2) Read The Fine Manual; Performance Tuning Guide
    http://docs.oracle.com/cd/E11882_01/server.112/e10822/toc.htm
    Handle:     Ranker
    Status Level:     Newbie
    Registered:     May 12, 2013
    Total Posts:     13
    Total Questions:     4 (4 unresolved)
    How sad!
    Why do you never get your questions answered here?

  • I received a Caution message - your computer contains a variety of suspicious programs.  Your system requires immediate checking! The system will perform a fast and free check of your PC for malicious programs.  Check OK

    I received this message this morning in Safari - Caution! Your computer contains a variety of suspicious programs.  Your system requires immediate checking! The system will perform a fast and free check of your PC for malicious programs.  Check OK

    So what did you do?
    If you fell for the scareware, you probably now have malware installed on your Mac.
    Allan

  • LV PDA 8 wait for ms and wait for next N ms multiple costs extrem performance.

    Dear LV PDA users.
    There is another bug using the PDA toolkit.
    Using 'wait for ms' and 'wait for next N ms multiple' costs me 200 ms per loop iteration while the input is 1 ms.
    I have tried it without the wait functions and the loop timing was 0-1 ms.
    One possible workaround is to use timeout events with while loops.
    This works fine, without delays.
    With kind regards
    Martin Kunze
    KDI Digital Instrumentation.com
    e-mail: [email protected]
    Tel: +49 (0)441 9490852

    This issue has already been reported to the R&D group at NI (at least that is what I'm told) ...  refer to this thread ... Here
