Performance and data warehouse

Hello! 
How does enabling the data warehouse affect the performance of SQL Server, and what recommendations exist about it?
Thank you!

Are you referring to the Management Data Warehouse that comes with the Data Collector?
In that case it does not affect the server significantly. If you are collecting data from many different servers, I would recommend placing the management data warehouse database on a separate disk and being careful with the retention period.
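To check the retention currently configured for each collection set, you can look in msdb. A minimal sketch, assuming the standard Data Collector objects are installed:
-- Days of collected data kept in the management data warehouse per collection set
SELECT name, days_until_expiration
FROM msdb.dbo.syscollector_collection_sets;
Shorter retention keeps the MDW database smaller and makes the purge job cheaper.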
Regards
Rasmus Glibstrup
http://blog.sqlguy.dk

Similar Messages

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the relevant transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Problems with Photoshop performance and data transfer speed on iMac

    Two months ago, I started noticing slow performance using Photoshop (above all using the clone stamp tool) on my 27" iMac (late 2012). I ran the AHT and found that 8GB of my 32GB of RAM were broken.
    I removed them but the problem didn't disappear. I also noticed that data transfer speed (both copy and paste from/to the internal HD and from a CF card/external HD) was really slow.
    I tried many solutions suggested by Apple support; none of them worked out. In the end, I tried uninstalling and re-installing Photoshop: no more problems!!!
    10 days ago, I received a new 8GB RAM module and installed it back... suddenly, the problem came back. I tried re-installing Photoshop again, but this time the problem still persists!
    Has anyone had the same experience? All other CC programs work well (LR, AE, Premiere...)

    yes, it does!
    what seems very strange to me is how data transfer speed could be affected!
    (just to say, I've already tried reset of SMC and PRAM, I've tried with different accounts and I've also re-installed the OS, next step would be formatting the disk and installing the OS from zero)

  • Cube Performance and Data Explosion

    Hi Experts,
    One of our partners developed a data warehouse application, and the DW application has a performance issue:
    when a report queries the high-level dimensions, performance is okay, but when the query gets down to the very detailed data in the cube, performance gets bad.
    The aggregations and the detail data are all stored in the cube, and the cube data explodes quite quickly, since some detailed transaction data needs to be queried and stored in the cube too.
    So, experts, do you have any good suggestions on this issue? Or might there be a better design for the cube? E.g. in the DW, the cube only stores the aggregations or summaries on coarse-grained data and measures, and fine-grained data can be retrieved from the ODS.
    Another question: I googled architecture solutions for the above issue, and someone said that if the DW is designed as a hypercube there may be a data explosion issue, and that a multicube should be used instead. So I wonder whether a multicube can solve the data explosion issue, and how; and whether a multicube has better performance than a hypercube or can also handle detail data queries.
    Last question: do you have any experience with DW implementations on TB-level data, and any good suggestions for architecture design using Oracle OLAP or Essbase for good performance?
    Thanks,
    Royal.
    Edited by: Royal on 2012-11-4 4:01 AM

    You have not asked any specific technical question. In my opinion all Oracle data warehouses should use the Oracle OLAP option for their aggregation strategy. Significant improvements have been made in 11.2.0.2 (and later versions). It has become much easier now to create and maintain dimensions/cubes. On the reporting side, OBIEE 11g now understands OLAP metadata. Other reporting tools can use the CUBE_TABLE views.
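    For example, relational reporting tools can read an OLAP cube through the CUBE_TABLE function. A minimal sketch, assuming a hypothetical 11g cube SALES_CUBE in a schema OLAPTRAIN (substitute your own analytic workspace and cube names):
    -- Expose the OLAP cube to plain SQL as a relational row source
    SELECT * FROM TABLE(CUBE_TABLE('OLAPTRAIN.SALES_CUBE'));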
    Here are some links that you may find useful.
    Comparing MVs and OLAP... Oracle White paper
    http://www.oracle.com/technetwork/database/bi-datawarehousing/comparison-aw-mv-11g-twp-130903.pdf
    Oracle OLAP Support page
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1107593.1
    Three demos done by OLAP Development which explains how OLAP can help in a DW.
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_1/OLAP_Features_and_Use_Cases_1.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_2/OLAP_Features_and_Use_Cases_2.html
    http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_3/OLAP_Features_and_Use_Cases_3.html
    Main OLAP page at Oracle OTN site
    http://www.oracle.com/technetwork/database/options/olap/index.html
    Recommended Releases for Oracle OLAP
    http://www.oracle.com/technetwork/database/options/olap/olap-certification-092987.html
    Accelerating Data Warehouses using OLAP option
    http://www.oracle.com/technetwork/issue-archive/2008/08-may/o38olap-085800.html
    What's new in 11.2.0.2 database OLAP option
    http://docs.oracle.com/cd/E11882_01/olap.112/e17123/whatsnew.htm
    Oracle 11.2 OLAP Documentation (scroll down to OLAP section)
    http://www.oracle.com/pls/db112/portal.portal_db?selected=6&frame=#online_analytical_processing_%28olap%29
    Excel reporting from OLAP using Simba tool. This was developed in partnership with Oracle.
    http://www.simba.com/MDX-Provider-for-Oracle-OLAP.htm
    There is a good demo for Simba Excel tool at:
    http://www.simba.com/demos/MDX-Provider-for-Oracle-OLAP-web-demo.html

  • Report performance and data quality

    Hi,
    Can someone help give explanations to following questions :
    1.) Does the BW Report show how current my data is?
    2.) What are the reasons why the performance of my BW Report is slow?
    3.) What are the reasons why my BW Report has missing data?
    4.) Why does my BW Report have incorrect data?
    5.) Why doesn't my BW Report data match SAP R/3 data?
    Thanks,
    Milind
    Please do not raise generic questions across multiple forums
    Edited by: Arun Varadarajan on Apr 9, 2010 2:08 AM

    Milind,
    1.) Does the BW Report show how current my data is?
    You should be able to see the data currency when you run it in the web - which method are you using - BEx or Web...?
    2.) What are the reasons why the performance of my BW Report is slow?
    It could be due to anything - please search the forums on how to identify possible performance bottlenecks.
    3.) What are the reasons why my BW Report has missing data?
    It depends - missing data loads etc., etc.
    4.) Why does my BW Report have incorrect data?
    You should be knowing that...? I can just say that it has incorrect data because... "the sun rises in the east"!!! It is more akin to asking "Why did the chicken cross the road?"
    5.) Why doesn't my BW Report data match SAP R/3 data?
    You should ask SAP that question...
    Honestly I am not sure what the reason behind such generic questions is... If you are looking for answers, then you need to be more specific. If these are more like interview questions asked to you, I guess you should be able to answer them or ask further questions to clarify them further...

  • Hard drive performance and data throughput

    I am using my MacBook Pro for work primarily, and part of that entails creating/restoring images of other Macs. I've had the best luck with SuperDuper; however, the process is still VERY slow. For instance, at this moment, with no other applications open other than S.D. and Firefox, the copy speed is under 5MB/s from my MBP to an iMac via FireWire.
    I am looking for suggestions to increase the performance/IO in the hopes to speed up the process. When purchasing this system the 7200rpm drive was not an option (15") which is unfortunate. I realize that both hard drives in the operation will cause the variable in speed but I want the sending drive as fast as possible.
    My thoughts right now are to purchase a 7200rpm external drive to store backup images and also send from. This would cut out any possible IO on the drive that my mac is performing to run the operating system. Another thought was to upgrade my mac to a 7200rpm drive and use the current 5400rpm drive as the storage for images...in the hopes that it would still provide an increase in restoration speed since it wouldn't be running OSX on it.
    Any thoughts or ideas? Experiences? My MBP has the 5400rpm drive, I believe, and 2GB of RAM.
    Thanks

    I'll try and explain a bit better. I'm not restoring the same image to different types of Macs. I create images of OTHER Macs using my MacBook Pro to perform the process as well as store the backup image.
    Thanks for the clarification. I do that too, but when I do I use my Mac Pro to clone a Mac via Target Disk Mode to an external FireWire 800 drive.
    It helps, but it's a USB 2 enclosure with a somewhat older hard drive that is only 30GB. I am looking at purchasing a FireWire 800 external drive, but I will see how this other unit works for now since we already have it.
    Part of your throughput problem may be the overhead issues with USB 2. FireWire uses its own chipset so is more independent of the CPU, and FireWire can sustain high-speed transfers at a higher level. USB is CPU-bound and is more vulnerable to CPU demands from other apps or background processes or other USB devices. So even though USB 2 has a higher theoretical peak (480Mbps), FireWire (400Mbps) actually does better in the real world.
    About USB 2 vs. FireWire 400 performance
    I'm not sure if FireWire 800 would help because your slowest drive in the chain may not be fast enough to take advantage.

  • Exchange performance and Data Execution Prevention

    Hi
    Is there any impact on Exchange server performance related to Data Execution Prevention (DEP)? Should I exclude Exchange processes from DEP?
    I searched for any blog/forum post that would discuss this subject but had no luck finding any.
    Regard,

    Hi Grzegorz,
    Data Execution Prevention is a Windows feature, not an Exchange server feature.
    DEP is enabled on computers that are running Microsoft Windows Server by default.
    In our lab, we often turn it on. Sometimes it can result in messaging performance being slower than expected. Besides, there are no official articles or blogs that explain how Data Execution Prevention affects Exchange performance.
    Hope my clarification is helpful.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Performance and data types: which to use?

    Hi All,
    I am wondering what data type to use and the effect of them on memory/speed.
    1. What is the difference (if any) between using SGL, DBL, INT etc.? Looking at the LabVIEW help there seems to be a range of 8-256 bits of storage according to the data type. Is it basically a matter of choosing the one with the smallest storage that can fit the data?
    2. I currently have a cluster flowing through subVIs. The cluster contains the start time (or freq), the delta t (or f) and the array of data (about 500-5000 elements). I tried to use the waveform datatype but it couldn't handle a delta t of 2 nanoseconds (500 MHz signal). Am I OK using the cluster, or should I separate the components and pass them along? What data type should I use for each of the components?
    Thanks

    There are three main issues to consider.
    Range and accuracy. If you need a very high level of accuracy, then you will need to use the extended data type or even create your own, although that's unlikely.
    Memory. Yes, SGL takes less than DBL, but unless you're dealing with really huge amounts of data this won't matter.
    Coercion. Most built in functions work on DBL. If you wire a SGL into them, they will coerce it, possibly creating a copy of the data and increasing your memory usage.
    To sum it up, most of the times it would be best to use the default DBL. It's highly unlikely you'll need one of the others.
    As for your second question, it sounds to me like the data is a single organism, so I would say you should leave it in the cluster, but that really depends on whether the functions need it or not and whether you're constantly bundling and unbundling the cluster. Note that 5000 elements is far from being a large array and you shouldn't have any problems handling it.
    As for the timing unit, if you really only have 5000 elements (that's 10 microseconds of data?) then you should not have a problem with using a U32 with a nanosecond as the base unit. That should give you the ability to measure more than 4 seconds.
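    Checking the arithmetic: 5000 elements at a 2 ns delta t is 5000 × 2 ns = 10 µs of data, and a U32 counter with a 1 ns base unit covers 2^32 ns ≈ 4.29 s, so the suggested representation has plenty of headroom.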
    Try to take over the world!

  • I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date were changed back to 2001. So I tried to restore my computer using a previous Time Machine backup.

    I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date were changed back to 2001. So I tried to restore my computer using a previous Time Machine backup (which I now know was wrong). However, when Time Machine tried to restore, it said there was not enough room to do a backup. It seems that it did a half backup, because some essential files such as System Profiler are now missing. Can I undo this restore? What can I do to fix this?

    You need to do a full system restore, per Time Machine - Frequently Asked Question #14.
    If that sends a message, please note the exact wording.

  • Issue while perform the Task and Data Audit in Workspace

    Hi All,
    When we try to perform the task and data audit functions in Workspace, it throws the following error. Please advise how to fix this issue.
    An error has occurred. Please contact your administrator.
    Show Details:
    Error Reference Number: {110FD8BF-AD33-4257-BA3D-46FF30208EA2};User Name: admin@Native Directory
    Num: 0x80040e31;Type: 1;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHsvSystemActivity.cpp;Line: 479;Ver: 11.1.1.2.0.2207;DStr: Timeout expired;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: SystemInfoUsersActivities.cpp;Line: 617;Ver: 11.1.1.2.0.2207;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHsvSystemInfo.cpp;Line: 2664;Ver: 11.1.1.2.0.2207;
    Num: 0x80004005;Type: 0;DTime: 6/24/2010 3:42:55 AM;Svr: USNSAP03EX;File: CHFMwSystemInfo.cpp;Line: 1846;Ver: 11.1.1.2.0.2207;
    Thanks in advance,

    If you have any SELECT FOR UPDATE in your code, these will produce locks, and it will not be released until the session which has locked the record has released it. This can be one reason why you have this behavior.
    You need to go through your code and check whether it explicitly locks a row or more.
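    As a minimal generic illustration (plain SQL, not specific to the HFM schema; hfm_tasks and the id values are hypothetical names), a row locked by one session blocks other writers until that session commits or rolls back, which can surface as a "Timeout expired" error like the one above:
    -- Session 1: acquires a row lock held until COMMIT or ROLLBACK
    SELECT status FROM hfm_tasks WHERE task_id = 42 FOR UPDATE;
    -- Session 2: waits on the same row (and may eventually time out)
    UPDATE hfm_tasks SET status = 'DONE' WHERE task_id = 42;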
    Tony

  • JSF 1.1 performance, especially UIData and Data Table

    Hi,
    Does anybody have any JSF 1.1 (Sun reference implementation) performance experiences to share? I am currently looking at the data table component and the use of UIData. Initial observations are an incredible amount of memory is churned during rendering the data table, with the following classes culprits:
    java.util.HashMap$KeyIterator
    javax.faces.component.UIComponentBase$ChildrenListIterator
    java.util.AbstractList$Itr
    char[]
    java.util.ArrayList
    javax.faces.component.UIComponentBase$FacetsMapKeySetIterator
    javax.faces.component.UIComponentBase$FacetsMapKeySet
    javax.faces.component.UIComponentBase$FacetsMapValues
    javax.faces.component.UIComponentBase$FacetsAndChildrenIterator
    To render 50 rows with 10 columns (each column only having a simple outputText component) I'm seeing 1.3Mb memory churned and 0.8 seconds processing time.
    To render 100 rows with the same columns and components I'm seeing nearly 2Mb churned and 2 seconds processing time.
    UIData.setRowIndex is a large culprit.
    I'm really after finding out your experiences on JSF performance and its scalability.
    Any help here is appreciated.
    Thanks - JJ

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on these topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that transfer rules need to be activated, even though I transported them active and (I think) did not change anything after the transport. Then I again have to create transfer rules transports on dev, transport the changes to prod, and execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system, and it was about 0.5h for 50,000 records, but when I executed the load on production it was 2h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (but in general, as you know, activation of the transfer structure should be done automatically after transport of the object).
    2. One reason for the difference in time is environmental: as you know, in a production system many jobs run at the same time, so obviously system performance will be slower compared to the dev system. In your case both systems are actually performing equally. You said the dev system loaded 50,000 records in half an hour and production loaded 200,000 records in 2 hrs, so there are more records in the production system and it took proportionally longer. If it is really causing a problem, then you have to do some performance tuning activities.
    Hope this helps
    Thanks
    Sat

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
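    If you want to see whether your own SQL Server instances show this I/O-bound pattern, per-file latency from the standard DMVs is a quick indicator. A minimal sketch (the thresholds and interpretation are rules of thumb, not part of the NetApp study):
    -- Average read/write latency per database file since the last restart
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id;
    Sustained averages in the tens of milliseconds on data files are the kind of HDD latency the article describes.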
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: 2 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
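    To make the licensing math concrete: with the test servers' two 6-core processors each, 10 servers represent 120 licensable cores, and dropping to five servers leaves 60, which is where the 50% reduction in SQL Server licensing comes from.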
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    3) Could some one explain what the filter & access predicate mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_dat
    e('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
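    A minimal sketch of such a function-based index (the index name is just an example):
    -- Lets predicates written as TRUNC(fdate) = ... use an index range scan
    CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));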
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE fdate >= TRUNC (SYSDATE)
    AND   fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    3) Could some one explain what the filter & access predicate mean here?
    Sorry, I can't.

  • Creation of a generic extractor and data source for the FAGLFLEXA table

    Hi All,
    Need to create a generic extractor and data source for the FAGLFLEXA table to support AR reporting. This table contains the necessary profit center information to perform LOB reporting against the AR data.
    Please advise on how to do this.
    Regards, Vishal

    Hi Vishal,
    It seems a simple task to work out.
    1. Go to RSO2 & choose the relevant option, i.e. whether you want to create a Transactional DS, Master Data DS or Text DS.
    2. Name it accordingly & then create.
    3. Give it a description & then give the table name FAGLFLEXA.
    4. Save it & activate. If you need it to be delta enabled, then click on Delta & you can choose accordingly.
    If you still face some problem then do mail me at [email protected]
    Assign points if helpful
    Regards,
    Himanshu
