SQL Performance and Security

Help needed here, please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have tried to work my head around this but now I am stuck.
Here are the questions:
1. Analyse possible performance problems, and suggest solutions for each of the following transactions against the database
a) A manager of a project needs to inspect total planned and actual hours spent on a project broken down by activity.
e.g.
Project: xxxxxxxxxxxxxx
Activity Code    Planned    Actual (to date)
1                20         25
2                30         30
3                40         24
Total            300        200
Note that actual time spent on an activity must be calculated from the WORK_UNIT table (a sample query is sketched below, after question 2).
b) On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
2. Security: Justify and implement solutions at the server that meet the following security requirements
(i) Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
(ii) Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
(iii) Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
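For 1(a), the shape of the report query might look like the sketch below (written against the schema that follows; it assumes actual hours are derived from the WORK_UNIT start/end timestamps, with Oracle DATE arithmetic in days converted to hours):

-- Sketch only: planned vs. actual hours per activity for one project.
-- :proj_id is a bind variable identifying the project being inspected.
SELECT a.act_id,
       a.act_planned_hours                             AS planned,
       NVL(SUM((w.wu_end_dt - w.wu_start_dt) * 24), 0) AS actual_to_date
FROM   activity a
       LEFT JOIN work_unit w
              ON w.wu_act_id  = a.act_id
             AND w.wu_proj_id = a.act_proj_id
WHERE  a.act_proj_id = :proj_id
GROUP  BY a.act_id, a.act_planned_hours
ORDER  BY a.act_id;

The performance concern in 1(a) is the repeated aggregation over WORK_UNIT; an index on (wu_proj_id, wu_act_id) or a pre-computed summary (e.g. a materialized view refreshed periodically) are the usual remedies.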
Here are the project tables:
set echo on

-- Changes
-- 4.10.00
-- manager of employee on a project included in the employee_on_project table
-- activity table now has a compound key, based on ID dependence between project
-- and activity

drop table org_unit cascade constraints;
drop table project cascade constraints;
drop table employee cascade constraints;
drop table employee_on_project cascade constraints;
drop table employee_on_activity cascade constraints;
drop table activity cascade constraints;
drop table activity_order cascade constraints;
drop table work_unit cascade constraints;

-- org_unit
-- type - for example in lmu might be FACULTY, or SCHOOL
CREATE TABLE org_unit (
  ou_id            NUMBER(4)     CONSTRAINT ou_pk PRIMARY KEY,
  ou_name          VARCHAR2(40)  CONSTRAINT ou_name_uq UNIQUE
                                 CONSTRAINT ou_name_nn NOT NULL,
  ou_type          VARCHAR2(30)  CONSTRAINT ou_type_nn NOT NULL,
  ou_parent_org_id NUMBER(4)     CONSTRAINT ou_parent_org_unit_fk
                                 REFERENCES org_unit
);

-- project
CREATE TABLE project (
  proj_id                NUMBER(5)     CONSTRAINT project_pk PRIMARY KEY,
  proj_name              VARCHAR2(40)  CONSTRAINT proj_name_uq UNIQUE
                                       CONSTRAINT proj_name_nn NOT NULL,
  proj_budget            NUMBER(8,2)   CONSTRAINT proj_budget_nn NOT NULL,
  proj_ou_id             NUMBER(4)     CONSTRAINT proj_ou_fk REFERENCES org_unit,
  proj_planned_start_dt  DATE,
  proj_planned_finish_dt DATE,
  proj_actual_start_dt   DATE
);

-- employee
CREATE TABLE employee (
  emp_id       NUMBER(6)     CONSTRAINT emp_pk PRIMARY KEY,
  emp_name     VARCHAR2(40)  CONSTRAINT emp_name_nn NOT NULL,
  emp_hiredate DATE          CONSTRAINT emp_hiredate_nn NOT NULL,
  ou_id        NUMBER(4)     CONSTRAINT emp_ou_fk REFERENCES org_unit
);

-- activity
-- note each activity is associated with a project
-- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
-- USER ACCEPTANCE TESTING ...
-- each activity has a people budget, in other words an amount to spend on
-- wages
CREATE TABLE activity (
  act_id               NUMBER(6),
  act_proj_id          NUMBER(5)     CONSTRAINT act_proj_fk REFERENCES project
                                     CONSTRAINT act_proj_id_nn NOT NULL,
  act_name             VARCHAR2(40)  CONSTRAINT act_name_nn NOT NULL,
  act_type             VARCHAR2(30)  CONSTRAINT act_type_nn NOT NULL,
  act_planned_start_dt DATE,
  act_actual_start_dt  DATE,
  act_planned_end_dt   DATE,
  act_actual_end_dt    DATE,
  act_planned_hours    NUMBER(6)     CONSTRAINT act_planned_hours_nn NOT NULL,
  act_people_budget    NUMBER(8,2)   CONSTRAINT act_people_budget_nn NOT NULL,
  CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
);

-- employee on project
-- when an employee is assigned to a project, an hourly rate is set
-- remember that the person's manager depends on the project they are on,
-- the implication being that the manager needs to be assigned to the project
-- before the 'managed'
CREATE TABLE employee_on_project (
  ep_emp_id      NUMBER(6)    CONSTRAINT ep_emp_fk REFERENCES employee,
  ep_proj_id     NUMBER(5)    CONSTRAINT ep_proj_fk REFERENCES project,
  ep_hourly_rate NUMBER(5,2)  CONSTRAINT ep_hourly_rate_nn NOT NULL,
  ep_mgr_emp_id  NUMBER(6),
  CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
  CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id)
             REFERENCES employee_on_project
);

-- employee on activity
CREATE TABLE employee_on_activity (
  ea_emp_id        NUMBER(6),
  ea_proj_id       NUMBER(5),
  ea_act_id        NUMBER(6),
  ea_planned_hours NUMBER(3)  CONSTRAINT ea_planned_hours_nn NOT NULL,
  CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
  CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
  CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id) REFERENCES employee_on_project
);

-- activity order
-- only need a prior activity. If activity A is followed by activity B then
-- B is the prior activity of A
CREATE TABLE activity_order (
  ao_act_id       NUMBER(6),
  ao_proj_id      NUMBER(5),
  ao_prior_act_id NUMBER(6),
  CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
  CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id)
             REFERENCES activity (act_id, act_proj_id),
  CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id)
             REFERENCES activity (act_id, act_proj_id)
);

-- work unit
-- remember that DATE includes time
CREATE TABLE work_unit (
  wu_emp_id   NUMBER(6),
  wu_act_id   NUMBER(6),
  wu_proj_id  NUMBER(5),
  wu_start_dt DATE  CONSTRAINT wu_start_dt_nn NOT NULL,
  wu_end_dt   DATE  CONSTRAINT wu_end_dt_nn NOT NULL,
  CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
  CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
             REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
);

/* enter data */
start ouins
start empins
start projins
start actins
start aoins
start epins
start eains
start wuins
start pmselect
I have the tables containing ouins and the rest. Email me on [email protected] if you want to have a look at the tables.

The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access or perform various DML actions.
Then assign the various users to these roles. The users will be restricted to what the roles are restricted to.
Look up roles if you are not familiar with them.
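For example, a minimal sketch of that approach against the schema above (the role, user, and view names are made up, and it assumes database usernames can be mapped to EMPLOYEE rows; adapt the mapping to whatever your site actually uses):

-- Requirement (i): a role for the Corporate Strategy Department.
CREATE ROLE corp_strategy;
GRANT SELECT, INSERT, UPDATE, DELETE ON project TO corp_strategy;
GRANT corp_strategy TO jsmith;        -- hypothetical department member
GRANT SELECT ON project TO PUBLIC;    -- everyone may read

-- Requirement (ii): expose only non-budget columns of assigned projects.
-- Assumption (for illustration only): the Oracle username is 'EMP' plus emp_id.
CREATE VIEW my_projects AS
SELECT p.proj_id, p.proj_name, p.proj_ou_id,
       p.proj_planned_start_dt, p.proj_planned_finish_dt,
       p.proj_actual_start_dt
FROM   project p
WHERE  p.proj_id IN (SELECT ep.ep_proj_id
                     FROM   employee_on_project ep
                     WHERE  'EMP' || TO_CHAR(ep.ep_emp_id) = USER);
GRANT SELECT ON my_projects TO PUBLIC;

Requirement (iii) needs row-level control rather than plain grants: for example a trigger, or an updatable view WITH CHECK OPTION, that compares the session user with the project's manager (EP_MGR_EMP_ID in EMPLOYEE_ON_PROJECT) before allowing the change.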

Similar Messages

  • Is there a way to view Flash videos on my iMac without downloading Adobe Flash Player? I'm concerned about performance and security with Flash Player.

    Is there a way to view Flash videos on my iMac without downloading Adobe Flash Player? I'm concerned about performance and security with Adobe Flash Player.

    If the video is only available in a format that requires Flash Player, then no.
    However, a great many can also be viewed in an HTML5 version, in which case http://hoyois.github.io/safariextensions/clicktoplugin/ or similar can be set up so that Flash never runs unless you specifically choose it to.

  • SQL Performance and Hyper-V Generation 2

    Hi
    Environment: HP DL580 G5 running 2012 R2 Hyper-V.
    There are only 2 VMs on this box and the underlying storage is provided by EqualLogic.
    I have created 2 VMs running 2012 R2 Server with SQL 2012 Enterprise SP2 which are identical apart from one being generation 1 (imported from a 2008 R2 server) and the other being a native generation 2.
    My issue is that if I increase the memory to 20 GB (it runs fine on 10 GB) on the gen 2 server, it will crash under load (DBCC CHECKDB or any other intensive operation). The gen 1 server has no problems running at 20 GB.
    I suspect a NUMA issue, but before I go down this route I was wondering if anyone else has seen this. VMQs have been disabled on both VMs.

    Hi,
    Thank you for your post.
    From your description, I see the current issue you are facing is: the native generation 2 VM running on the Windows Server 2012 R2 Hyper-V host crashes after increasing its memory to 20 GB. Please let me know if I have misunderstood anything.
    You mentioned that both of your VMs are configured with static memory; I suspect that might be the cause. To improve Hyper-V and VM performance there are corresponding best practices, and for memory the best practice is to use dynamic memory.
    Although the Dynamic Memory feature does not directly improve virtual machine performance, it allows you to balance the allocation of memory resources dynamically, and it is recommended to configure Dynamic Memory parameters for each critical virtual machine running on a Hyper-V server. Dynamic memory adjusts the amount of memory available to a virtual machine, based on changes in memory demand, using a memory balloon driver, which helps use memory resources more efficiently.
    You can read this blog for more information.
    Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    Could you please try configuring the VMs with dynamic memory instead of static memory, to see if that solves the issue?
    Please let me know if there is any update. Thanks for your time.
    Best Regards,
    Sophia Sun 

  • BPC Sql logic and security

    Hello.
    I have set up my Category dimension to be an R/W dimension. Then I set one of the Category dimension members to "Read Only". When I then try to write to that member from an input schedule, I correctly receive an error that I do not have the permission to do so. But if I run a data package written in BPC SQL logic, I am able to write to the mentioned member. The logic contains an XDIM_MEMBERSET statement where this member, among others, is included.
    So is this a bug or have I just misunderstood the way security works?

    We are running version 7 with SP5 (7.0.114) for Windows. Here comes the logic (the Category member with read-only access is "BID_01"):
    // This script takes care of quantity calculation.
    // The quantity is calculated by dividing the cost of each employee with
    // the hour rate of the employee.
    // The script will be called from a data package with project as input parameter.
    // Delete all data of the previous quantity calculation.
    *XDIM_MEMBERSET Account=QTY
    *XDIM_MEMBERSET DataSrc=CALCULATED
    *XDIM_MEMBERSET Category=FINAL,D_CAT,BID_01,BID_02,BID_03,BID_04
    *XDIM_MEMBERSET COUNTERPARTS<>D_VEN
    *XDIM_MEMBERSET RptCurrency=NOK
    *WHEN *
    *IS *
       *REC(FACTOR=0)
    *ENDWHEN
    *COMMIT
    // Load the employee hour rates.
    *LOOKUP COST_CONTROL
          *DIM ACC_HR_RATE:ACCOUNT="HR_RATE"
          *DIM BUS_UNITS="D_CC"
          *DIM CATEGORY="D_CAT"
          *DIM CBS="D_CBS"
          *DIM CURR_INP="I_NOK"
          *DIM DATASRC="DIR_INP"
          *DIM EXP_TYPE="D_EXP"
          *DIM RPTCURRENCY="NOK"
          *DIM PROJECT="D_PRO"
          *DIM COUNTERPARTS=COUNTERPARTS.ID     // Because of a bug this line of code must be applied.
    *ENDLOOKUP
    // Perform quantity calculation.
    *XDIM_MEMBERSET Category=FINAL,D_CAT,BID_01,BID_02,BID_03,BID_04
    *XDIM_MEMBERSET Datasrc=<ALL>
    *XDIM_MEMBERSET ACCOUNT=COST,HR_RATE,D_ACC
    *XDIM_MEMBERSET COUNTERPARTS<>D_VEN
    *XDIM_MEMBERSET RptCurrency=NOK
    *WHEN ACCOUNT
    *IS "COST","D_ACC"
       *REC(FACTOR=1/LOOKUP(ACC_HR_RATE),DATASRC="CALCULATED",ACCOUNT="QTY")
    *ENDWHEN
    *COMMIT

  • Performance and security trade-off

    Scenario
    When I run my code with security implemented, its performance decreases by 40% to 70% compared with running it without security.
    My goal is to decide: do I really need a trade-off (speed vs. security)? In either case I must provide arguments.
    Any kind of advice will be appreciated. I just need a professional's view of the above scenario.
    Thank you

    Hey Aasem,
    As such, there is no rule of thumb for this trade-off concept. It all depends on the type of database and its security needs. It may happen that you have to prioritise securing the data over performance, e.g. in the banking, insurance, and stock market sectors. In these sectors on-line transactions are required, but data security is more important than performance, so you may have to compromise on performance.
    But if you take the example of a live score of a cricket match, then data security is not that important; performance counts for more. So here you can compromise the security of the data for database performance.
    My point is that, as a DBA, you always have to consider the security of the data before performance. The calculation you are talking of is not fixed; it varies with the requirement and need, and there is no rule of thumb for it. It is only your experience that will help you find the measurement of everything you are asking for.
    Thanks and Regards,
    MSORA

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Component       Details
    SQL Server 2014 servers            Fujitsu RX300
    Server operating system            Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version        Microsoft SQL Server 2014 Enterprise Edition
    Processors per server              2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network              8Gb FC with multipathing
    Storage controller                 AFF8080 EX
    Data ONTAP version                 Clustered Data ONTAP® 8.3.1
    Drive number and type              48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    Value                                                           Analysis Results
    ROI                                                             65%
    Net present value (NPV)                                         $950,000
    Payback period                                                  Six months
    Total cost reduction                                            More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration                     $40,000
    Additional savings due to nondisruptive operations benefits     $90,000
    (not included in ROI)
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.

  • Performance between SQL Statement and Dynamic SQL

    SELECT emp_id
    INTO   id_val
    FROM   emp
    WHERE  emp_id = 100;

    EXECUTE IMMEDIATE
      'select ' || t_emp_id ||
      ' from emp' ||
      ' where emp_id = 100'
      INTO id_val;

    Will there be more impact on performance while using dynamic SQL?

    CP wrote:
    Will there be more impact on performance while using dynamic SQL?
    All SQLs are parsed and executed as SQL cursors.
    The two SQLs (dynamic and static) result in the exact same SQL cursor. So both methods will use an identical cursor; there are therefore no performance differences in terms of how fast that SQL cursor will be.
    If an identical SQL cursor is found, it can simply be reused (a soft parse). If not, the SQL engine needs to compile the supplied SQL source code into a new SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns less CPU cycles and is therefore better. However, no parsing at all is the best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop

    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor

    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the cursor handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach - unless you use DBMS_SQL (a complex cursor interface). With static SQL, PL/SQL's optimiser can kick in: it can optimise its access to the cursors your code creates and minimise parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL code can now only be checked at execution time and not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post problems here relating to dynamic SQL are using it unnecessarily, for no justified and logical reason, creating unstable, insecure and non-performing code.
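    To make the parse-once point concrete, here is a minimal PL/SQL sketch (tab is the hypothetical three-column table from the INSERT example above):

    DECLARE
      v_val VARCHAR2(10) := 'x';
    BEGIN
      -- Static SQL inside a loop: PL/SQL parses the INSERT once and
      -- reuses the cursor on every iteration; no per-row parse occurs.
      FOR i IN 1 .. 1000 LOOP
        INSERT INTO tab VALUES (i, v_val, SYSDATE);
      END LOOP;
      COMMIT;
    END;
    /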

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I have to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader). But I don't know by how much. Can anybody tell me the performance difference in % (like a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    Do not confuse the topic; as I told you, I'm not going to use external tables. This post is to discuss the performance difference between SQL*Loader and a simple INSERT statement.
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file, as they don't require any command line calls or external control files to be set up. All that is needed is a single external table definition, created in a similar way to any other table (just with the additional external table information). It also eliminates the need for a 'staging' table in the database, as the data can be queried directly from the file as needed; and if the file changes, so does the data seen through the external table, without re-running any SQL*Loader process.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
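    For comparison, a minimal external table sketch (the directory object, file, and table names are made up):

    -- One-off setup: a directory object pointing at the flat file's location.
    CREATE DIRECTORY data_dir AS '/u01/app/loads';

    CREATE TABLE staging_ext (
      rec_id   NUMBER,
      rec_text VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('data.txt')
    );

    -- The file can then be queried, or loaded, with plain SQL:
    INSERT INTO staging SELECT * FROM staging_ext;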

  • How to Perform Forced Manual Failover of Availability Group (SQL Server) and WSFC (Windows Server Failover Cluster)

    I have a scenario with three nodes running Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans two data centers.
    If the nodes in the primary data center become unavailable due to a data center outage, how can I automatically bring up the node in the secondary disaster recovery data center with a script?
    I want to write a script that checks the primary data center by pinging an IP every 5 or 10 minutes.
    If that IP fails to respond, the script should perform a forced manual failover of the availability group (SQL Server) and the WSFC (Windows Server Failover Cluster).
    Can you please guide me in writing a script for automatic failover in case of a primary data center outage?

    Please post your question on failover clusters in the cluster forum. They will explain how this works and point you at scripts.
    You should also look in the Gallery for cluster management scripts.
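    For reference, the T-SQL half of a forced failover is a single statement, run on the target secondary replica (the availability group name here is made up; forced failover allows data loss, so it belongs in a genuine disaster only):

    -- Run on the secondary replica in the DR data center.
    ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;

    The WSFC side (forcing quorum on the surviving nodes) still has to be handled separately, which is why the cluster forum is the right place for the full procedure.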
    ¯\_(ツ)_/¯

  • How to Perform Forced Manual Failover of Availability Group (SQL Server) and WSFC (Windows Server Failover Cluster) with scripting

    I have a scenario with three nodes running Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans two data centers.
    If the nodes in the primary data center become unavailable due to a data center outage, how can I automatically bring up the node in the secondary disaster recovery data center with a script?
    I want to write a script that checks the primary data center by pinging an IP every 5 or 10 minutes.
    If that IP fails to respond, the script should perform a forced manual failover of the availability group (SQL Server) and the WSFC (Windows Server Failover Cluster).
    Can you please guide me for script writing for automatic failover in case of a primary data center outage?

    You are trying to implement manually what should be happening automatically in the cluster. If the primary SQL Server becomes unavailable in the data center, it should fail over to the secondary SQL Server automatically. Is that not working?
    You also might want to run this configuration by some SQL experts. I am not a SQL expert, but if you have both hosts in the data center in a cluster, there is no need for replication between those two nodes, as they would be accessing the database from some form of shared storage. Then it looks like you are trying to implement AlwaysOn to the DR site. I'm not sure you can mix both types of failover in a single configuration.
    FYI, it would make more sense to establish a file share witness in your DR site instead of placing a third node in the data center for Node Majority quorum.
    . : | : . : | : . tim

  • No of columns in a table and SQL performance

    How does table size affect SQL performance?
    I am comparing 2 tables with the same number of rows (54 million rows):
    table1 (columns a,b,c,d,e,f, ...) has 40 columns
    table2 has columns (a,b,c,d)
    The SQL uses columns a and b.
    The SQL using table2 runs in 1 sec.
    The SQL using table1 runs in 30 min.
    Can anyone please let me know how table size and the number of columns in a table affect the performance of SQL?
    Thanks
    jeevan.

    user600431 wrote:
    This is a general question. I just want to compare a table with more columns to a table with fewer columns, with the same number of rows.
    I am finding that the table with fewer columns performs better than the table with more columns.
    Assuming there are no row chains, will there be any difference in performance with the number of columns in a table?
    Jeevan,
    the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan, it has to read through the whole table segment, so in that case the size of the table matters.
    A table having more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. VARCHAR2 columns, and think of blank (NULL) columns: you can easily end up with a table consisting of a single column taking up more space per row than a table with 200 columns consisting only of VARCHAR2(1) columns.
    Check the DBA/ALL/USER_SEGMENTS views to determine the size of your two table segments. If you gather statistics on the tables, the dictionary will also contain information about the average row size.
    If your query is using indexes, then in many cases the size of the table won't affect the query performance significantly.
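    A minimal sketch of the checks suggested above (the table names are the poster's):

    -- Physical size of each table segment.
    SELECT segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
    FROM   user_segments
    WHERE  segment_name IN ('TABLE1', 'TABLE2');

    -- After gathering statistics: row counts and average row length.
    SELECT table_name, num_rows, avg_row_len
    FROM   user_tables
    WHERE  table_name IN ('TABLE1', 'TABLE2');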
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
    I am using MS SQL 2k and used PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and PreparedStatement.setX() functions are used to set the values. I performed the test with the following code.
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            // statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    Iterations    Time (ms)
    1             961
    10            1061
    200           1803
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }

    Iterations    Time (ms)
    1             1171
    10            2754
    100           18817
    200           36443
    The above test was performed with the DataDirect JDBC 3.0 driver. The version that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
    I have tried different drivers - the one provided by MS, DataDirect and Sprinta JDBC drivers - but all suffer from the same problem to a different extent. So I am wondering if MS SQL doesn't support precompiled statements, and whether, no matter what JDBC driver I use, I will still have the performance problem. If so, many O/R mappers cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution as the driver doesn't have to keep any information about the PreparedStatement locally, the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
    The problem with this approach is that all names in the query must be fully qualified. This means the driver has to parse the query you are submitting and make all names fully qualified (by prepending a db name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement many times.
    As for why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement (?) ).
    As a conclusion: if you can reuse the same PreparedStatement, the performance hit is not so high; just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try out the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver and PreparedStatement caching is possible (i.e. the next time you prepare the same statement it will take much less time, as the previously submitted procedure will be reused).
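    To illustrate, what such a driver submits for the parameterised query looks roughly like this (a sketch; the database/schema prefix, parameter name and type are assumptions):

    EXEC sp_executesql
         N'SELECT * FROM mydb.dbo.cardno WHERE car_no = @P1',
         N'@P1 varchar(20)',
         @P1 = '12345';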
    Alin.

  • SQL/XML performance and xdbcore-parameter

    Hello,
    I am trying to figure out what the best method to retrieve large XML documents is.
    I compare performance and resources needed for: SQL/XML, DBMS_XMLGEN, SYS_XMLGEN and XSU.
    With a fixed set of 1000 rows I created XML of about 32 MB.
    I am on 10.2.0.4 on Linux
    XML is not schema based.
    SQL*Plus: Release 10.1.0.4.2 - Production on Fri Aug 21 13:35:42 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, Data Mining and Real Application Testing options

    What I see is that SQL/XML is the slowest and needs a huge amount of PGA (160 MB, and it seems to increase linearly; for 5000 rows I measured ~800 MB).
    As I've read here, memory issues with SQL/XML should have been solved in 9.2.0.3 (?)
    Is this the case? Or is this PGA amount ok?
    I would like also to test the impact of xdbcore-xobmem-bound and xdbcore-loadableunit-size parameters.
    But I cannot observe any impact. Must I restart the database every time after changing xdbconfig.xml?
    Even after increasing the two values from 1024 and 16 to 10240 and 5120 respectively, and restarting, the same speed and memory consumption are in place.
    Am I doing something wrong?
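    For context, the SQL/XML style being compared is queries of this shape (a trivial sketch against a hypothetical emp table; the real query produces roughly 32 KB of XML per row, given that 1000 rows yield ~32 MB):

    SELECT XMLELEMENT("Employee",
             XMLFOREST(e.empno AS "Id",
                       e.ename AS "Name"))
    FROM   emp e;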

    SQL*Loader: Release 11.2.0.2.0 - Production on H. Febr. 11 18:57:09 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Control File: prodxml.ctl
    Character Set UTF8 specified for all input.
    Data File: test.xml
    Bad File: test.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 5000
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: none specified
    Path used: Conventional
    Table PRODUCT_XML, loaded from every logical record.
    Insert option in effect for this table: APPEND
    Column Name Position Len Term Encl Datatype
    COL_ID FIRST 1000 CHARACTER
    (FILLER FIELD)
    IN_FILE DERIVED * EOF CHARACTER
    Static LOBFILE. Filename is bv_test.xml
    Character Set UTF8 specified for all input.
    SQL*Loader-605: Non-data dependent ORACLE error occurred -- load discontinued.
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Table PRODUCT_XML:
    0 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 256 bytes(64 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on H. Febr. 11 18:57:09 2013
    Run ended on H. Febr. 11 19:20:54 2013
    Elapsed time was: 00:23:45.76
    CPU time was: 00:05:05.50
    This is the log.
    I have truncated everything, and I am still not able to load 400 MB into 4 GB; I cannot understand it.
    Windows is an unlicensed 32-bit version.

  • Virtualised Servers and SQL performance

    Virtualised Servers and SQL performance
    A client would like to virtualise their servers, as this will enhance their redundancy.
    This will include the SQL Server that SAP B1 is running on.
    I would like to know whether this is a good idea concerning SQL performance (it is said to have a negative impact).
    Specs concerning the company:
    1. They would like to set up a Xen layer ("Linux") and then run their virtual machines from it on a SAN format, and use another server as redundancy.
    2. Their SBO companies total about 30 GB.
    3. They have about 100 users; 30% log on using Citrix.
    Can anyone tell me if this would be a good solution for them, or is it a complete no-no concerning performance?
    Thank you

    Hi,
    We have done exactly what your client wants to do, and I would strongly recommend spending a significant amount of time testing performance. Add-ons load normally (Sachin, you should check that you don't have 2 network cards on your license server).
    However, I am still not convinced that it is the best solution in terms of performance. If I had to do it again, I would probably have 2 physical SQL servers with full redundancy instead.
    Vincent

  • How to extract row_id from PL/SQL procedure and assign that to batch script

    Hello Team,
    I am stuck with a requirement wherein I am running a batch script, and within the batch script I am calling a procedure which inserts a record into a table (including a column named l_id).
    I need this generated l_id to be passed back as a variable to the batch script after the PL/SQL procedure completes, so that I can use the same l_id later in the same batch script to update the same record.
    Looking for some suggestions!!!!
    Thanks
    -Vj.

    789153 wrote:
    I am stuck with a requirement, wherein I am running a batch script and within the batch script I am calling a procedure which inserts a record in a table (including a column named l_id).
    Operating system? Scripting language or command shell used?
    I need this generated l_id to be passed on as a variable to batch script after PL/SQL procedure completion. So that I can refer this same l_id to update the same record in the table again in the same batch script in the later part.
    Looking for some suggestions!!!!
    Don't do this using batch scripts. Batch scripts are very much inferior to stored PL/SQL code when it comes to managing database processes and performing data crunching.
    Why can't this be written entirely in PL/SQL, with execution managed from either DBMS_JOB or DBMS_SCHEDULER? Before answering that, consider the following:
    - what is the superior language, PL/SQL or shell script?
    - what provides tighter integration with the database, PL/SQL or shell script?
    - what provides proper security and access control, PL/SQL or shell script?
    As you can call SQL*Plus from a shell script to run PL/SQL, you can call a shell script from PL/SQL instead to run external commands and processes.
    Use the right tool for the job. Shell scripting is an excellent tool - but only when correctly used. Are you using it correctly? I strongly doubt it...
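    If the value really must flow back to the shell, the usual pattern is an OUT parameter (populated inside the procedure via INSERT ... RETURNING) printed through a SQL*Plus bind variable; a sketch with made-up names:

    -- get_id.sql, run as: sqlplus -s scott/tiger @get_id.sql
    VARIABLE v_id NUMBER
    BEGIN
      -- my_proc is assumed to end with: INSERT ... RETURNING l_id INTO p_id;
      my_proc(p_id => :v_id);
    END;
    /
    PRINT v_id
    EXIT

    The batch script can then capture the printed value; though, as argued above, keeping the whole flow in PL/SQL avoids the hand-off entirely.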
