Flexible SQL

Hi All,
I have a table which I am using in a lookup in Informatica.
select out1, out2, out3
from rule_table
where in1 = '100'
and in2 = '1'
and in3 = '155';
OK the above will return values if we have a direct hit.
e.g. rule table (in1-in3 are inputs, out1-out5 are outputs):
in1     in2     in3     out1     out2       out3     out4     out5
100     1       null    IE22     371003     970
100     1       +++     IE22     700101              X        IE2000
100     1       155     IE92     700101              X
However, if I go in with 100, 1, 98 then it should pick up row 2, as '+++' is a placeholder for any in3 value that is neither null nor explicitly defined.
How do I do this in SQL?

Sorry guys for being vague. Here is the DDL and some sample data:
CREATE TABLE FHMR_MAPPING_RULE (
FHMR_ID bigint NOT NULL PRIMARY KEY,
FHMR_FK_FHIF_ID bigint NOT NULL,
FHMR_RULE_NO int NOT NULL, -- each interface has so many rules defined by rule number .. so mapplet goes in using rule_no
FHMR_INPUT_VALUE1 varchar(255),
FHMR_INPUT_VALUE2 varchar(255),
FHMR_INPUT_VALUE3 varchar(255),
FHMR_INPUT_VALUE4 varchar(255),
FHMR_INPUT_VALUE5 varchar(255),
FHMR_INPUT_VALUE6 varchar(255),
FHMR_INPUT_VALUE7 varchar(255),
FHMR_INPUT_VALUE8 varchar(255),
FHMR_INPUT_VALUE9 varchar(255),
FHMR_INPUT_VALUE10 varchar(255),
FHMR_OUTPUT_VALUE1 varchar(255),
FHMR_OUTPUT_VALUE2 varchar(255),
FHMR_OUTPUT_VALUE3 varchar(255),
FHMR_OUTPUT_VALUE4 varchar(255),
FHMR_OUTPUT_VALUE5 varchar(255),
FHMR_OUTPUT_VALUE6 varchar(255),
FHMR_OUTPUT_VALUE7 varchar(255),
FHMR_OUTPUT_VALUE8 varchar(255),
FHMR_OUTPUT_VALUE9 varchar(255),
FHMR_OUTPUT_VALUE10 varchar(255),
FHMR_VALID_FROM date NOT NULL,
FHMR_VALID_TO date NOT NULL,
FHMR_VERSION integer NOT NULL,
FHMR_ACTIVE char(1) NOT NULL,
FHMR_CREATE_DATE date NOT NULL,
FHMR_CREATE_USER varchar(20) NOT NULL,
FHMR_UPDATE_DATE date NOT NULL,
FHMR_UPDATE_USER varchar(20) NOT NULL
);
insert into cdhud.fhmr_mapping_rule values (3,1,1,
'100','1',null,null,null,null,null,null,null,null,
'IE22','3710003','970',null,null,null,null,'X',null,null,
date('2010-01-01'),
date('9999-12-31'),
1,
'A',
current date,
'xxx',
current date,
'xxx');
insert into cdhud.fhmr_mapping_rule values (4,1,1,
'100','1','+++',null,null,null,null,null,null,null,
'IE22','700101',null,'X','IE2000',null,null,'X',null,null,
date('2010-01-01'),
date('9999-12-31'),
1,
'A',
current date,
'xxx',
current date,
'xxx');
insert into cdhud.fhmr_mapping_rule values (5,1,1,
'100','1','155',null,null,null,null,null,null,null,
'IE922','700101',null,'X',null,null,null,'X',null,null,
date('2010-01-01'),
date('9999-12-31'),
1,
'A',
current date,
'xxx',
current date,
'xxx');
Please change CURRENT DATE (DB2 syntax) to SYSDATE if you are running this on Oracle.
Then
select * from (select fhmr_id,fhmr_input_value1, fhmr_input_value2, fhmr_input_value3,fhmr_output_value1, fhmr_output_value2, fhmr_output_value3,
rank() over (order by case when fhmr_input_value3 = '+++' then 1 else 0 end desc) as rn
from cdhud.fhmr_mapping_rule
where fhmr_input_value1 = '100'
and fhmr_input_value2 = '1'
and fhmr_input_value3 = '155') t;
The above gives us 1 row.
select * from (select fhmr_id,fhmr_input_value1, fhmr_input_value2, fhmr_input_value3,fhmr_output_value1, fhmr_output_value2, fhmr_output_value3,
rank() over (order by case when fhmr_input_value3 = '+++' then 1 else 0 end desc) as rn
from cdhud.fhmr_mapping_rule
where fhmr_input_value1 = '100'
and fhmr_input_value2 = '1'
and fhmr_input_value3 = '156') t;
This one should give us the '+++' row instead, but with an equality filter on '156' it returns nothing.
Thx
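
One way to get that fallback behaviour (a sketch against the table above; row_number works the same way in DB2 and Oracle, and the literals '100', '1', '156' stand in for the lookup inputs) is to accept both the exact value and the '+++' placeholder, rank an exact match ahead of the placeholder, and keep only the top-ranked row:
select fhmr_id, fhmr_output_value1, fhmr_output_value2, fhmr_output_value3
from (
    select r.*,
           row_number() over (
               order by case when fhmr_input_value3 = '+++' then 1 else 0 end
           ) as rn
    from cdhud.fhmr_mapping_rule r
    where fhmr_input_value1 = '100'
      and fhmr_input_value2 = '1'
      and (fhmr_input_value3 = '156' or fhmr_input_value3 = '+++')
) t
where rn = 1;
With in3 = '155' the exact row ranks first; with '156' only the '+++' row qualifies, so it is returned. Treating the null-in3 row as a further fallback would need one more branch in the ordering expression.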

Similar Messages

  • Using one cursor's data in another cursor

    Hi all,
    I have a requirement of processing the following.
    I have to select the rows for which a few selected attributes are null. Then I need to take one more attribute from these rows and check whether its value exists for any other row except this one. If that id exists for another row, then I have to update two fields, concatenating with the value.
    Ex: suppose I have 3 rows for which all three attributes are null. I have a value, say CCN_ID, which I have to check for existence on any row other than this one. That means the CCN_ID of these 3 rows should be checked for existence in any other rows of the same table. If the CCN_ID of any of these 3 rows exists elsewhere, then I need to update that row's attribute (say X) to a value concatenated with the CCN_ID.
    I wrote the following query to accomplish this.
    select CCN_ID from TEMP_ARTICLE_TABLE where ARTICLE_NO is null and COLOR_BOTTOM is NULL and COLOR_TOP is NULL and CCN_ID IN
    (SELECT DISTINCT CCN_ID from TEMP_ARTICLE_TABLE where CCN_ID in(select CCN_ID from TEMP_ARTICLE_TABLE where ARTICLE_NO is null and COLOR_BOTTOM is NULL and COLOR_TOP is NULL)
    and rowId not in(select rowId from TEMP_ARTICLE_TABLE where ARTICLE_NO is null and COLOR_BOTTOM is NULL and COLOR_TOP is NULL));
    But I was requested to write a Stored Procedure to do this. So, I am thinking of creating two cursors, in which the first cursor's data is used in the second one, as follows. I am receiving compilation errors while compiling the code. Am I taking the best possible approach? Is there a better approach to accomplish my requirement?
    SP Code:-
    CREATE OR REPLACE PROCEDURE SP_ARTICLEDATA_CLEANING IS
    CURSOR Cur_CCNID is
    SELECT rowId,CCN_ID FROM TEMP_ARTICLE_TABLE WHERE ARTICLE_NO is null AND COLOR_TOP IS NULL AND COLOR_BOTTOM IS NULL;
    CURSOR Cur_ModCCNID IS
    SELECT DISTINCT CCN_ID FROM TEMP_ARTICLE_TABLE WHERE CCNID IN(Cur_CCNID.CCN_ID) AND rowId NOT IN(Cur_CCNID.rowId);
    BEGIN
    FOR rec in Cur_CCNID
    LOOP
    BEGIN
    UPDATE TEMP_ARTICLE_TABLE SET COLOR_TOP='FIX_'||rec.CCN_ID WHERE CCN_ID=rec.CCN_ID and rowId in(cur_CCNID.rowId);
    END;
    END LOOP;
    END;
    Errors:-
    [Error] PLS-00225 (5: 102): PLS-00225: subprogram or cursor 'CUR_CCNID' reference is out of scope
    [Error] ORA-00904 (5: 102): PL/SQL: ORA-00904: "CUR_CCNID"."ROWID": invalid identifier,
    [Error] PLS-00225 (11: 102): PLS-00225: subprogram or cursor 'CUR_CCNID' reference is out of scope,
    [Error] ORA-00904 (11: 102): PL/SQL: ORA-00904: "CUR_CCNID"."ROWID": invalid identifier
    Please guide me if there is a better way or how to correct these errors.
    Thanks,
    Pavan.

    The basic answer to your problem is: use SQL. A single SQL statement can very likely do the entire operation. There is no need for row-by-row (also called "slow-by-slow") processing in PL/SQL.
    PL/SQL is inferior when it comes to crunching database data. That is the territory of the very powerful and very flexible SQL language. Or simply put: Maximise SQL. Minimise PL/SQL.
    As for cursors - a cursor is the compiled, executable form of a source code program (e.g. a SQL statement). It is not a result set. This cursor/program outputs data, and it takes parameters in the form of bind variables, which means the same cursor (executable) can be reused repeatedly, even by different sessions, each execution with different bind variable values.
    Want to insert 10 million rows? Single SQL cursor (executable program). Called 10 million times. Each time with different parameters (bind variables). Via 10,000 different application sessions. At the same time.
    So a cursor calling a cursor is not really a sensible approach. It cannot be done the way you want in your code, and it is only used for special processing requirements like running PL/SQL in parallel using pipelined table functions.
    Bottom line - whoever told you to write a stored proc to do this is wrong. The correct solution is writing SQL to perform the processing and update you need. That SQL can then be placed inside PL/SQL in order to manage the execution of that SQL cursor and deal with validation of input parameters, error handling, code instrumentation and so on.
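    As an illustration of that point, the whole requirement could plausibly be a single UPDATE along these lines (a sketch against the poster's TEMP_ARTICLE_TABLE; it assumes the rows to fix are the ones with the NULL attributes, and it only sets COLOR_TOP, as the posted procedure does):
    UPDATE temp_article_table t
       SET t.color_top = 'FIX_' || t.ccn_id
     WHERE t.article_no   IS NULL
       AND t.color_top    IS NULL
       AND t.color_bottom IS NULL
       AND EXISTS (SELECT 1
                     FROM temp_article_table o
                    WHERE o.ccn_id = t.ccn_id
                      AND o.rowid <> t.rowid);  -- same CCN_ID present on some other row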

  • How can I load a .xlsx File into a SQL Server Table using a Foreach Loop Container in SSIS?

    I know I've REALLY struggled with this before. I just don't understand why this has to be soooooo difficult.
    I can very easily do a straight Data Pump of a .xlsX File into a SQL Server Table using a normal Excel Connection and a normal Excel Source...simply converting Unicode to DT_STR and then using an OLE DB Destination of the SQL Server Table.
    If I want to make the SSIS Package a little more flexible by allowing multiple .xlsX spreadsheets to be pumped in by using a Foreach Loop Container, the whole SSIS Package seems to go to hell in a hand basket. I simply do the following...
    Put the Data Flow Task within the Foreach Loop Container
    Add the Variable Mapping for the variable User::FilePath, which I defined as a string variable within the Foreach Loop Container
    I change the Excel Connection and its Expression to be ExcelFilePath ==> @[User::FilePath]
    I then try and change the Excel Source and its Data Access Mode to Table Name or view name variable and provide the Variable Name User::FilePath
    And that's when I run into trouble...
    Exception from HRESULT: 0xC02020E8
    Error at Data Flow Task [Excel Source [56]]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
    Error at Data Flow Task [Excel Source [56]]: Opening a rowset for "...(the EXACT Path and .xlsx File Name)...". Check that the object exists in the database. (And I know it's there!!!)
    I don't understand why adding a Foreach Loop Container to try and make this as efficient as possible has caused such an error, unless I'm overlooking something. I have even tried delaying my validations and that doesn't seem to help.
    I have looked hard in Google and even YouTube to try and find a solution for this but for the life of me I cannot seem to find anything on pumping a .xlsX file into SQL Server using a Foreach Loop Container.
    Can ANYONE please help me out here? I'm at the end of my rope trying to get this to work. I think the last time I was in this quandary, trying to pump a .xlsx File into a SQL Server Table using a Foreach Loop Container in SSIS, I actually wrote a C# Script
    to write the contents of the .xlsX File into a .csv File and then Actually used the .csv File to pump the data into a SQL Server Table.
    Thanks for your review and am hoping and praying for a reply and solution.

    Hi ITBobbyP,
    If I understand correctly, you want to load data from multiple sheets in an .xlsx file into a SQL Server table.
    If in this scenario, please refer to the following tips:
    The Foreach Loop container should be configured as shown below:
    Enumerator: Foreach ADO.NET Schema Rowset Enumerator
    Connection String: The OLE DB Connection String for the excel file.
    Schema: Tables.
    In the Variable Mapping, map the variable to Sheet_Name, and change the Index from 0 to 2.
    The connection string for Excel Connection Manager is the original one, we needn’t make any change.
    Change Table Name or View name to the variable Sheet_Name.
    If you want to load data from multiple sheets in multiple .xlsx files into a SQL Server table, please refer to following thread:
    http://stackoverflow.com/questions/7411741/how-to-loop-through-excel-files-and-load-them-into-a-database-using-ssis-package
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: 2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF


  • PL/SQL code written in db or in application...????

    Hi ,
    Which is the best for performance, maintainability, etc.?
    To write PL/SQL as a validation process for data values about to be inserted into the db:
    1) as db trigger (before insert or update row-level trigger)
    2) in application level - in Forms10g...
    This PL/SQL code compares some pairs of data values only.... there DML statement....
    Both the db server and the app server are assumed to be either on the same machine or on two machines very close to each other...
    Note: I use DB 10g.
    Thanks...
    Sim

    It is all about moving parts. How many there are. Where they are located.
    The fewer moving parts, the less complexity and the fewer bugs - and the better the performance (fewer wheels to turn to crunch data), the easier the maintenance and the better the flexibility.
    Part of this is having the moving parts close to the data. Kind of obvious - what is faster? Shipping data from the db engine to a front-end application via an app server and web server? Or shipping the data from the db (SQL) engine to a PL/SQL process running as part and parcel of the db instance? IPC is significantly faster than TCP/IP.
    Having the moving parts close to the data also means that it can scale with the data. Oracle is good at scalability. It is designed at its very core to be scalable. Partitions to reduce I/O. Shared Server to reduce o/s resource footprint per client session. Shared pool to re-use SQL. DB buffer to cache data. RAC to run multiple db instances for a single physical database. Etc. Etc.
    With the application moving parts being close to the data, it inherits this scalability. You can "multi-thread" application code using pipelined table functions. You have access to forking application code using background processes (DBMS_JOB and DBMS_SCHEDULER). You have bulk processing in order to transfer more data per SQL engine call, between the buffer cache and your application, and thus also reduce expensive context switching.
    This list goes on and on.
    All this is summarised into the following rules.
    Rule 1. Do it in SQL.
    Rule 2. Only when SQL cannot do it, use PL/SQL. (e.g. SQL is not suited for managing and controlling process flow - PL/SQL is)
    Rule 3. Only when PL/SQL cannot do it, use <insert-language-here> (e.g. PL/SQL cannot render a Windows grid on the client - but Delphi/C#/Java can).
    Also keep in mind that in the modern client-server paradigm we deal with a web architecture. Which means the client is a web browser. And this means that PL/SQL can pass the data (HTML) required by the client (web browser) to render the User Interface.
    Have a look at APEX (Oracle's Application Express) for how this works. http://apex.oracle.com
    All you need to develop web applications is APEX (a PL/SQL product suite in the database) and a web browser. Kickass stuff... :-)

  • PL/SQL and Java implementation

    Hi all,
    We need to implement a fast solution for reporting to our existing web application. Application is totally based on stored procedures written in PL/SQL.
    We already have Java classes for drawing different types of charts,
    named ChartDirector (http://www.advsofteng.com/).
    This library can create chart images in memory.
    I have already read about calling Java classes from PL/SQL, and about SQLJ too.
    My questions are:
    1. Is it safe to load classes into the database and create
    reports using loadjava in real life? I mean not in theory, in real life?
    2. Does anyone in the audience have experience with this method?
    3. Would there be any performance problems with this?
    4. Should I prefer to use a Java container and JSPs instead of this method?
    I will continue to research the necessary steps once I can see the light :)
    Thank you to all,
    Regards,
    Gokhan

    > 1. Is it safe to load classes into the database and create
    reports using loadjava in real life? I mean not in theory, in real life?
    Yes.
    > 2. Does anyone in the audience have experience with this method?
    Experiments and actual implementation - yes. It works. Granted, it does not always work as easy as running the Java code from the command line. But it does work.
    > 3. Would there be any performance problems with this?
    This is a potential problem with any software code.
    > 4. Should I prefer to use a Java container and JSPs instead of this method?
    To be honest, I prefer not using Java for web graphics as I'm not impressed with the quality and flexibility of the graphs generated. I prefer using (and paying for the commercial use of) the JPGraph classes for PHP - and using that from PL/SQL (via Zend Core for Oracle).
    Another option is to use SVG and generate the SVG directly from PL/SQL. But this requires a browser plugin, and SVG also does not look that great. (This is, btw, what Oracle's HTMLDB does and uses.)
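    For completeness, once a class is loaded with loadjava it is exposed to PL/SQL through a call specification. A minimal sketch (the class and method names below are hypothetical, not the ChartDirector API):
    CREATE OR REPLACE FUNCTION chart_lib_version RETURN VARCHAR2
    AS LANGUAGE JAVA
    NAME 'com.example.ChartWrapper.version() return java.lang.String';
    /
    -- then callable like any PL/SQL function:
    SELECT chart_lib_version FROM dual;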

  • Three tier (mod pl/sql) vs. two tier (PL/SQL Gateway)

    I've been using 10g Database and 10g application server on separate servers for some time now.
    Going the two tier (11g) route has some attractions, but what are the disadvantages?
    The Oracle documentation I've seen says very little on making the decision, giving benefits as:
    Ease of configuration
    Included in the database
    No separate server installation
    - but no negatives.
    Does anyone have any real live experience of comparing the two options?
    I'm inclined to believe that three tier might have more tuning flexibility and better performance if each tier is on a different server. Maybe worse than two tier if on one server, assuming two tier eliminates communication overheads.
    Does the PL/SQL gateway have the caching ability of Apache/mod_plsql - I assume not? That could make a big difference.
    Any thoughts would be welcome...

    There are several key performance advantages of OHS over EPG. I'm working a lot with the EPG right now and pushing the XDB team to add several of these features (maybe in 11.2, possible backport, but don't count on it). I used recommendations from the YSlow Firefox add-in to do some performance tuning. Here's their list of best practices:
    http://developer.yahoo.com/performance/rules.html
    - EPG does not add an "Expires" header. So, let's say you have 25 images in your page template, and none of them change. Each page view will still request those 25 images. They use ETags, so you don't have to download the images, but your browser still makes the requests, which is quite slow. From my testing, pages could be up to 4 times slower with the EPG with a pretty standard template. The XDB team is aware of this and working hard to resolve it.
    - EPG does not support gzip. This is another HUGE performance hit.
    Keep in mind you can't test any of those issue with debug mode in APEX, you really need to use a browser plugin such as Firebug + ySlow. The render speed from APEX's point of view will be the same, no matter what HTTP server you use.
    The other big one is mod_rewrite support. There is no easy way to create friendly URLs for your apps. Another thing to consider is that a number of Identity Management systems, such as Oracle Access Manager (OAM), work by installing an Apache module or, in the case of IIS, some type of plugin (I forget what they call it). There is no concept of this in EPG.
    IMHO, it's convenient for laptops, but I would never use it for production unless you needed some feature that it exposes, such as WebDav or FTP access to the XDB repository...
    Tyler

  • Hyperlink from PL SQL Report to a branch

    Hi,
    Is there a "good" way to hyperlink from a PL/SQL report to a branch?
    I can't use the regular report, as there is a lot of data and I need to control the layout.
    I would like to go to another page and pass data to one known field.
    Have tried googling and I found this page:
    http://www.lucidprocess.com/blog/building-hyperlinks-in-oracle-apex-reports/
    But it seems a bit "hacky", e.g. many places to edit if the page were to move or change in the future (or if one copies the page).
    Is there a more "clean" way to do this, or does one have to get his/her hands dirty for such a task?

    Thank you.
    I managed to write the other code too:
    "f?p='||:APP_ID||':686:'||:APP_SESSION||'::'||:DEBUG||'::P686_TBNR,P686_VIS_REGISTER:'||oppdrag.PE_TBNR_BET||'%2CHUSKNYTT"instead of firing the branch and using JS, I just set the two fields that change the logic and basis of the page. However as you see, the "flaw" with the code I wrote, is that if the fields or anything were to change, it will be harder to go in and edit reports that have this type of code.
    Have just started with APEX, but I must say I like it a lot..
    Just wish there were something in between the SQL reports and the PL/SQL reports...
    The possibilities of the PL/SQL reports combined with the flexibility of the SQL reports would be super.
    However, maybe it's good not everything is served on a plate, as this forces me to learn it more in depth. I also love challenges and I like making things properly :-)

  • How to develop a report in Crystal with flexible database name?

    Hello
    I am the Project Manager of a project developing reports in Crystal 11.
    The idea is to develop reports on top of the content in MS SQL tables.
    The initial testing and demonstration to the customer is done within the Crystal development environment.
    In a later stage, we need to integrate the reports with C# WPF application, using Crystal control.
    We currently use ODBC connections.
    We want the flexibility to set the actual database dynamically, by using the "default database" of the ODBC connection (or any other way). In other words, we want not just the flexibility to change the database server, but work with different database names, like "ProductDB_TEST", "ProductDB_PROD" etc. - without changing the report.
    Unfortunately, we got the answer from the developer that the database name should be pre-defined for a given report. Although the connection can be set to another server, the DB name cannot be set dynamically.
    Looking into the "Database" -> "Show SQL Query" menu, we see the following piece inside the query:
    INNER JOIN "DATABASE_NAME"."dbo"."IncidentTypeSnapshotData"
    So it looks like the query itself contains the DB name.
    Is it really a limitation of Crystal, or rather the developer working on the project doesn't know the trick?
    Thanks for any hint
    Max

    CR 2011 / "Crystal Reports for Visual Studio 2010" - you are correct.
    Re. the database thingy. You can connect to a database via ODBC, OLE DB or, in some instances, natively. Once a report is created you can change the datasource. A good sample app on how to do this is csharp_win_dbengine / vb_win_dbengine. A link to the samples is here:
    Crystal Reports for .NET SDK Samples - Business Intelligence (BusinessObjects) - SCN Wiki
    More info on connecting to dbs and changing them is in the developer help files:
    SAP Crystal Reports .NET SDK Developer Guide
    SAP Crystal Reports .NET API Guide
    More info on CR APIs for .NET (applies to all versions of CR and VS):
    Crystal Reports for Visual Studio 2005 Walkthro... | SCN
    You can also use ADO .NET Datasets and in this way you handle the database connections in your app. A good sample is csharp_win_adodotnet (also available in VB) - same link as above.
    More info on datasets:
    Crystal Reports Guide To ADO.NET
    Crystal Reports for Visual Studio .NET - Walkthrough - Reporting Off ADO.NET Datasets
    For more complicated operations (e.g., changing a report from ODBC to OLE DB, changing one table, etc.), you will want to use the InProc RAS SDK that is also available in CRVS. Developer help files are here:
    Report Application Server .NET SDK Developer Guide
    Report Application Server .NET API Guide
    Sample apps are here:
    NET RAS SDK Samples - Business Intelligence (BusinessObjects) - SCN Wiki
    and here:
    Crystal Reports .NET In Process RAS (Unmanaged) SDK Sample Applications
    More info on RAS SDK:
    How to Use The RAS SDK .NET With In-Process RAS Server
    Lastly, do use the search box in the top right corner. I find simple search strings such as 'crystal net parameter' return best results (KBAs, Blogs, docs, wikis, discussions and more).
    - Ludek

  • How to use import parameter to be instead of SQL where sub-sentence ?

    I wrote an RFC to read data from an SAP table. To fetch data flexibly, I want to use an import parameter xx instead of the WHERE clause in the SQL statement.
       For example, "SELECT * FROM T WHERE XXX", where "XXX" is an importing parameter.
       How can I do that?
       Thanks a lot.
       Frank.

    FUNCTION ZRFC_04.
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(TARGETTABLE) LIKE  MAKT-MAKTX
    *"     VALUE(TWHERE) LIKE  MAKT-MAKTX
    *"  EXPORTING
    *"     VALUE(ZRETURN) LIKE  MAKT-MAKTX
    *"  TABLES
    *"      TMP_TEST1 STRUCTURE  ZTEST1
      DATA:
      TRANSACTION_ID LIKE ARFCTID,
      V_VAILD(1) TYPE C,
      scond(80) TYPE c.
      V_VAILD = 'X'.
    GET PARAMETER twhere fields scond.
    The error "'LATE FIELDS' expected, not 'TWHERE FIELDS'" is generated.

  • Tricky SQL query... how to get all data in a single query?

    create table employee_definition (def_id number, def_name varchar(50));
    insert into employee_definition values (100, 'EMAIL');
    insert into employee_definition values (200, 'MOBILE_PHONE');
    insert into employee_definition values (300, 'HOME_PHONE');
    SQL> select * from employee_definition;
        DEF_ID DEF_NAME
           100 EMAIL
           200 MOBILE_PHONE
           300 HOME_PHONE
    create table employee_data (def_id number, def_value varchar(20), emp_id number);
    insert into employee_data values (100, '[email protected]', 123);
    insert into employee_data values (200, '01232222', 123);
    insert into employee_data values (300, '5555', 123);
    insert into employee_data values (100, '[email protected]', 666);
    insert into employee_data values (200, '888', 666);
    insert into employee_data values (300, '999', 666);
    insert into employee_data values (300, '444', 777);
    SQL> select * from employee_data;
        DEF_ID DEF_VALUE                EMP_ID
           100 [email protected]              123
           200 01232222                    123
           300 5555                        123
           100 [email protected]              666
           200 888                         666
           300 999                         666
           300 444                         777
    7 rows selected.
    I'm supposed to create a SQL query that will return the email, mobile_phone, and home_phone for a set of employees. The result will be something like this:
    EMPLOYEE ID | HOME_PHONE | MOBILE_PHONE | EMAIL
    123         | 5555       | 01232222     | [email protected]
    666         | 999        | 888          | [email protected]
    777         | 444        | null         | null
    The thing I'm finding difficult here is that the same column is used to store different values, based on the value in the employee_definition table (something like a key/value pair). If I do:
    SQL> select emp_id, def_value as email from employee_data, employee_definition
      2  where employee_data.def_id = employee_definition.def_id
      3  and employee_definition.def_name = 'EMAIL';
        EMP_ID EMAIL
           123 [email protected]
           666 [email protected]
    It's partially ok... I'm just getting the definition for 'EMAIL'. But how can I get all the values in a single query, knowing that the column stores different values based on def_name?

    Oh no, not again.
    Entity attribute models always seem like a great idea to people who have been in the profession for five minutes and lack any kind of fundamental knowledge.
    It staggers me that someone with 2,345 posts still believes "you need a 'detail table' for [storing multiple telephone numbers]"
    "A person can have multiple telephone numbers" is not an excuse to build a tired person_attribute table. Niether is the bizarre proposal by someone with over 4k posts who should know better in an earlier post that EAV models are necessary to support temporal fidelity.
    Taken to it's logical conclusion, EAV modelling leads to just two application tables. THINGS and THING_ATTRIBUTES. And when you consider that a THING_ATTRIBUTE is also a THING, why not roll those two tables up into one also? Hmmm, what does THINGS and THING_ATTRIBUTES look like? I know, TABLES and COLUMNS. Who would've guessed? SQL already provides the completely flexible extensible attribute model the advocates of EAV proscribe. But it also has data types, physical data independence, constraints and an efficient query language which EAV does not.
    EAV modelling errodes the semantics of the attributes which are bundled into the "attribute" table.
    There is no point in storing 12 different phone numbers with implied functional dependency to unconstrained and often repeating notional attributes like "MOBILE", "LANDLINE", "WORK", err, "WORK2", err, "MOBILE2", err, ... when this phone type attribute has no semantic value. When you want to call someone, you invariably want to retrive the prefered_phone_number which may depend on a time of day, or a call context.
    These things need to be modelled properly (i.e normalised to BCNF) within the context of the database.
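    For the record, the single query the original poster asked for can be written with plain conditional aggregation over the two tables defined above (a sketch in standard SQL):
    SELECT d.emp_id,
           MAX(CASE WHEN ed.def_name = 'HOME_PHONE'   THEN d.def_value END) AS home_phone,
           MAX(CASE WHEN ed.def_name = 'MOBILE_PHONE' THEN d.def_value END) AS mobile_phone,
           MAX(CASE WHEN ed.def_name = 'EMAIL'        THEN d.def_value END) AS email
      FROM employee_data d
      JOIN employee_definition ed ON ed.def_id = d.def_id
     GROUP BY d.emp_id
     ORDER BY d.emp_id;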

  • Flexible Report - choose the cols (and # of cols) on the fly - PDF

    Hi
    I have a user community who would like to be able to choose the columns in a "Flexible Report" when they want to run a report to output to the browser, and then to PDF.
    Now, I've played around a bit with setting up a number of page items - say 5 - called COL1..COL5 and setting the values from list-boxes and then running a report SQL:-
    Select :col1 COL1,
    :col5 COL5
    from my_table
    etc.
    This produces the data, but it's pretty tacky and the column names are COL1, COL2... - also tacky, and the number of columns is fixed.
    Anyone got any slick ideas on how to do this so the column names and # are variable?
    I have tried a JavaScript example (posted elsewhere in the forum) that runs the report with all its columns, and allows the columns to be shown / hidden depending on the state of a checkbox. This is very cool but it doesn't translate the flexibility to PDF output.
    For those who have used Discoverer - I need a mini-Discoverer report writer!!
    Any input gratefully received - thanks in anticipation.
    Mike
    Message was edited by:
    Mike Mac

    hi
    see this thread, it may help you...
    column name/table name
    regards
    vally.s

  • SQL Server 2008 Express R2 exceeds the storage capacity of Oracle XE

    Hello, here is a novelty: SQL Server 2008 Express R2 now exceeds the storage capacity of Oracle XE. XE is still limited to 4GB of data, while the competition gets better and offers a bit more; XE remains the same. Hopefully Oracle 11g XE will exceed SQL Server in storage and memory.
    [http://www.microsoft.com/express/Database/]
    Roberto.
    Edited by: rober584812 on Apr 27, 2010 5:09 PM

    Yes, that is correct. I have not thought of migrating from Oracle XE to SQL Server Express because I could not give up the flexibility of PL/SQL and other Oracle-specific features; however, SQL Server 2008 R2 Express does take the lead over XE in storage, which Oracle Corp. will hopefully match or exceed, offering more storage in response to the competition and not only compensating for the 4GB limitation with otherwise excellent features.
    Roberto.

  • Contained Database Users are now available in Azure SQL Database Update preview

    Contained Database Users should be of particular help for people migrating to Azure SQL Database. At the moment, this is a preview release but you can start testing. Here is the announcement of the preview with links to more information.
    New SQL Database public preview with new Standard-tier performance level
    Previously announced in November 2014 and now available for customers to try, the new public preview of SQL Database improves the compatibility of SQL Server applications for Azure SQL Database. Details of this preview are available on the SQL Database documentation webpage, including the following key enhancements: easier management of large databases to support heavier workloads with parallel queries and online indexing, support for programmability functions like CLR and XML index to support more robust application design, improved monitoring and troubleshooting with XEvents and 100 new Dynamic Management Views (DMV), and more performance in the Premium tier.
    To try this preview, please sign up via the Preview features webpage. Only SQL Database servers with a mix of one or more Basic, Standard, or Premium (not Web or Business) databases are compatible and eligible to upgrade to the preview. Please note that any move of an existing Basic, Standard, or Premium database into this preview is irreversible; we recommend that you create a database copy or leverage test databases on any server enrolled in this preview.
    A new Standard-tier performance level, S3, is also available in this preview, which gives you more pricing flexibility between Standard and Premium. S3 will deliver 100 Database Throughput Units (DTU) and all the features available in the Standard tier. Please note that S3 will appear on your bill as a multiple of S2 until further notice.
    For more information, please visit the SQL Database webpage and the Microsoft Azure Blog. For a comprehensive look at pricing, please visit the SQL Database pricing webpage.
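    For anyone trying this, a contained database user is created directly in the user database rather than from a server login; a minimal T-SQL sketch (the user name, password, and role below are placeholders):
    -- run while connected to the target database, not master
    CREATE USER app_user WITH PASSWORD = 'Str0ng!Passw0rd1';
    ALTER ROLE db_datareader ADD MEMBER app_user;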
    Rick Byham, Microsoft, SQL Server Books Online, Implies no warranty

    Hello Rick
    That is great. One thing I'd like to ask: does it support SSMS, SSDT?
    No sign of that yet, that I’ve seen.....
    Best Regards, Uri Dimant, SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • PL/SQL Evaluation problem of where clause in case of  NUMBER column type

    I found the following problem in Oracle® Database 2 Day Developer's Guide 11g Release 1 (11.1) B28843-04:
    The sole parameter of function eval_frequency is employee_id IN employees.employee_id%TYPE.
    An ORA-01422 exception occurs when the execution reaches the following select command
    SELECT e.hire_date
    INTO hire_date
    FROM employees e
    WHERE employee_id= e.employee_id;
    A possible cause of the error is that the type of employee_id is NUMBER while employees.employee_id is NUMBER(6,0). The result of the selection is the same as if there were no WHERE clause at all.
    Everything worked fine when I declared a temporary variable of NUMBER(6,0) for storing the actual parameter of the function and used this variable in the WHERE clause, but I consider this "solution" to be no solution.
    It is pointless to use a %TYPE parameter in a function for flexibility if I must degrade this flexibility with a fixed declaration of a temporary variable of the same type as the column in question.
    What is wrong?
    The Developer's Guide I used, the Oracle SQL Developer I used, or the PL/SQL version?

    Hi,
    Welcome to the forum!
    user8949829 wrote:
    A possible cause of the error is that the type of employee_id is NUMBER while the employees.employee_id is NUMBER(6,0). The result of the selection is the same as if there were no WHERE clause at all.
    I don't think so. The variable employee_id is defined as having the exact same type as the eponymous column. Even if it didn't, I believe Oracle will always implicitly convert between datatypes when possible, rounding if necessary. That may cause errors, but it isn't causing this error.
    No, the error has nothing to do with the data type. It has to do with the ambiguity of employee_id: is it a column, or is it a variable?
    The default is that it means the column name, so
    WHERE   employee_id = e.employee_id
    is equivalent to saying
    WHERE   e.employee_id = e.employee_id
    which isn't quite the same thing as not having a WHERE clause; rows with NULL employee_id would still be excluded, if there were any.
    I think it's best not to use variable names that are the same as column names. You could call the variable v_employee_id, or, since it's an IN-argument, in_employee_id.
    If you must use a variable that can be mistaken for a column, then qulaify it with the name of the procedure, like this:
    WHERE   eval_frequency.employee_id = e.employee_id
    Everything worked fine, when I declared a temporary variable of NUMBER(6,0) for storing the actual parameter of the function
    That makes sense. You probably gave that variable a name that couldn't be mistaken for a column in the table.
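    As a minimal sketch of the corrected lookup, with the parameter renamed so it cannot be mistaken for the column (the function and variable names here are illustrative, not taken from the guide):
    CREATE OR REPLACE FUNCTION get_hire_date (
        in_employee_id IN employees.employee_id%TYPE
    ) RETURN employees.hire_date%TYPE
    IS
        v_hire_date employees.hire_date%TYPE;
    BEGIN
        SELECT e.hire_date
          INTO v_hire_date
          FROM employees e
         WHERE e.employee_id = in_employee_id;  -- column on the left, parameter on the right
        RETURN v_hire_date;
    END;
    /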
    Edited by: Frank Kulash on Jan 12, 2011 8:27 PM
