Replacement for SQL%ROWCOUNT

Hi All,
Please let me know whether there is any other function that does the job of SQL%ROWCOUNT, without the use of packages or procedures.
My intention is to get the count of the number of rows that were updated.
Thanks,
Vijayalakshmi

Why can't you do it in an anonymous PL/SQL block then?
Eg:
SQL> create table test (col1 number);
Table created.
SQL> insert into test values (1);
1 row created.
SQL> insert into test values (2);
1 row created.
SQL> commit;
Commit complete.
SQL> declare
  2   v_row_count number;
  3  begin
  4   update test
  5   set    col1 = 3
  6   where  col1 != 3;
  7   v_row_count := SQL%ROWCOUNT;
  8   insert into test values (v_row_count);
  9  end;
10  /
PL/SQL procedure successfully completed.
SQL> select * from test;
      COL1
         3
         3
         2
3 rows selected.
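If the block is run from SQL*Plus, a bind variable can also carry the count out of an anonymous block, still without any packages or procedures. A minimal sketch against the test table above (VARIABLE and PRINT are SQL*Plus commands):

variable v_row_count number
begin
  update test
  set    col1 = 4
  where  col1 != 4;
  :v_row_count := SQL%ROWCOUNT;  -- rows touched by the UPDATE above
end;
/
print v_row_count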

Similar Messages

  • SQL Developer replacement for Query Builder?

    Our developers use query builder to quickly develop sql code w/conditions, joins, where clauses, etc. I just installed release 2 of SQL Developer and am trying to figure out if this is a viable replacement for query builder. Is there a graphical "query builder" component as part of SQL Developer?

    Barry, I would agree that it should be clean and easy to use. I started using Data Browser 2.0 and migrated to Query Builder 6.0.7.1.0, and while Query Builder is no longer its own product, it is integrated into Reports Developer 10g. The basic layout and operation have been consistent through the life cycle, and I would think you'd keep the "look and feel" consistent with what's currently available (might simplify integrating it too). My biggest complaint with Query Builder 6.0.7 (we don't use the 10g version) is that it errors when opening a schema with more than 8192 objects. As an E-Business Suite 11.5.10 customer this is an issue for our developers when logging in as the standard APPS user.
    So here would be my "wish list"
    1. consistent look and feel with "older" versions of Browser/Query Builder
    2. removal of 8192 object limitation
    3. ability to open older .brw format files
    Certainly more improvements would be possible by integrating with SQL Developer and would be welcomed - this wish list is coming from a user who has MANY .brw files in the older 6.0.7 format.

  • Sqlcode and sql%rowcount as test conditions

    I am translating a procedure from Ingres to oracle. I use the Oracle sqlcode and sql%rowcount
    variables in place of Ingres's iierrornumber and iirowcount.
    I replaced the Ingres names with the Oracle names, and now am wondering how Oracle uses these
    values.
    I am using DBMS_OUTPUT.PUT_LINE to display the contents of sqlcode and sql%rowcount.
    What happens when I get "no rows selected"? When I run the same query standalone at the command line,
    I get "no rows selected", and that's correct because I don't have data that matches.
    But when I run it in the procedure, I don't get any values in sqlcode or sql%rowcount ... the
    program just exits. The author was using the ingres code to test if there was data there and then
    continue to do something else.
    If you get no rows returned, does that mean the sqlcode is 0 and the sql%rowcount is empty,
    which is why my DBMS_OUTPUT.PUT_LINE doesn't display anything?
    my example........
    before this section in the procedure ... after running a different select that returns data ...
    I am using the DBMS to debug the values I have going in ...
    DBMS_OUTPUT.Put_line ('BEFORE sqlcode is' || sqlcode);
    DBMS_OUTPUT.Put_line ('BEFORE sqlrowcount is' || sql%rowcount);
    Select value from table
    where condition ='Y'; -- this select will produce "no rows returned" and the 2 DBMS_OUTPUTs below are ignored
    DBMS_OUTPUT.Put_line ('AFTER sqlcode is' || sqlcode);
    DBMS_OUTPUT.Put_line ('AFTER sqlrowcount is' || sql%rowcount);
    When i run the procedure... I get the DBMS BEFORE statements, and nothing afterwards ...
    BEFORE sqlcode is 0
    BEFORE sqlrowcount is 1 >>>> that is all that displays
    Since I am getting the "no rows returned" ... how does Oracle handle it, in anyone's experience?
    Thank you. I am learning much from your comments and information.

    Thanks to your answers .... The procedure is below. I've had to hand type it in, so typos are my mistakes.
    The procedure compiles. When there is data to be found, I get the DBMS_OUTPUT lines:
    msg_read is Y
    sql_error is 0
    row_count is 1
    p_vol_id is 880091
    When I enter in a file name that does not return ANY rows back I will get the msg_read
    DBMS_OUTPUT line
    msg_read is Y
    Call completed.
    It doesn't show any 0 for sqlcode or sql%rowcount
    The original author used the Ingres return codes as input to process the rest of the code...
    It seems like oracle bounces the procedure once there are no rows to be found.
    I just added this part ....
    having an exception in the clause shows that Oracle is bouncing it to the WHEN OTHERS
    exception ...
    Any ideas of how to get Oracle not to do this ?
    I am trying to keep things simple; all I am testing for is whether I get records back. If I do, the code does things;
    if not, I do something else.
    create or replace procedure userfile(vms_fil_nam IN varchar2, msg_read IN varchar2)
    authid current_user
    is
      p_vms_fil_nam varchar2(255);
      p_vol_id varchar2(255);
      p_orig_id varchar2(255);
      p_incoming_message varchar2(255);
      sql_error number;
      n_count number;
    begin
      p_vms_fil_nam := vms_fil_nam;
      DBMS_OUTPUT.PUT_LINE ('msg read is '|| msg_read); -- verify incoming parameter
      IF (msg_read = 'Y')
      then
        select vol_id,
               orig_id,
               incoming_message
          into p_vol_id,
               p_orig_id,
               p_incoming_message
          from one_table a
         where a.vms_fil_nam = p_vms_fil_nam
           and incoming_msg = 'I';
        n_count := sql%rowcount;
        sql_error := sqlcode;
        DBMS_OUTPUT.PUT_LINE ('row count is '|| n_count);
        DBMS_OUTPUT.PUT_LINE ('sql_error is '|| sql_error);
        DBMS_OUTPUT.PUT_LINE ('p_vol_id is '|| p_vol_id);
      end if;
    exception -- just added this part as the last test
      when others then
        DBMS_OUTPUT.PUT_LINE ('other condition has been met');
    end;
    call userfile ('GEORGE','Y');
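    For the record: a SELECT ... INTO that finds no rows raises the NO_DATA_FOUND exception instead of setting sqlcode or sql%rowcount, so control jumps straight past the DBMS_OUTPUT calls into the WHEN OTHERS handler. A minimal sketch of trapping it explicitly, reusing the names from the procedure above:

    begin
      select vol_id, orig_id, incoming_message
        into p_vol_id, p_orig_id, p_incoming_message
        from one_table a
       where a.vms_fil_nam = p_vms_fil_nam
         and incoming_msg = 'I';
      n_count := sql%rowcount;  -- 1 when the select succeeds
    exception
      when no_data_found then
        n_count := 0;  -- no matching rows: do the "something else" here instead of falling into WHEN OTHERS
    end;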

  • Sql%rowcount

    I have an insert-as-select in a loop. Does rowcount give the number of inserts into the table tab?
    Thanks, Ven.
    for i in 1 .. n
    loop
      insert into tab (a, b, c, d)
        select 123, x.b, 0, abc_id.seq.nextval
        from class x, room y
        where x.id = y.id and x.id = i;
    end loop;
    no_of_inserts := sql%rowcount;

    Hi,
    SQL%ROWCOUNT gives the number of rows affected by the last DML statement. To count across all iterations:
    no_of_inserts := 0;
    for i in 1 .. n
    loop
      insert into tab (a, b, c, d)
        select 123, x.b, 0, abc_id.seq.nextval
        from class x, room y
        where x.id = y.id and x.id = i;
      no_of_inserts := no_of_inserts + sql%rowcount;
    end loop;
    Assuming Class.Id is of INTEGER type, this whole loop can be replaced with the single statement:
      insert into tab (a, b, c, d)
        select 123, x.b, 0, abc_id.seq.nextval
        from class x, room y
        where x.id = y.id and x.id BETWEEN 1 AND n;
      no_of_inserts := sql%rowcount;
    Regards,
    Dima

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: 2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF


  • GeoRaptor 3.0 for SQL Developer 3.0 and 2.1 has now been released

    Folks,
    I am pleased to announce that, after 5 months of development and testing, a new release of GeoRaptor for SQL Developer 2.1 and 3.0 is now available.
    GeoRaptor for SQL Developer 3 is available via the SQL Developer Update centre. GeoRaptor 3 for SQL Developer 2.1 is being made available
    via a download from the GeoRaptor website.
    No release notes have been compiled as the principal developer (oops, that's me!) is currently busy doing real work for a change (another 3 weeks), earning a living
    and keeping the wolves at bay. More extensive notes (with images) will be compiled when I get back. (Unless anyone is offering! See next.)
    We are still looking for people to:
    1. Provide translations of the English dialog menus etc.
    2. Write more extensive user documentation. If you use a particular part of GeoRaptor a lot and think
    you have found out all its functionality and quirks, contact us and offer to write a few pages of
    documentation on it. (Open Office or Microsoft Word is fine.) Easiest way to do this is to simply
    make screen captures and annotate with text.
    3. Conduct beta testing.
    Here are the things that are in the new release.
    New functionality:
    Overhaul of Validation Functionality.
    1. User can specify own validation SELECT SQL as long as it returns three required columns. The SQL is thus totally editable.
    2. Validation update code now allows user to associate a PL/SQL function with an error number which is applied in the UPDATE SQL.
    3. UPDATE SQL can use WHERE clause of validation SELECT SQL (1) to update specific errors.
       NOTE: The generated UPDATE statement can be manually edited. It is NEVER run by GeoRaptor. To run any UPDATE, copy the statement
       to the clipboard and run in an appropriate SQL Worksheet session within SQL Developer.
    4. Main validation table allows:
       a. Sorting (click on column header) and
       b. Filtering.
       c. Copying to Clipboard via right mouse click sub menu of:
          - Geometry's SDO_ELEM_INFO array constructor.
          - SDO_GEOMETRY constructor
          - Error + validation string.
       d. Access to Draw/Zoom functions which were previously buttons.
       e. Added a new right mouse click menu "Show Feature's Individual Errors" that gathers up all the errors
          it can process - along with the ring / element that is host to the error (if it can) - and displays
          them in the Attribute/Geometry tabs at the bottom of the Map Window (where "Identify" places its results).
          The power of this will be evident to all those who have wanted a way of stepping through errors in a geometry.
       f. Selected rows can now be deleted (select rows: press <DELETE> key or right mouse click>Delete).
       g. Table now has only one primary key column, and has a separate error column holding the actual error code.
       h. Right mouse click menu entry added to the table to display a description of the error in the new column (drawn from Oracle documentation)
       i. Optimisations added to improve performance for large error lists.
    5. Functionality now has its own validation layer that is automatically added to the correct view.
       Access to layer properties via button on validation dialog or via normal right mouse click in view/layer tree.
    Improved Rendering Options.
    1. Linestring colour can now be random or drawn from column in database (as per Fill and Point colouring)
    2. Marking of SDO_GEOMETRY objects overhauled.
       - Ability to mark or LABEL vertices/points of all SDO_GEOMETRY types with coordinate identifier and
         optional {X,Y} location. Access is via the Labelling tab in layer>properties. Thus, coordinate 25 of a linestring
         could be shown as: <25> or {x,y} or <25> {x,y}
       - There is a nice "stacked" option where the coordinate {x,y} can be written one line below the id.
       - For linestrings and polygons the <id> {x,y} label can be oriented to the angle between the vectors or
         edges that come in, and go out of, a vertex. Access is via "Orient" tick box in Labelling tab.
       - Uses Tools>Preferences>GeoRaptor>Visualisation>SDO_ORDINATE_ARRAY bracket around x,y string.
    3. Start point of linestring/polygon and all other vertices can be marked with user selectable point marker
       rather than previously fixed markers.
    4. Can now set a NULL point marker by selecting "None" for point marker style pulldown menu.
    5. Positioning of the arrow for linestring/polygons has extra options:
       * NONE
       * START    - All segments of a line have the arrow positioned at the start
       * MIDDLE   - All segments of a line have the arrow positioning in the middle.
       * END      - All segments of a line have the arrow positioning in the END.
       * END_ONLY - Only the last segment has an arrow and at its end.
    ScaleBar.
    1. A new graphic ScaleBar option has been added for the map of each view.
       For geographic/geodetic SRIDs distances are currently shown in meters;
       For all SRIDs an attempt is made to "adapt" the scaleBar units depending
       on the zoom level. So, if you zoom right in you might get the distance shown
       as mm, and as you zoom out, cm/m/km as appropriate.
    2. As the scaleBar is drawn, a 1:<DENOMINATOR> style MapScale value is written
       to the map's right most status bar element.
    3. ScaleBar and MapScale can be turned off/on in View>Properties right mouse
       click menu.
    Export Capabilities.
    1. The ability to export a selection from a result set table (ie the result of
       executing an ad-hoc SQL SELECT statement) to GML, KML, SHP/TAB (TAB
       adds a TAB file "wrapper" over SHP) has been added.
    2. Ability to export table/view/materialised view to GML, KML, SHP/TAB also
       added. If no attributes are selected when exporting to a SHP/TAB file, GeoRaptor
       automatically adds a field that holds a unique row number.
    3. When exporting to KML:
       * one can optionally export attributes.
       * Web sensitive characters < > & etc for KML export are replaced with &gt; &lt; &amp; etc.
       * If a column in the SELECTION or table/view/Mview equals "name" then its value is
         written to the KML tag <name> and not to the list of associated attributes.
         - Similarly for "description" -> <description> AND "styleUrl" -> <styleUrl>
    4. When exporting to GML one can optionally export attributes in FME or OGR "flavour".
    5. Exporting Measured SDO_GEOMETRY objects to SHP not supported until missing functionality
       in GeoTools is corrected (working with GeoTools community to fix).
    6. Writing PRJ and MapInfo CoordSys is done by pasting a string into appropriate export dialog box.
       Last value pasted is remembered between sessions which is useful for users who work with a single SRID.
    7. Export directory is remembered between sessions in case a user uses a standard export directory.
    8. Result sets containing MDSYS.SDO_POINT and/or MDSYS.VERTEX_TYPE can also be written to GML/KML/SHP/TAB.
       Example:
       SELECT a.geom.sdo_point as point
         FROM (SELECT sdo_geometry(2002,null,sdo_point_type(1,2,null),sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)) as geom
                 FROM DUAL) a;
       SELECT mdsys.vertex_type(a.x,a.y,a.z,a.w,a.v5,a.v6,a.v7,a.v8,a.v9,a.v10,a.v11,a.id) as vertex
         FROM TABLE(mdsys.sdo_util.getVertices(mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)))) a;
    9. A dialog appears at the end of each export which details totals of what was exported when the exported recordset/table contains more
       than one shape type. For example, if you export only points (eg 2001/3001) from a table that also contains multipoints (eg 2005/3005), then
       the number of points exported and the multipoints skipped will be displayed.
    10. SHP/TAB export is "transactional". If you set the commit interval to 100 then only 100 records are held in memory before writing.
        However, this does not currently apply to the associated DBASE records.
    11. SHP/TAB export supports dBase III, dBase III + Memo, dBase IV and dBase IV + Memo.
        Note: Memo allows text columns > 255 characters to be exported. Non-Memo formats do not and any varchar2 columns will be truncated
        to 255 chars. Some GIS packages support MEMO eg Manifold GIS, some do not.
    12. Note. GeoRaptor does not ensure that the SRID of SDO_GEOMETRY data exported to KML is in the correct Google projection.
        Please read the Oracle documentation on how to project your data if this is necessary. An example is:
        SELECT OBJECTID,
               CODIGO as name,
               NOME as description,
               MI_STYLE,
               SDO_CS.TRANSFORM(shape,'USE_SPHERICAL',4055) as shape
          FROM MUB.REGIONAL;
    13. NOTE: The SHP exporter uses the Java Topology Suite (JTS) to convert from SDO_GEOMETRY to the ESRI Shape format. JTS does not handle
        circular curves in SDO_GEOMETRY objects; you must "stroke" them using sdo_util.arc_densify(). See the Oracle documentation on how
        to use this.
    Miscellaneous.
    1. Selection View - Measurement has been modified so that the final result only shows those geometry
       types that were actually measured.
    2. In Layer Properties the Miscellaneous tab has been removed because the only elements in it were the
       Geometry Output options which have now been replaced by the new GML/KML/etc export capabilities.
    3. Shapefile import's user entered tablename now checked for Oracle naming convention compliance.
    4. Identify based on SDO_NN has been removed from GeoRaptor given the myriad problems that it seems to create across versions
       and partitioned/non-partitioned tables. Instead SDO_WITHIN_DISTANCE is now used with the actual search distance (see circle
       in map display): everything within that distance is returned.
    5. Displaying/Not displaying embedded sdo_point in line/polygon (Jamie Keene), is now controlled by
       a preference.
    6. New View Menu options to switch all layers on/off
    7. Tools/Preferences/GeoRaptor layout has been improved.
    8. If Identify is called on a geometry a new right mouse click menu entry has been added called "Mark" which
       has two sub-menus called ID and ID(X,Y) that will add the labeling to the selected geometry independently of
       what the layer is set to being.
    9. Two new methods for rendering an SDO_GEOMETRY object in a table or SQL recordset have been added: a) Show geometry as ICON
       and b) Show geometry as THUMBNAIL. When the latter is chosen, the actual geometry is shown in an image _inside_ the row/column cell it occupies.
       In addition, the existing textual methods for visualisation: WKT, KML, GML etc have been collected together with ICON and THUMBNAIL in a new
       right mouse click menu.
    10. Tables/Views/MViews without spatial indexes can now be added to a Spatial View. To stop large tables from killing rendering, a new preference
        has been added "Table Count Limit" (default 1,000) which controls how many geometry records can be displayed. A table without a spatial
        index will have its layer name rendered in Italics and will write a warning message in red to the status bar for each redraw. Adding an index
        while the layer exists will be recognised by GeoRaptor during drawing, and the layer will be switched across to normal rendering.
    Some Bug Fixes.
    * Error in manage metadata related to getting metadata across all schemas
    * Bug with no display of rowid in Identify results fixed;
    * Some fixes relating to where clause application in geometry validation.
    * Fixes bug with scrollbars on view/layer tree not working.
    * Problem with the spatial networks fixed. Actions for spatial networks can now only be done in the
      schema of the current user, as it could happen that a user opens the tree for another schema that
      has the same network as in the user's schema. Dropping a network drops only the network of the currently connected user.
    * Recordset "find sdo_geometry cell" code has been modified so that it now appears only if a suitable geometry object is
      in a recordset.  Please note that there is a bug in SQL Developer (2.1 and 3.0) that causes SQL Developer to not
      register a change in selection from a single cell to a whole row when one left clicks at the left-most "row number"
      column that is not part of the SELECT statements user columns, as a short cut to selecting a whole row.  It appears
      that this is a SQL Developer bug so nothing can be done about it until it is fixed. To select a whole row, select all
      cells in the row.
    * Copy to clipboard of SDO_GEOMETRY with M and Z values no longer has an extraneous "," at the end.
    * Column based colouring of markers fixed
    * Bunch of performance improvements.
    * Plus (happily) others that I can't remember!
    If you find any bugs, register a bug report at our website.
    If you want to help with testing, contact us at our website.
    My thanks for help in this release to:
    1. John O'Toole
    2. Holger Labe
    3. Sandro Costa
    4. Marco Giana
    5. Luc van Linden
    6. Pieter Minnaar
    7. Warwick Wilson
    8. Jody Garnett (GeoTools bug issues)
    Finally, when at the Washington User Conference I explained the willingness of the GeoRaptor Team to work
    for some sort of integration of our "product" with the new Spatial extension that has just been released in SQL
    Developer 3.0. Nothing much has come of that initial contact and I hope more will come of it.
    In the end, it is you, the real users who should and will decide the way forward. If you have ideas, wishes etc,
    please contact the GeoRaptor team via our SourceForge website, or start a "wishlist" thread on this forum
    expressing ideas for future functionality and integration opportunities.
    regards
    Simon
    Edited by: sgreener on Jun 12, 2011 2:15 PM

    Thank you for this.
    I have been messing around with this the last few days, and I really love the feature to pinpoint the validation errors on the map.
    It has always been so annoying to try to pinpoint these errors using some other GIS software while doing your SQL.
    I have stumbled on a few bugs:
    1. In the "Validate geometry column" dialog, checking the option "Use DimInfo" actually still uses the value entered in the tolerance text box.
    I found this because in my language settings "," is the decimal separator.
    2. In the "Validate geometry column" dialog, the text boxes showing SQL don't always show everything on long lines of text (clipping text from the right).
    3. In the "Validate geometry column" dialog, "Create Update SQL" has a few bugs:
    - if you have selected multiple rows from the results and check "Use Selected Geometries", the generated IN-clause in the SQL will have the same rowid (the rowid of the first selected result) for all entries.
    Also the other generated IN clause in the WHERE-clause is missing a separator if you select more than one corrective function.
    4. The "Validate geometry column" dialog annoyingly stays topmost when using the "Create Update SQL" dialog.

  • SQL Server 2005 replaced with SQL Server 2014 trying to connect front end Access as guest (read only ODBC)

    We have replaced a SQL Server 2005 with a SQL Server 2014 (new physical server.)  Have the new server set up to use SQL Server login OR Windows user login. Had old server connecting (for a particular DB) to front end Access (2010 or 2013) as guest for
    anyone logged into the Windows NT Network with a read only ODBC connection. Have the DB in the new server set to include guest as db_datareader (with only SELECT permission for the securables of each table and view being linked) but when any Windows user not
    specifically listed as a SQL DB user tries to use the front end they get an error of:
    Microsoft SQL Server Login
    Connection failed:
    SQL State: '28000'
    SQL Server Error: 18456
    [Microsoft][ODBC SQL Server Driver][SQL Server] Login failed for user {domain\user}.
    After closing that pop-up window a server login window appears. Of course, since the guest user is not specifically listed as a user in the DB that fails also. It seems like there should be a very simple solution to this, but I can't seem to find it. I want
    to allow anyone logged in on the Windows system (locally) to be able to open the MS Access file (on their work station machine) and run their own (read only; select) queries on the SQL Server database. Any suggestions?
    Thanks a billion in advance ----

    Thanks for the response Olaf. I have now spent weeks researching this. I realize that using the guest account in most situations is not advised. As mentioned, I have restricted the guest account to allow the db_datareader role only, and have explicitly denied
    all other roles, as well as allowing select only, and still have no access for the guest account.
    The suggested fix in the second link you provided, of using Windows groups is not plausible for my situation either. We are a scientific field research institution, with a few long term users and lots of users that may have Windows accounts for a few months,
    and then they are gone. It would be a nightmare for the network tech to try to keep a group account up to date, and we need to give access (read only, of course) to anyone logged into the system. Realize that the ONLY access of any kind to this database is
    through MS Access ACCDB, using a (by default) read-only ODBC connection.
    This type of access is used particularly because researchers need to be able to set up their own queries, and the MS Access query interface is particularly convenient for people who are not themselves SQL experts, yet are trying to get some very advanced
    levels of output. Putting the database online is not practical because then we are back to the need for a comprehensive query interface, and just picking up general subsets of the data online (from a basic web page search feature) would be out of the question,
    since the result set would involve hundreds of thousands if not millions of records.
    So - that said - what exactly would you suggest, assuming we don't have the funds to buy a whole new system, and have spent plenty of money with Microsoft's Enterprise level MS Office so that all work stations have MS Access, and Microsoft's SQL Server,
    as well as running our network on Microsoft's network software.
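    For what it's worth, error 18456 is a server-level login failure: the guest database user is only consulted after a login has succeeded at the instance level, so granting guest db_datareader alone can't fix it. A hedged T-SQL sketch of one way to let every domain user authenticate and then fall through to guest (the group name and database name below are placeholders, not from the post):

    -- Assumption: all relevant Windows accounts belong to DOMAIN\Domain Users.
    CREATE LOGIN [DOMAIN\Domain Users] FROM WINDOWS;
    GO
    USE TheDatabase;
    GO
    -- guest already holds db_datareader per the post; enabling CONNECT is what
    -- lets logins without an explicit database user map to guest.
    GRANT CONNECT TO GUEST;
    GO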

  • Dyn sql execute immediate: a PL/SQL block returning sql%rowcount of rows inserted

    I would like to get the number of rows inserted by a dynamic SQL statement like the following:
    execute immediate
    'begin insert into a(c1,c2,c3)
    (select ac1,ac2,ac3 from ac where ac1=:thekey
    UNION select tc1,tc2,tc3 from tc where tc1=:otherkey); end;' using in key1, in key2;
    Per the Oracle8i dynamic SQL chapter in the docs I have tried RETURN sql%rowcount, but it won't compile.
    I also tried adding 'returning sql%rowcount into :rowcount' to the statement.
    No luck.
    Any ideas?
    thanks.

    Quick comment first - I'm not sure why you are even using dynamic SQL here. There is nothing dynamic about your statement. It is equivalent to just:
    insert into a (c1, c2, c3)
      select ac1, ac2, ac3
        from ac
       where ac1 = key1
      UNION
      select tc1, tc2, tc3
        from tc
       where tc1 = key2;
    Unless you are really dynamically changing a table or column name here and just didn't show it in your example, you certainly don't want the overhead of NDS for this insert in this situation.
    In any case, you just need to evaluate SQL%ROWCOUNT in the next statement.
    insert ...;
    if sql%rowcount = 0 then
      -- nothing was inserted
    end if;
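    If the statement genuinely must be dynamic (say, the table name varies), SQL%ROWCOUNT still works: drop the begin/end wrapper, run the INSERT itself with EXECUTE IMMEDIATE, and read SQL%ROWCOUNT immediately afterwards. A minimal sketch (the variable names and bind types here are assumptions, not from the post):

    declare
      v_table varchar2(30) := 'A';   -- hypothetical dynamic table name
      key1    number := 1;           -- bind values as in the original post
      key2    number := 2;
      v_count pls_integer;
    begin
      execute immediate
        'insert into ' || v_table || ' (c1, c2, c3)
           select ac1, ac2, ac3 from ac where ac1 = :thekey
           UNION
           select tc1, tc2, tc3 from tc where tc1 = :otherkey'
        using key1, key2;
      v_count := sql%rowcount;  -- rows inserted by the dynamic statement
    end;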

  • Announcement: OrindaBuild 5.0 Extension for SQL Developer 1.5.1

    Folks,
    OrindaBuild is now available as an extension for SQL Developer 1.5.1.
    OrindaBuild creates Java source code to run your existing PL/SQL. This is a non-trivial task and for large projects can consume hundreds of man-hours as well as delay development.
    OrindaBuild does roughly the same thing that JPublisher does, but writes human readable code, doesn't need SQLJ and doesn't require that you use oracle TYPE objects as parameters for records and arrays.
    Another way of thinking of OrindaBuild is "We reach the parts of your application that Hibernate can't".
    OrindaBuild is available as extensions for JDeveloper and SQL Developer 1.1. After an unreasonably long gestation OrindaBuild is now available as an extension for SQL Developer 1.5.1 The delay has been caused by us rewriting both the JDeveloper and SQL Developer extensions so that code base has now been unforked. We've also standardized the functionality with that of our recently upgraded Eclipse extension.
    To install the demo select 'Help/Check for updates' and then use the 'Add' button to create an update center for Orinda Software using this URL:
    http://www.orindasoft.com/public/sqldev15Center.xml
    Functionality:
    The demo version is fully playable, with the only limitation being that it expires after 1 month. You can install the demo multiple times.
    In addition to generating Java to call PL/SQL it also allows you to create code to run any SQL statement and access database tables.
    OrindaBuild generates code for PL/SQL procedures that take %ROWTYPE and Package Records as parameters. Generated code uses a library. If you buy OrindaBuild you get the source code for the library and end up with a 100% source code solution - i.e. there are no mysterious runtime binary dependencies.
    Limitations:
    The extension does not work with versions of SQLDeveloper prior to 1.5.1 (build 5440). We expect you to buy a licence if you deploy generated code in a production environment.
    For more information see
    http://www.orindasoft.com/public/sdefeatures.php4
    David Rolfe

    It is possible that the patch has replaced your shortcut with one pointing to the version within the 11g home. Try running sqldeveloper directly from the executable in the 151 directory.

  • Question about Find and Replace PL/SQL option in Forms Builder

    Dear All
    I have a small question.
    I am using Find and Replace PL/SQL in Forms Builder (Edit - Find and Replace PL/SQL) to search for code in a form.
    Let's say I am searching for EMP.
    Then all the places where EMP is used come up, but EMPloyee and EMPloyee_Desc also come up.
    So is there any way to search for only the EMP part?

    If you look closely to the right of the "Find What" field, you should see a button - "Expression". This allows you to add some logic to your search. Alternatively, depending on how you code, you could just look for spaces. For example, if you are looking for EMP and not EMPLOYEE, search for EMP with a blank space before and after it (assuming you code this way) or use something like this: \bEMP\b. This would work for me because I code the same way I write sentences - each word is separated by spaces.
    <blockquote> EMPLOYEE := EMP || ' - ID';</blockquote>
    In this example, searching for EMP with a blank space after it would find what I wanted. However, if you only search for a trailing blank, you may end up also finding words like tEMP. This is where having a blank space before each word is helpful.
    Other, more valuable expressions could be written to accomplish your task. Refer to the Forms Builder Online help for more information about using Expressions in the Find and Replace search window.

  • SQL%rowcount problem

    I have created a function in PL/SQL to check the difference between 2 similar tables having the same columns. If there is a difference, that is, if the select query returns rows, then an entry should go into the error log table, but this is not happening. Could you solve my problem?
    thanks in advance
    CREATE OR REPLACE FUNCTION CAL_BETWEEN_TABLES(
      source_name IN VARCHAR2,
      target_name IN VARCHAR2)
    RETURN number
    IS
      v_success CONSTANT NUMBER := 0;
      v_failure CONSTANT NUMBER := -1;
      V_SQL VARCHAR2(10000);
      v_rows_processed NUMBER := 0;
      p_err_modul error_logs.err_modul%TYPE := 'NULL';
      p_err_function error_logs.err_function%TYPE := 'CAL_BETWEEN_TABLES';
      p_err_type error_logs.err_type%TYPE := 'SN';
    BEGIN
      V_SQL := 'SELECT * from '||source_name||' minus select * from '||target_name;
      v_rows_processed := sql%rowcount;
      IF v_rows_processed > 0
      THEN
        p_err.insert_error(p_err_modul,
                           p_err_function,
                           p_err_type,
                           SQLCODE,
                           USER,
                           'Failed in the function CAL_BETWEEN_TABLES' || SQLERRM);
        RETURN(v_failure);
        --dbms_output.put_line('failure');
      ELSE
        RETURN(v_success);
        --dbms_output.put_line('success');
      END IF;
    END CAL_BETWEEN_TABLES;

    You are missing EXECUTE IMMEDIATE, or it is a cut-and-paste problem. Also,
    declare a variable:
    v_row source_name%rowtype;
    V_SQL:= 'SELECT * from '||source_name||' minus select * from '||target_name||'';
    EXECUTE IMMEDIATE v_sql INTO v_row;
    v_rows_processed:=sql%rowcount;
    HTH
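    Two caveats on that, though: source_name is a VARCHAR2 variable, so source_name%rowtype won't compile, and selecting the whole MINUS result INTO a single record raises TOO_MANY_ROWS as soon as more than one row differs. If the goal is only to know whether any rows differ, a sketch of a simpler counting approach (replacing the two lines after BEGIN in the original function) would be:

    V_SQL := 'SELECT COUNT(*) FROM (SELECT * FROM ' || source_name ||
             ' MINUS SELECT * FROM ' || target_name || ')';
    EXECUTE IMMEDIATE V_SQL INTO v_rows_processed;  -- rows in source not matched in target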

  • HANA equivalent of oracle's sql%rowcount to get affected rows.

    I want to get the number of rows affected from an insert or update statement inside a stored procedure.
    Is there any equivalent to oracle's sql%rowcount that can be called after the query.
    For example:
    create procedure procedure_name
    begin
         declare l_c integer;
         insert into table values ('somevalue');
         l_c := sql%rowcount; -- This would return 1 for the row inserted.
    end;

    Yes, after the INSERT statement....
    SELECT ::ROWCOUNT into L_C FROM DUMMY;
    Cheers,
    Rich Heilman

  • Backup for Sql Azure Database

    Hi,
    We are using the automatic backup option available with SQL Azure, but it is not working for us and is throwing an error. When we raised a ticket for this, we received a reply saying the feature is not supported by MS.
    Could you please advise on the best approach for taking backups of a SQL Azure database.
    Regards...

    Point in time restore is a great feature to sit alongside automated export, but not a replacement. Automated Export allows us to keep long term backups (30, 60, 90 days), which are important, and also gives us the ability to import a backup
    from a previous day to my local server for testing or support reasons.
    RedGate provides a backup service as well, but I was hoping that by using PIT restore and automated backup together I would no longer need to pay for that third-party service.

  • Replacement for ic_loct_inv in R12

    My client is currently running 11i OPM. They are going for an R12 upgrade now. We have a lot of reports and PL/SQL programs using the ic_loct_inv table in 11i. I would like to know the replacement for ic_loct_inv in R12 Discrete Mfg.

    Hi,
    I have already gone through the OPM release 12.1 migration documents provided in note 376683.1.
    MetaLink note 1098353.1 is somewhat helpful for me in analyzing the transactions during the upgrade phase. However, I am still not able to find the exact replacement for ic_loct_inv in R12. Your assistance is really required and appreciated.
    Thanks

  • EWA via solman for SQL Server

    Hi,
    I'm trying to configure EWA via solman for an ECC 6.0 system on MS SQL 2005.
    I've completed all the necessary steps, i.e. all the RFCs are working fine, RTCCTOOL is ok, and the report is getting generated in SDCCN on the satellite system... however it doesn't seem to get transferred to solman.
    RSCOLL00 gives a weird "database system not supported" message although it shows finished successfully.
    Is there some other program for SQL Server or maybe something that I'm missing...
    Pls. help.
    Thanks a lot,
    Saba.

    Hi Saba,
    you can't schedule EWA for Solution Manager sessions in SDCCN of the satellite. The only kind you can schedule in the satellite is the kind you send directly to SAP, which is why SDCC_OSS appears as the RFC connection. You have to fetch EWA for Solution Manager sessions from the Solution Manager using a refresh sessions task. Does anything happen when you perform a refresh sessions task in SDCCN on the satellite (when you pick the RFC to Solution Manager as the destination)?
    You should also check that the service definitions are current in both satellite and Solution Manager systems. Note 727998 explains how to delete and replace them. (Often the only way of being totally sure that the service definitions are correct.)
    best regards,
    -David.
    Edited by: David Murphy  on Nov 20, 2008 4:26 PM
