Prep for SQL and PL/SQL Oracle 9i exam

I'm currently preparing for the Oracle 9i PL/SQL exam. Has anyone attempted this exam and can recommend a study guide? I'm reading through the 8i student guide, and I'm not sure whether there was a mass of changes between releases.

Hi,
you can download the Oracle 9i documentation:
'Oracle9i PL/SQL User's Guide and Reference',
'Oracle9i SQL Reference',
'Oracle9i Application Developer's Guide - Fundamentals'
In these documents, refer to the 'What's New' section; it usually appears next to the preface, before the first chapter. It contains a summary of the new features, with links into the documentation.
Download the candidate guide from the Oracle site, compare its objectives with these new features, and prepare what is needed for the exam.
Good Luck!

Similar Messages

  • Statement for Lock and Unlock an Oracle user (Urgent)

    Hi DBAs.
    I just want to know the statement used to lock and unlock an Oracle user through the SYSDBA user.
    Thanks
    Hassan

    Hey,
    SQL> alter user <USERNAME> account lock;
    and
    SQL> alter user <USERNAME> account unlock;
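    To verify the change, query the account status afterwards; a quick check, assuming your session can read DBA_USERS:
    SQL> SELECT username, account_status FROM dba_users WHERE username = '<USERNAME>';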
    Cheers,
    Marcello M.
    Sorry Justin, same answer........
    Message was edited by:
    Marcello M.

  • ASM Disk preparation for Datafiles and FRA in Oracle 10g RAC Inst

    Dear Friends,
    Please clarify whether the method below is correct to configure ASM disks for datafiles and FRA.
    Partitions provided by IT team for OCR and Voting Disk
    /dev/sda1 - 150 GB (For +DATA)
    /dev/sda2 - 100 GB (For +FRA)
    OS     : RHEL 5.6 (64 Bit)
    kernel version = 2.6.18-238.el5
    Steps:(Node1)
    1) Install the RPM's for ASM
    rpm -Uvh oracleasm-support-2.1.7-1.el5.x86_64.rpm
    rpm -Uvh oracleasm-2.6.18-238.el5-2.0.5-1.el5.x86_64.rpm
    rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
    2) Configure ASM
    /etc/init.d/oracleasm configure
    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]:
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [  OK  ]
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    3) Create ASM disks
    /etc/init.d/oracleasm createdisk DISK1 /dev/sda1
    /etc/init.d/oracleasm createdisk DISK2 /dev/sda2
    4)/etc/init.d/oracleasm status
    5)/etc/init.d/oracleasm scandisks
    6)/etc/init.d/oracleasm listdisks
    7) Nothing to perform on Node2
    8) In dbca choose ASM and map the DISK1 for datafiles and DISK2 for FRA
    Please confirm whether the above steps are right; if not, please clarify.
    If DBCA -> ASM doesn't discover my disks, then what discovery path should I give?
    Please refer me to any document / Metalink ID for the above complete process.
    Can I have the ASM and Oracle DB binaries in the same home?
    Regards,
    DB

    user564706 wrote:
    > If DBCA -> ASM doesn't discover my disk then what should be the Discovery path i have to give?
    For ASM disks created with oracleasm, the discovery path is ORCL:*
    > Please refer any document / Metalink ID for the above complete process
    http://docs.oracle.com/cd/B19306_01/install.102/b14203/storage.htm#BABIFHAB
    > Can i have ASM and oracle DB binary in the same home
    Yes, unless you want job role separation or plan to run multiple versions of Oracle homes.
    > Regards,
    > DB
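    Once the ASM instance is up, the same discovery path can also be set directly on it; a minimal sketch, assuming ASMLib-provisioned disks:
    SQL> ALTER SYSTEM SET asm_diskstring = 'ORCL:*' SCOPE=BOTH;
    SQL> SELECT path, header_status FROM v$asm_disk;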

  • Select statement help for sql/oracle newbie

    I have a db for a fake airline. I have a route table that has columns for "FIRST_CLASS_FARE", "COACH_FARE", and "ECONOMY_FARE". I then have a flight table that references the route#, then a trip table that references the flight#. I also have a passenger table and a reservation table that has the passenger # in it. The reservation table also has a column called "reservation class" that has an 'f' for first class fare, 'c' for coach fare, and 'e' for economy fare.
    What I am trying to do is create a bill for each individual passenger that shows the total amount they spent on all of their reservations.
    Any suggestions on how to join the tables to create this bill?
    Thanks in advance!

    Is this how the passenger table is populated: passenger A takes route 1 (first class fare $2500) on flight B123 for trip t1, with a reservation of class 'f'?
    If so, you derive the bill by picking the fare column that matches the reservation class.
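    A sketch of the join, assuming hypothetical key names and that each reservation row also carries the trip#:
    SELECT p.passenger_no,
           SUM(CASE r.reservation_class
                 WHEN 'f' THEN rt.first_class_fare
                 WHEN 'c' THEN rt.coach_fare
                 WHEN 'e' THEN rt.economy_fare
               END) AS total_bill
      FROM passenger p
      JOIN reservation r ON r.passenger_no = p.passenger_no
      JOIN trip t        ON t.trip_no      = r.trip_no
      JOIN flight f      ON f.flight_no    = t.flight_no
      JOIN route rt      ON rt.route_no    = f.route_no
     GROUP BY p.passenger_no;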

  • Manual for Installation and Configuration of Oracle BI Apps

    Hi,
    I am trying to install and configure Oracle BI Apps to build a demo for a customer who wants to see how OBIA works. Because this is my first time configuring it, I don't know if I am doing it correctly.
    So I need to know if someone has a manual for configuring this correctly, but not the Oracle manuals, because I already have those; I need something more detailed. Perhaps someone could share a document written for a specific configuration at a customer. I would really appreciate any help with this.
    Regards,
    Arnulfo


  • Experiences with Oracle 10g/11g RAC on VMWare Fusion for Macbook and OEL 5

    Folks,
    I recently purchased a new Macbook Pro laptop to use for demos and testing with Oracle RAC. I am using VMWare Fusion for Mac OS X to set up new VM machines with Oracle 11g.
    Thus far, I can set up a single-instance Oracle 11g database fine with OEL 5 (32-bit) as the guest OS. The first problem I keep running into, due to a limitation of the VMWare Fusion product, is that I am not able to edit the VMWare network settings to allow multiple VM machines to communicate with each other. The second main issue is that VMWare Fusion does not let you set up the shared disks that are a requirement for installing and configuring an Oracle RAC environment. After searching forums online, I came to the realization that one must set up a third VM machine to act as an NFS or iSCSI filer presenting shared storage to the other VM hosts, as RAC requires. Has anyone been able to do all
    of this successfully with VMWare Fusion on a Macbook? If so, I would definitely be interested in finding out how you did it. Specifically the following:
    1. Network configuration for each VMWare Fusion guest OS machine
    2. Shared storage, i.e. an NFS or OpenFiler VM machine
    Regards,
    Ben

    Hi,
    I'm facing the same issue. If your issue is fixed, could you please let me know?
    I'm trying to configure 11g RAC with Openfiler and got stuck here.
    Regards,
    Kumar

  • CDC feature for snapshots and views

    Hi,
    Can we use the Oracle Change Data Capture (CDC) feature, together with Oracle's Publish and Subscribe packages, for views and snapshots in Oracle 10g? Or is it only for tables?

    This is a question for Oracle Support, as DI just uses the feature. They will tell you that it is for tables only.
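    For reference, the publish API itself only accepts a source table; a hedged sketch of creating a 10g synchronous change table (the schema, table, and column names here are hypothetical):
    BEGIN
      DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
        owner             => 'CDCPUB',
        change_table_name => 'ORDERS_CT',
        change_set_name   => 'SYNC_SET',       -- predefined synchronous change set
        source_schema     => 'APP',
        source_table      => 'ORDERS',         -- must be a table; a view or snapshot is rejected
        column_type_list  => 'ORDER_ID NUMBER, STATUS VARCHAR2(20)',
        capture_values    => 'both',
        rs_id             => 'y',
        row_id            => 'n',
        user_id           => 'n',
        timestamp         => 'n',
        object_id         => 'n',
        source_colmap     => 'y',
        target_colmap     => 'y',
        options_string    => NULL);
    END;
    /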

  • Weblogic 10.3.3 and Forms servers. Environments for test and developments.

    Hi. I could use a little advice here.
    We have a server for development and test of Oracle Forms on Weblogic 10.3.3
    I have installed the default configuration with an ADMIN server and a WLS_FORMS managed server (and WLS_REPORTS) in the same domain.
    I want to have two new stand-alone managed Forms servers in the domain, FORMS_DEVELOP:9010 and FORMS_TEST:9011, which use forms from different homes (it is Red Hat Linux).
    How do I configure this? In Enterprise Manager there is only one entry for /forms. How do I configure more managed Forms servers so they point to different locations in the filesystem (/home/develop/forms and /home/test/forms)? Is it possible in the same domain?
    Kind regards
    Henrik S.

    Hi Again
    Thanks. I am a newbie to this, even though I have just attended the first classroom course for WebLogic...
    But I need some more details to get through.
    Do I first have to create a domain template, or an extension template? It is not clear to me which of the many components to select in that case.
    Kind regards
    Henrik S.

  • Oracle equivalent of SQL Server's "FOR XML" and "OPENXML"

    Hi
    Can someone please tell what are the Oracle's equivalent of SQL Server's "FOR XML" and "OPENXML" features?

    You could probably try the General XML forum.
    Gints Plivna
    http://www.gplivna.eu
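    In outline: Oracle's SQL/XML publishing functions (XMLELEMENT, XMLFOREST, XMLAGG) cover what FOR XML does, and XMLTABLE (10gR2 onward) covers OPENXML. A minimal sketch, using the classic demo EMP table:
    -- rows to XML, roughly FOR XML
    SELECT XMLELEMENT("employees",
             XMLAGG(XMLELEMENT("emp", XMLFOREST(e.empno, e.ename))))
      FROM emp e;
    -- XML to rows, roughly OPENXML
    SELECT x.empno, x.ename
      FROM XMLTABLE('/employees/emp'
             PASSING XMLTYPE('<employees><emp><EMPNO>1</EMPNO><ENAME>SCOTT</ENAME></emp></employees>')
             COLUMNS empno NUMBER       PATH 'EMPNO',
                     ename VARCHAR2(10) PATH 'ENAME') x;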

  • How to configure SharePoint 2010 / 2013 Search for SQL Database Contents and Oracle Database Contents?

    Hi All,
    We are planning to maintain the contents in SQL Server / Oracle. Could anyone please suggest which is the better fit for SharePoint 2010 / 2013 Search, and how to configure search for an external content source?
    Thanks & Regards,
    Prakash

    This link explains supported and unsupported scenarios for using Oracle with BCS:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/453a3a05-bc50-45d0-8be8-cbb4e7fe7027/oracle-db-as-external-content-type-in-sharepoint-2013
    And here is more on it:
    http://msdn.microsoft.com/en-us/library/ff464424%28office.14%29.aspx
    And here is how you can connect Oracle to SharePoint for BCS functionality:
    http://lightningtools.com/bcs/business-connectivity-services-in-sharepoint-2013-and-oracle-using-meta-man/
    Overall, it seems SQL Server doesn't require any special arrangement to connect BCS to SharePoint.
    Regards,
    Pratik Vyas | SharePoint Consultant |
    http://sharepointpratik.blogspot.com

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database links between Oracle and SQL Server in either direction.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections
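    As a side note, if the Oracle-side load ends up as a stored procedure (for example, fed from a staging table), the 9.2-era DBMS_JOB package can fire it every half hour; a sketch, with a hypothetical procedure name:
    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      DBMS_JOB.SUBMIT(
        job       => l_job,
        what      => 'load_from_staging;',        -- hypothetical PL/SQL procedure
        next_date => SYSDATE,
        interval  => 'SYSDATE + 30/(24*60)');     -- re-run every 30 minutes
      COMMIT;
    END;
    /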

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks: server cores sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Components    Details
    SQL Server 2014 servers          Fujitsu RX300
    Server operating system          Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version      Microsoft SQL Server 2014 Enterprise Edition
    Processors per server            2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network            8Gb FC with multipathing
    Storage controller               AFF8080 EX
    Data ONTAP version               Clustered Data ONTAP® 8.3.1
    Drive number and type            48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.


  • GeoRaptor 3.0 for SQL Developer 3.0 and 2.1 has now been released

    Folks,
    I am pleased to announce that, after 5 months of development and testing, a new release of GeoRaptor for SQL Developer 2.1 and 3.0 is now available.
    GeoRaptor for SQL Developer 3 is available via the SQL Developer Update centre. GeoRaptor 3 for SQL Developer 2.1 is being made available
    via a download from the GeoRaptor website.
    No release notes have been compiled as the principal developer (oops, that's me!) is currently busy doing real work for a change (another 3 weeks), earning a living
    and keeping the wolves at bay. More extensive notes (with images) will be compiled when I get back. (Unless anyone is offering! See next.)
    We are still looking for people to:
    1. Provide translations of the English dialog menus etc.
    2. Write more extensive user documentation. If you use a particular part of GeoRaptor a lot and think
    you have found out all its functionality and quirks, contact us and offer to write a few pages of
    documentation on it. (Open Office or Microsoft Word is fine.) Easiest way to do this is to simply
    make screen captures and annotate with text.
    3. Conduct beta testing.
    Here are the things that are in the new release.
    New functionality:
    Overhaul of Validation Functionality.
    1. User can specify own validation SELECT SQL as long as it returns three required columns. The SQL is thus totally editable.
    2. Validation update code now allows user to associate a PL/SQL function with an error number which is applied in the UPDATE SQL.
    3. UPDATE SQL can use WHERE clause of validation SELECT SQL (1) to update specific errors.
       NOTE: The generated UPDATE statement can be manually edited. It is NEVER run by GeoRaptor. To run any UPDATE, copy the statement
       to the clipboard and run in an appropriate SQL Worksheet session within SQL Developer.
    4. Main validation table allows:
       a. Sorting (click on column header) and
       b. Filtering.
       c. Copying to Clipboard via right mouse click sub menu of:
          - Geometry's SDO_ELEM_INFO array constructor.
          - SDO_GEOMETRY constructor
          - Error + validation string.
       d. Access to Draw/Zoom functions which were previously buttons.
       e. Added a new right mouse click menu "Show Feature's Individual Errors" that gathers up all the errors
          it can process - along with the ring / element that is host to the error (if it can) - and displays
          them in the Attribute/Geometry tabs at the bottom of the Map Window (where "Identify" places its results).
          The power of this will be evident to all those who have wanted a way of stepping through errors in a geometry.
       f. Selected rows can now be deleted (select rows: press <DELETE> key or right mouse click>Delete).
       g. Table now has only one primary key column, and has a separate error column holding the actual error code.
       h. Right mouse click menu added to table menu to display a description of the error in the new column (drawn from Oracle documentation)
       i. Optimisations added to improve performance for large error lists.
    5. Functionality now has its own validation layer that is automatically added to the correct view.
       Access to layer properties via button on validation dialog or via normal right mouse click in view/layer tree.
    Improved Rendering Options.
    1. Linestring colour can now be random or drawn from column in database (as per Fill and Point colouring)
    2. Marking of SDO_GEOMETRY objects overhauled.
       - Ability to mark or LABEL vertices/points of all SDO_GEOMETRY types with a coordinate identifier and,
         optionally, the {X,Y} location. Access is via the Labelling tab in layer>properties. Thus, coordinate 25 of a linestring
         could be shown as: <25> or {x,y} or <25> {x,y}
       - There is a nice "stacked" option where the coordinate {x,y} can be written one line below the id.
       - For linestrings and polygons the <id> {x,y} label can be oriented to the angle between the vectors or
         edges that come in, and go out of, a vertex. Access is via "Orient" tick box in Labelling tab.
       - Uses Tools>Preferences>GeoRaptor>Visualisation>SDO_ORDINATE_ARRAY bracket around x,y string.
    3. Start point of linestring/polygon and all other vertices can be marked with user selectable point marker
       rather than previously fixed markers.
    4. Can now set a NULL point marker by selecting "None" for point marker style pulldown menu.
    5. Positioning of the arrow for linestring/polygons has extra options:
       * NONE
       * START    - All segments of a line have the arrow positioned at the start
       * MIDDLE   - All segments of a line have the arrow positioning in the middle.
       * END      - All segments of a line have the arrow positioning in the END.
       * END_ONLY - Only the last segment has an arrow and at its end.
    ScaleBar.
    1. A new graphic ScaleBar option has been added for the map of each view.
       For geographic/geodetic SRIDs distances are currently shown in meters;
       For all SRIDs an attempt is made to "adapt" the scaleBar units depending
       on the zoom level. So, if you zoom right in you might get the distance shown
       as mm, and as you zoom out, cm/m/km as appropriate.
    2. As the scaleBar is drawn, a 1:<DENOMINATOR> style MapScale value is written
       to the map's right-most status bar element.
    3. ScaleBar and MapScale can be turned off/on in View>Properties right mouse
       click menu.
    Export Capabilities.
    1. The ability to export a selection from a result set table (i.e. the result of
       executing an ad-hoc SQL SELECT statement) to GML, KML, or SHP/TAB (the TAB export
       adds a TAB file "wrapper" over SHP) has been added.
    2. Ability to export table/view/materialised view to GML, KML, SHP/TAB also
       added. If no attributes are selected when exporting to a SHP/TAB file, GeoRaptor
       automatically adds a field that holds a unique row number.
    3. When exporting to KML:
       * one can optionally export attributes.
       * Web sensitive characters < > & etc for KML export are replaced with &gt; &lt; &amp; etc.
       * If a column in the SELECTION or table/view/Mview equals "name" then its value is
         written to the KML tag <name> and not to the list of associated attributes.
         - Similarly for "description" -> <description> AND "styleUrl" -> <styleUrl>
    4. When exporting to GML one can optionally export attributes in FME or OGR "flavour".
    5. Exporting Measured SDO_GEOMETRY objects to SHP not supported until missing functionality
       in GeoTools is corrected (working with GeoTools community to fix).
    6. Writing PRJ and MapInfo CoordSys is done by pasting a string into appropriate export dialog box.
       Last value pasted is remembered between sessions which is useful for users who work with a single SRID.
    7. Export directory is remembered between sessions in case a user uses a standard export directory.
    8. Result sets containing MDSYS.SDO_POINT and/or MDSYS.VERTEX_TYPE can also be written to GML/KML/SHP/TAB.
       Example:
       SELECT a.geom.sdo_point as point
         FROM (SELECT sdo_geometry(2002,null,sdo_point_type(1,2,null),sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)) as geom
                 FROM DUAL) a;
       SELECT mdsys.vertex_type(a.x,a.y,a.z,a.w,a.v5,a.v6,a.v7,a.v8,a.v9,a.v10,a.v11,a.id) as vertex
         FROM TABLE(mdsys.sdo_util.getVertices(mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(1,1,2,2)))) a;
    9. A dialog appears at the end of each export which details (e.g. totals) what was exported when the exported recordset/table contains more
       than one shape type. For example, if you export only points (e.g. 2001/3001) from a table that also contains multipoints (e.g. 2005/3005), then
       the number of points exported and the multipoints skipped will be displayed.
    10. SHP/TAB export is "transactional". If you set the commit interval to 100 then only 100 records are held in memory before writing.
        However, this does not currently apply to the associated DBASE records.
    11. SHP/TAB export supports dBase III, dBase III + Memo, dBase IV and dBase IV + Memo.
        Note: Memo allows text columns > 255 characters to be exported. Non-Memo formats do not and any varchar2 columns will be truncated
        to 255 chars. Some GIS packages support MEMO eg Manifold GIS, some do not.
    12. Note: GeoRaptor does not ensure that the SRID of SDO_GEOMETRY data exported to KML is in the correct Google projection.
        Please read the Oracle documentation on how to project your data if this is necessary. An example is:
        SELECT OBJECTID,
               CODIGO as name,
               NOME as description,
               MI_STYLE,
               SDO_CS.TRANSFORM(shape,'USE_SPHERICAL',4055) as shape
          FROM MUB.REGIONAL;
    13. NOTE: The SHP exporter uses the Java Topology Suite (JTS) to convert from SDO_GEOMETRY to the ESRI Shape format. JTS does not handle
        circular curves in SDO_GEOMETRY objects, so you must "stroke" them using sdo_util.arc_densify(). See the Oracle documentation on how
        to use this.
    Miscellaneous.
    1. Selection View - Measurement has been modified so that the final result only shows those geometry
       types that were actually measured.
    2. In Layer Properties the Miscellaneous tab has been removed because the only elements in it were the
       Geometry Output options which have now been replaced by the new GML/KML/etc export capabilities.
    3. The Shapefile import's user-entered tablename is now checked for Oracle naming convention compliance.
    4. Identify based on SDO_NN has been removed from GeoRaptor given the myriad problems that it seems to create across versions
       and partitioned/non-partitioned tables. Instead SDO_WITHIN_DISTANCE is now used with the actual search distance (see circle
       in map display): everything within that distance is returned.
    5. Displaying/Not displaying embedded sdo_point in line/polygon (Jamie Keene), is now controlled by
       a preference.
    6. New View Menu options to switch all layers on/off
    7. Tools/Preferences/GeoRaptor layout has been improved.
    8. If Identify is called on a geometry, a new right mouse click menu entry has been added called "Mark", which
       has two sub-menus, ID and ID(X,Y), that will add the labelling to the selected geometry independently of
       what the layer itself is set to.
    9. Two new methods for rendering an SDO_GEOMETRY object in a table or SQL recordset have been added: a) Show geometry as ICON
       and b) Show geometry as THUMBNAIL. When the latter is chosen, the actual geometry is shown in an image _inside_ the row/column cell it occupies.
       In addition, the existing textual methods for visualisation: WKT, KML, GML etc have been collected together with ICON and THUMBNAIL in a new
       right mouse click menu.
    10. Tables/Views/MViews without spatial indexes can now be added to a Spatial View. To stop large tables from killing rendering, a new preference
        has been added, "Table Count Limit" (default 1,000), which controls how many geometry records can be displayed. A table without a spatial
        index will have its layer name rendered in italics and will write a warning message in red to the status bar on each redraw. Adding an index
        while the layer exists will be recognised by GeoRaptor during drawing, and the layer will be switched across to normal rendering.
    Some Bug Fixes.
    * Error in manage metadata related to getting metadata across all schemas
    * Bug with no display of rowid in Identify results fixed;
    * Some fixes relating to where clause application in geometry validation.
    * Fixes bug with scrollbars on view/layer tree not working.
    * Problem with spatial networks fixed. Actions for spatial networks can now only be done in the
      schema of the current user, as it could happen that a user opens the tree for another schema that
      has the same network as the user's schema. Dropping a network drops only the network of the currently connected user.
    * Recordset "find sdo_geometry cell" code has been modified so that it now appears only if a suitable geometry object is
      in a recordset.  Please note that there is a bug in SQL Developer (2.1 and 3.0) that causes SQL Developer to not
      register a change in selection from a single cell to a whole row when one left clicks at the left-most "row number"
      column that is not part of the SELECT statements user columns, as a short cut to selecting a whole row.  It appears
      that this is a SQL Developer bug so nothing can be done about it until it is fixed. To select a whole row, select all
      cells in the row.
    * Copy to clipboard of SDO_GEOMETRY with M and Z values had an extraneous "," at the end; fixed.
    * Column based colouring of markers fixed
    * Bunch of performance improvements.
    * Plus (happily) others that I can't remember!
    If you find any bugs, register a bug report at our website.
    If you want to help with testing, contact us at our website.
    My thanks for help in this release to:
    1. John O'Toole
    2. Holger Labe
    3. Sandro Costa
    4. Marco Giana
    5. Luc van Linden
    6. Pieter Minnaar
    7. Warwick Wilson
    8. Jody Garnett (GeoTools bug issues)
    Finally, at the Washington User Conference I explained the willingness of the GeoRaptor team to work
    toward some sort of integration of our "product" with the new Spatial extension that has just been released in SQL
    Developer 3.0. Nothing much has come of that initial contact, and I hope more will come of it.
    In the end, it is you, the real users who should and will decide the way forward. If you have ideas, wishes etc,
    please contact the GeoRaptor team via our SourceForge website, or start a "wishlist" thread on this forum
    expressing ideas for future functionality and integration opportunities.
    regards
    Simon
    Edited by: sgreener on Jun 12, 2011 2:15 PM

    Thank you for this.
    I have been messing around with this for the last few days, and I really love the feature that pinpoints the validation errors on the map.
    It has always been so annoying to try to pinpoint these errors using some other GIS software while writing your SQL.
    I have stumbled on a few bugs:
    1. In the "Validate geometry column" dialog, checking the option "Use DimInfo" actually still uses the value entered in the tolerance text box.
    I found this because in my language settings ',' is the decimal separator.
    2. In the "Validate geometry column" dialog, the text boxes showing SQL don't always show everything on long lines of text (clipping text from the right).
    3. In the "Validate geometry column" dialog, "Create Update SQL" has a few bugs:
    - if you have selected multiple rows from the results and check "Use Selected Geometries", the generated IN clause in the SQL will have the same rowid (the rowid of the first selected result) for all entries.
    Also, the other generated IN clause in the WHERE clause is missing a separator if you select more than one corrective function.
    4. The "Validate geometry column" dialog stays annoyingly topmost when using the "Create Update SQL" dialog.

  • Extract the data from SQL Server and Import into Oracle

    Hi,
    I would like to run a daily job that will export table data from SQL Server (only one or two tables) and import it into Oracle tables (again, one or two tables).
    Could you please guide me on how I can do this using either SQL Server or Oracle?
    We have Oracle 9.2 and SQL Server 2005.
    Normally I load from a flat file generated by the source system and dump it into Oracle using SQL*Loader, but this time I have to extract/export the data directly from MS SQL Server and load it into Oracle tables; mostly it will be a full reload, so I might not need to do any massaging of the data during the load.
    If you can show me the detailed approach, it will be really appreciated.
    I have access to SQL Server, but I don't know how to use SQL Server to do this, or how to schedule it from Oracle as a daily job.
    Thanks,
    poratips

    Unless you can find an open source ODBC driver for SQL Server that runs on Solaris (and I wouldn't be overly hopeful there) Heterogeneous Services would require that you license something-- a third party ODBC driver, a new Oracle instance, or an Oracle Transparent Gateway.
    As I stated below, you could certainly use SQL Server's ETL tool, DTS. Oracle's ETL tools would require additional licensing since you're just on 9.2. You could also write a small application (Java or otherwise) that connected to both databases and transferred the data. If you're particularly enterprising, you could load the SQL Server Type 4 JDBC driver into Oracle's JVM and write a Java stored procedure that connected to the SQL Server database via JDBC, but that's a pretty convoluted approach.
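    For illustration only: if Heterogeneous Services or a Transparent Gateway were licensed and a database link (the hypothetical mssql_link below) configured, the daily load would reduce to plain SQL:
    INSERT INTO ora_target_table (id, name)
    SELECT "id", "name"
      FROM sqlserver_table@mssql_link;  -- quoted identifiers preserve SQL Server column case
    COMMIT;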
    Justin

  • How to integrate from MS SQL SERVER 2005 and Flatfile to Oracle 10g.

    Hi
    I am new to ODI. I am trying to load sample data from MS SQL Server 2005 and Flatfile to Oracle 10g.
    1. I have created three models.
    1-1. SQL2005 (SRC_CUSTOMER table)
    1-2. Flatfile (SRC_AGE_GROUP.txt & SRC_SALES_PERSON.txt)
    1-3. Oracle 10g (TRG_CUSTOMER table)
    You may know I got those environments from the ODI DEMO environment.
    2. I was also able to reverse-engineer the tables.
    3. I have created an interface which contains the source tables (from MSSQL 2005 and the flat files) and the target table from the Oracle model.
    4. I have imported the knowledge modules, but I am confused about selecting the knowledge modules for the source and target tables.
    I've selected LKM File to SQL for the flat file model.
    I've also selected LKM SQL to SQL for the MSSQL 2005 model and IKM Oracle Incremental Update for the target table (Oracle).
    I've also run the interface that I created. It worked without errors, but there is no data in the target table, TRG_CUSTOMER.
    I really would like to know what happened and what the problems are.
    You can email me [email protected]
    Thanks in advance
    Jason Lee

    What did you give for the SRC_AGE_GROUP / SRC_CUSTOMER join condition?
    If it is
    (SRC_CUSTOMER.AGE = SRC_AGE_GROUP.AGE_MIN) AND (SRC_CUSTOMER.AGE = SRC_AGE_GROUP.AGE_MAX)
    give it as
    (SRC_CUSTOMER.AGE > SRC_AGE_GROUP.AGE_MIN) AND (SRC_CUSTOMER.AGE < SRC_AGE_GROUP.AGE_MAX)
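    Expressed as plain SQL, the range join described above looks like this (a sketch assuming the ODI demo columns CUSTID, AGE, AGE_MIN, AGE_MAX):
    SELECT c.custid, c.age
      FROM src_customer c
      JOIN src_age_group g
        ON c.age > g.age_min
       AND c.age < g.age_max;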
