SQL/XML performance and xdbcore-parameter

Hello,
I am trying to figure out what the best method to retrieve large XML documents is.
I am comparing the performance and resources needed for SQL/XML, DBMS_XMLGEN, SYS_XMLGEN and XSU.
From a fixed set of 1000 rows I generate XML of about 32 MB.
I am on 10.2.0.4 on Linux.
The XML is not schema-based.
SQL*Plus: Release 10.1.0.4.2 - Production on Fri Aug 21 13:35:42 2009
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

What I see is that SQL/XML is the slowest and needs a huge amount of PGA (160 MB, and it seems to increase linearly; for 5000 rows I measured ~800 MB).
From what I have read here, memory issues with SQL/XML should have been solved in 9.2.0.3 (?).
Is this the case? Or is this PGA amount OK?
I would also like to test the impact of the xdbcore-xobmem-bound and xdbcore-loadableunit-size parameters,
but I cannot observe any impact. Do I have to restart the database every time after changing xdbconfig.xml?
Even after increasing them from 1024/16 to 10240/5120 respectively and restarting, the same speed and memory consumption are in place.
Am I doing something wrong?
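For reference, a minimal sketch of how the xdbcore parameters can be inspected and changed from SQL, assuming the DBMS_XDB configuration API (CFG_GET/CFG_UPDATE/CFG_REFRESH) available in 10.2; the XPath and namespace follow the standard xdbconfig schema, and the value shown is illustrative:

-- Show the current xdbcore-xobmem-bound setting (needs XDB admin privileges)
SELECT extractValue(dbms_xdb.cfg_get(),
         '/xdbconfig/sysconfig/xdbcore-xobmem-bound',
         'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') AS xobmem_bound
FROM dual;

-- Change the setting, commit, and refresh so the new value becomes visible
BEGIN
  dbms_xdb.cfg_update(updateXML(dbms_xdb.cfg_get(),
      '/xdbconfig/sysconfig/xdbcore-xobmem-bound/text()', '10240',
      'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"'));
  COMMIT;
  dbms_xdb.cfg_refresh;
END;
/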

SQL*Loader: Release 11.2.0.2.0 - Production on H. Febr. 11 18:57:09 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Control File: prodxml.ctl
Character Set UTF8 specified for all input.
Data File: test.xml
Bad File: test.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 5000
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table PRODUCT_XML, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
COL_ID FIRST 1000 CHARACTER
(FILLER FIELD)
IN_FILE DERIVED * EOF CHARACTER
Static LOBFILE. Filename is bv_test.xml
Character Set UTF8 specified for all input.
SQL*Loader-605: Non-data dependent ORACLE error occurred -- load discontinued.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Table PRODUCT_XML:
0 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 256 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records rejected: 0
Total logical records discarded: 0
Run began on H. Febr. 11 18:57:09 2013
Run ended on H. Febr. 11 19:20:54 2013
Elapsed time was: 00:23:45.76
CPU time was: 00:05:05.50
This is the log.
I have truncated everything, yet I still cannot load 400 MB into 4 GB; I cannot understand it.
This is 32-bit Windows (not licensed).
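The ORA-01652 above means the TEMP tablespace ran out of space while the LOB was being staged. A minimal sketch of the usual remedies, assuming a tempfile-based TEMP tablespace (paths and sizes are illustrative):

-- Check how large TEMP currently is
SELECT tablespace_name, bytes/1024/1024 AS mb FROM dba_temp_files;

-- Either grow the existing tempfile ...
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/temp01.dbf' RESIZE 8G;

-- ... or add another one with autoextend
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;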

Similar Messages

  • Export multi-line SQL query in XML format and import XML in SAP

    Good morning
    Does someone know if it is possible to export a multi-line SQL query on OITT and ITT1 (thousands of rows, with values already changed by the query) to XML,
    and then import the XML into SAP without using a loop, with the VB.NET code below?
    oDistinta = DirectCast(m_oCompany.GetBusinessObjectFromXML(xmlFile, 0), SAPbobsCOM.ProductTrees)
    Dim errCode As Integer = 0
    Dim strErrore As String = ""
    Dim iErrore As Integer = 0
    oDistinta.Browser.ReadXml(xmlFile, 0)
    errCode = oDistinta.Update()
    m_oCompany.GetLastError(iErrore, strErrore)
    ' on error
    If iErrore <> 0 Then
        ' handle the DI API error here (e.g. log iErrore and strErrore)
    End If
    To keep the example simple, I post it with 2 ProductTrees items. The 2 items already exist in OITT and ITT1.
    When I run the update, the 1st item is modified but the 2nd does not change.
    Here is the XML
    <?xml version="1.0" encoding="UTF-16"?>
    <BOM>
      <BO>
        <AdmInfo>
          <Object>66</Object>
          <Version>2</Version>
        </AdmInfo>
        <ProductTrees>
          <row>
            <TreeCode>V9998</TreeCode>
            <TreeType>iProductionTree</TreeType>
            <Quantity>1000.000000</Quantity>
            <U_SHI_MZDA>0</U_SHI_MZDA>
            <U_SHI_PUMA>0</U_SHI_PUMA>
            <U_SHI_PUUO>0</U_SHI_PUUO>
          </row>
        </ProductTrees>
        <ProductTrees_Lines>
          <row>
            <ItemCode>1047</ItemCode>
            <Quantity>5.000000</Quantity>
            <Warehouse>Mc</Warehouse>
            <Price>0.000000</Price>
            <Currency>
            </Currency>
            <IssueMethod>im_Backflush</IssueMethod>
            <Comment>
            </Comment>
            <ParentItem>V9998</ParentItem>
            <PriceList>2</PriceList>
            <DistributionRule>
            </DistributionRule>
            <U_IDE_POS>10</U_IDE_POS>
            <U_IDE_FASE>0</U_IDE_FASE>
            <U_IDE_OPER>10</U_IDE_OPER>
            <U_SHI_UndInv>N</U_SHI_UndInv>
            <U_SHI_Pri1>
            </U_SHI_Pri1>
            <U_SHI_Pri2>
            </U_SHI_Pri2>
            <U_SHI_Pri3>
            </U_SHI_Pri3>
            <U_SHI_Prgm>
            </U_SHI_Prgm>
            <U_SHI_Tim2>0.000000</U_SHI_Tim2>
            <U_SHI_Tim3>0.000000</U_SHI_Tim3>
            <U_SHI_DexNACQ>
            </U_SHI_DexNACQ>
            <U_SHI_BP>
            </U_SHI_BP>
            <U_SHI_PrcKit>0.000000</U_SHI_PrcKit>
            <U_SHI_ItmNACQ>
            </U_SHI_ItmNACQ>
            <U_SHI_LineOk>N</U_SHI_LineOk>
          </row>
        </ProductTrees_Lines>
        <ProductTrees>
          <row>
            <TreeCode>V9999</TreeCode>
            <TreeType>iProductionTree</TreeType>
            <Quantity>1000.000000</Quantity>
            <U_SHI_MZDA>0</U_SHI_MZDA>
            <U_SHI_PUMA>0</U_SHI_PUMA>
            <U_SHI_PUUO>0</U_SHI_PUUO>
          </row>
        </ProductTrees>
        <ProductTrees_Lines>
          <row>
            <ItemCode>1015</ItemCode>
            <Quantity>1000.000000</Quantity>
            <Warehouse>Mc</Warehouse>
            <Price>0.000000</Price>
            <Currency>
            </Currency>
            <IssueMethod>im_Backflush</IssueMethod>
            <Comment>
            </Comment>
            <ParentItem>V9999</ParentItem>
            <PriceList>5</PriceList>
            <DistributionRule>
            </DistributionRule>
            <U_IDE_POS>30</U_IDE_POS>
            <U_IDE_FASE>0</U_IDE_FASE>
            <U_IDE_OPER>30</U_IDE_OPER>
            <U_SHI_UndInv>N</U_SHI_UndInv>
            <U_SHI_Pri1>
            </U_SHI_Pri1>
            <U_SHI_Pri2>
            </U_SHI_Pri2>
            <U_SHI_Pri3>
            </U_SHI_Pri3>
            <U_SHI_Prgm>
            </U_SHI_Prgm>
            <U_SHI_Tim2>0.000000</U_SHI_Tim2>
            <U_SHI_Tim3>0.000000</U_SHI_Tim3>
            <U_SHI_DexNACQ>
            </U_SHI_DexNACQ>
            <U_SHI_BP>
            </U_SHI_BP>
            <U_SHI_PrcKit>0.000000</U_SHI_PrcKit>
            <U_SHI_ItmNACQ>
            </U_SHI_ItmNACQ>
            <U_SHI_LineOk>N</U_SHI_LineOk>
          </row>
        </ProductTrees_Lines>
      </BO>
    </BOM>
    Does someone know the reason?
    Thanks in advance.
    Regards

    Hi Gabriele,
    The issue is that the ReadXml method takes an index parameter, which is the number of the object that you want to read from the XML data. In your code you are specifying index 0, which is the first product tree in your file, so when you call the Update method this is the only record that will be updated. It is not possible to do a mass update using the DI API in this way; you must always load each object separately. You can use the GetXMLelementCount method of the Company object to find out how many product tree objects are in your file, and then use the ReadXml method to load each one by incrementing the index parameter in a loop.
    Kind Regards,
    Owen

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Components    Details
    SQL Server 2014 servers          Fujitsu RX300
    Server operating system          Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version      Microsoft SQL Server 2014 Enterprise Edition
    Processors per server            2 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network            8Gb FC with multipathing
    Storage controller               AFF8080 EX
    Data ONTAP version               Clustered Data ONTAP® 8.3.1
    Drive number and type            48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    Value                                                                              Analysis Results
    ROI                                                                                65%
    Net present value (NPV)                                                            $950,000
    Payback period                                                                     six months
    Total cost reduction                                                               More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration                                        $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI)  $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.

  • Custom action with XML type input and output parameter.

    Hi,
    I want to develop a custom action with an XML-type input and/or output parameter.
    Is there sample code for the Java side? How are the input and/or output parameters defined, together with their set/get methods?
    Does it need a special .jar file to develop a custom action like this?
    Thanks.

    Cemil - yes, you can use XML data types.  Use the class
    com.sap.lhcommon.xml.XMLDataType
    for your parameter type.  Here is a snippet from a custom action we use to log XML (instead of just returning the #text node like the default logger does):
    public class XMLLogger extends ActionReflectionBase {
        private String source;
        private String eventType;
        private String textMessage;
        private XMLDataType xmlMessage;

        public XMLLogger() {
            log = new Logger("UserLog");
            source = DEFAULT_SOURCE;
            eventType = TYPE_INFO;
            textMessage = "";
            xmlMessage = new XMLDataType();
        }

        public XMLDataType getXmlMessage() {
            return xmlMessage;
        }

        public void setXmlMessage(XMLDataType xmlMessage) {
            this.xmlMessage = xmlMessage;
        }

        public void Invoke(Transaction transaction, ILog ilog) {
            StringBuffer sb = new StringBuffer();
            sb.append('[');
            sb.append(source);
            sb.append("] ");
            sb.append(textMessage);
            sb.append(XMLUtils.convertXmlToString(xmlMessage));
        }
    }
    XMLUtils is a helper class we wrote - it's just a bunch of standard Java XML boilerplate code.  The important part you need to know is XMLDataType.getDocument() will return an org.w3c.dom.Document.
    I hope that was enough information to help.
    -tim

  • Printing parameter values in xml report, and displaying it in header of the

    Hi All,
    Can anybody give me an idea how to print the parameter values in the XML report?
    My requirement: I have 10 parameters in Reports Builder; I have generated the XML file, created the .rtf template, and finally registered it in Oracle Apps. Now the requirement is to print the parameter values in the report output, but I don't see any XML tags for them in the XML output.
    Any suggestion will be appreciated.
    Thanks in advance.

    I think all the XML attributes can contain lexicals. If you bring up the property palette against the report object you can just set the following:
    XML Tag Attributes: myParameter="&<P_1>"
    where P_1 is your user parameter.

  • Pass parameter through standard page "import xml content and actions"

    Dear Portal experts,
    As you know, a standard portal page is configured to allow the import of an XML file. It is located in the portal under System Admin -> Transport -> XML Content and Actions -> Import.
    I configured a quick link "import" to the page, so the link http://myserver:port/irj/portal/import now takes me directly to this screen.
    What I would like to do now is pass the "file name" parameter to this screen, meaning that I would like http://myserver:port/irj/portal/import/filename=C:\test.xml to fill the "XML file" field automatically with C:\test.xml.
    Please, do you know how to achieve this?
    Thank you very much and regards

    hi,
    @Maksim :
    When a user launches a URL like http://myserver:port/irj/portal/import/filename=C:\test.xml, he is asked to provide a username and password; as this link is a shortcut for the upload-XML page, there is an authorization check, and only portal admins that have authorization and permission on this page/iview will be able to upload an XML file through the URL. Hope this clarifies.
    @Kumar :
    Thank you for your answer. What a pity not to be able to pass parameters through a standard portal page/iview, particularly knowing that we can pass parameters through many kinds of iviews (transactional iview, URL iview, VC iview...). I explored some options on the file com.sap.portal.ivs.init.par, but this was unsuccessful. However, thanks to this link - http://wiki.sdn.sap.com/wiki/display/Snippets/ComponenttouploadXMLfilewithPCD+objects - I was able to upload the XML file through a URL.
    The problem is that I can only upload XML files that are stored on the server, not on the local user's computer. My requirement is to be able to upload a local XML file stored on the user's computer.
    If someone could provide an idea/solution, I would be very grateful.
    Cheers

  • SQL Performance and Security

    Help needed here please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have worked my head around this but now I am stuck.
    Here is the questions:
    1. Analyse possible performance problems, and suggest solutions for each of the following transactions against the database
    a) A manager of a project needs to inspect total planned and actual hours spent on a project broken down by activity.
    e.g     
    Project: xxxxxxxxxxxxxx
    Activity Code          planned     actual (to date)
         1          20          25
         2          30          30
         3          40          24
    Total               300          200
    Note that actual time spent on an activity must be calculated from the WORK UNIT table.
    b)On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
    2. Security: Justify and implement solutions at the server that meet the following security requirements
    (i)Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
    (ii)Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
    (iii)Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
    Here is the project tables
    set echo on

    -- Changes
    -- 4.10.00
    -- manager of employee on a project included in the employee_on_project table
    -- activity table now has a compound key, based on ID dependence between project
    -- and activity

    drop table org_unit cascade constraints;
    drop table project cascade constraints;
    drop table employee cascade constraints;
    drop table employee_on_project cascade constraints;
    drop table employee_on_activity cascade constraints;
    drop table activity cascade constraints;
    drop table activity_order cascade constraints;
    drop table work_unit cascade constraints;

    -- org_unit
    -- type - for example in lmu might be FACULTY, or SCHOOL
    CREATE TABLE org_unit (
      ou_id            NUMBER(4)    CONSTRAINT ou_pk PRIMARY KEY,
      ou_name          VARCHAR2(40) CONSTRAINT ou_name_uq UNIQUE
                                    CONSTRAINT ou_name_nn NOT NULL,
      ou_type          VARCHAR2(30) CONSTRAINT ou_type_nn NOT NULL,
      ou_parent_org_id NUMBER(4)    CONSTRAINT ou_parent_org_unit_fk REFERENCES org_unit
    );

    -- project
    CREATE TABLE project (
      proj_id                NUMBER(5)    CONSTRAINT project_pk PRIMARY KEY,
      proj_name              VARCHAR2(40) CONSTRAINT proj_name_uq UNIQUE
                                          CONSTRAINT proj_name_nn NOT NULL,
      proj_budget            NUMBER(8,2)  CONSTRAINT proj_budget_nn NOT NULL,
      proj_ou_id             NUMBER(4)    CONSTRAINT proj_ou_fk REFERENCES org_unit,
      proj_planned_start_dt  DATE,
      proj_planned_finish_dt DATE,
      proj_actual_start_dt   DATE
    );

    -- employee
    CREATE TABLE employee (
      emp_id       NUMBER(6)    CONSTRAINT emp_pk PRIMARY KEY,
      emp_name     VARCHAR2(40) CONSTRAINT emp_name_nn NOT NULL,
      emp_hiredate DATE         CONSTRAINT emp_hiredate_nn NOT NULL,
      ou_id        NUMBER(4)    CONSTRAINT emp_ou_fk REFERENCES org_unit
    );

    -- activity
    -- note each activity is associated with a project
    -- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
    -- USER ACCEPTANCE TESTING ...
    -- each activity has a people budget, in other words an amount to spend on wages
    CREATE TABLE activity (
      act_id               NUMBER(6),
      act_proj_id          NUMBER(5)    CONSTRAINT act_proj_fk REFERENCES project
                                        CONSTRAINT act_proj_id_nn NOT NULL,
      act_name             VARCHAR2(40) CONSTRAINT act_name_nn NOT NULL,
      act_type             VARCHAR2(30) CONSTRAINT act_type_nn NOT NULL,
      act_planned_start_dt DATE,
      act_actual_start_dt  DATE,
      act_planned_end_dt   DATE,
      act_actual_end_dt    DATE,
      act_planned_hours    NUMBER(6)    CONSTRAINT act_planned_hours_nn NOT NULL,
      act_people_budget    NUMBER(8,2)  CONSTRAINT act_people_budget_nn NOT NULL,
      CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
    );

    -- employee on project
    -- when an employee is assigned to a project, an hourly rate is set
    -- remember that the person's manager depends on the project they are on,
    -- the implication being that the manager needs to be assigned to the project
    -- before the 'managed'
    CREATE TABLE employee_on_project (
      ep_emp_id      NUMBER(6)   CONSTRAINT ep_emp_fk REFERENCES employee,
      ep_proj_id     NUMBER(5)   CONSTRAINT ep_proj_fk REFERENCES project,
      ep_hourly_rate NUMBER(5,2) CONSTRAINT ep_hourly_rate_nn NOT NULL,
      ep_mgr_emp_id  NUMBER(6),
      CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
      CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id) REFERENCES employee_on_project
    );

    -- employee on activity
    CREATE TABLE employee_on_activity (
      ea_emp_id        NUMBER(6),
      ea_proj_id       NUMBER(5),
      ea_act_id        NUMBER(6),
      ea_planned_hours NUMBER(3) CONSTRAINT ea_planned_hours_nn NOT NULL,
      CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
      CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
      CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id) REFERENCES employee_on_project
    );

    -- activity order
    -- only need a prior activity. If activity A is followed by activity B then
    -- B is the prior activity of A
    CREATE TABLE activity_order (
      ao_act_id       NUMBER(6),
      ao_proj_id      NUMBER(5),
      ao_prior_act_id NUMBER(6),
      CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
      CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id),
      CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id)
    );

    -- work unit
    -- remember that DATE includes time
    CREATE TABLE work_unit (
      wu_emp_id   NUMBER(5),
      wu_act_id   NUMBER(6),
      wu_proj_id  NUMBER(5),
      wu_start_dt DATE CONSTRAINT wu_start_dt_nn NOT NULL,
      wu_end_dt   DATE CONSTRAINT wu_end_dt_nn NOT NULL,
      CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
      CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
        REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
    );

    /* enter data */
    start ouins
    start empins
    start projins
    start actins
    start aoins
    start epins
    start eains
    start wuins
    start pmselect
    I have the tables containing ouins and the rest. Email me at [email protected] if you want to have a look at the tables.
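    For part 1(a), a hedged sketch of the kind of query involved, using the column names from the DDL above (actual hours are derived from the work_unit start/end times; :proj_id is a bind variable for the project in question):

    SELECT a.act_id,
           a.act_planned_hours,
           NVL(SUM((w.wu_end_dt - w.wu_start_dt) * 24), 0) AS actual_hours
    FROM   activity a
           LEFT JOIN work_unit w
                  ON w.wu_act_id  = a.act_id
                 AND w.wu_proj_id = a.act_proj_id
    WHERE  a.act_proj_id = :proj_id
    GROUP  BY a.act_id, a.act_planned_hours
    ORDER  BY a.act_id;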

    The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access or perform various DML actions.
    Then assign the various users to these roles. The users will be restricted to what the roles are restricted to.
    Look up roles if you are not familiar with them.
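    A minimal sketch of that approach (role, user, and view names are illustrative; requirements (ii) and (iii) additionally need row-level restrictions, for example via views or VPD, which plain grants cannot express):

    -- (i) Members of Corporate Strategy may change the project table
    CREATE ROLE corp_strategy;
    GRANT SELECT, INSERT, UPDATE, DELETE ON project TO corp_strategy;
    GRANT corp_strategy TO alice;   -- illustrative member of the department

    -- Everyone may read the project table
    GRANT SELECT ON project TO PUBLIC;

    -- (ii) A view that hides the budget; row filtering by assignment would
    -- additionally compare against the connected user's employee id (app-specific)
    CREATE VIEW project_no_budget AS
      SELECT proj_id, proj_name, proj_ou_id,
             proj_planned_start_dt, proj_planned_finish_dt, proj_actual_start_dt
      FROM   project;
    GRANT SELECT ON project_no_budget TO PUBLIC;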

  • How to tune the performance of Oracle SQL/XML query?

    Hi all,
    I am running Oracle 9i and would like to run the following Oracle SQL/XML query. It has been running for 3+ hours and still has not finished. If I remove all the XML functions it takes only minutes. Does anybody know the reason it is this slow, and how to tune it?
    SELECT XMLElement("CUSTOMER",
             XMLForest(C_CUSTKEY "C_CUSTKEY", C_NAME "C_NAME", C_ADDRESS "C_ADDRESS", C_PHONE "C_PHONE", C_MKTSEGMENT "C_MKTSEGMENT", C_COMMENT "C_COMMENT"),
             (SELECT XMLAgg(XMLElement("ORDERS",
                       XMLForest(O_ORDERKEY "O_ORDERKEY", O_CUSTKEY "O_CUSTKEY", O_ORDERSTATUS "O_ORDERSTATUS", O_ORDERPRIORITY "O_ORDERPRIORITY", O_CLERK "O_CLERK", O_COMMENT "O_COMMENT"),
                       (SELECT XMLAgg(XMLElement("LINEITEM",
                                 XMLForest(L_ORDERKEY "L_ORDERKEY", L_RETURNFLAG "L_RETURNFLAG", L_LINESTATUS "L_LINESTATUS", L_SHIPINSTRUCT "L_SHIPINSTRUCT", L_SHIPMODE "L_SHIPMODE", L_COMMENT "L_COMMENT")))
                        FROM LINEITEM
                        WHERE LINEITEM.L_ORDERKEY = ORDERS.O_ORDERKEY)))
              FROM ORDERS
              WHERE ORDERS.O_CUSTKEY = CUSTOMER.C_CUSTKEY))
    FROM CUSTOMER;
    Thanks very much in advance for your time,
    Jinghao Liu

    ajallen wrote:
    Why not something more like
    SELECT *
    FROM fact1 l,
    FULL OUTER JOIN fact1 d
    ON l.company = d.company
    AND l.transactiontypeid = 1
    AND d.transactiontypeid = 2;
    Because this is not an equivalent of the original query.
    drop table t1 cascade constraints purge;
    drop table t2 cascade constraints purge;
    create table t1 as select rownum t1_id from dual connect by level <= 5;
    create table t2 as select rownum+2 t2_id from dual connect by level <= 5;
    select * from (select * from t1 where t1_id > 2) t1 full outer join t2 on (t1_id = t2_id);

         T1_ID      T2_ID
             3          3
             4          4
             5          5
                        6
                        7

    select * from t1 full outer join t2 on (t1_id = t2_id and t1_id > 2);

         T1_ID      T2_ID
             1
             2
             3          3
             4          4
             5          5
                        6
                        7

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

    How do index fragmentation and statistics affect SQL query performance?
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for nonclustered as well), the time spent
    finding a value will be higher, as the query has to search a fragmented index to look for the data, and the additional empty space increases search time.
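    A hedged sketch of how this is typically checked on SQL Server (the DMV and maintenance commands are standard; dbo.MyTable is an illustrative name):

    -- Fragmentation per index in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i
      ON   i.object_id = ips.object_id AND i.index_id = ips.index_id;

    -- Rebuild (heavy fragmentation) or reorganize (light), then refresh statistics
    ALTER INDEX ALL ON dbo.MyTable REBUILD;
    UPDATE STATISTICS dbo.MyTable;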

  • Required info on SQL Server Performance Issue Analysis and Troubleshoot way

    Dear All,
    I am going to prepare simple documentation of the steps for SQL Server performance issue analysis and troubleshooting. I am struggling to make this documentation since we have different checklists (network latency, disk latency, memory/processor pressure,
    SQL query tuning, etc.) to validate once an application performance issue is reported by the customer. So, I am looking for expert documents or links.
    Your input will help me prepare the document in a better way.
    Thanks in advance.

    Hi,
    Recommendations and Guidelines on configuring disk partitions for SQL Server
    http://support.microsoft.com/kb/2023571
    Disk and File Layout for SQL Server
    https://blogs.technet.com/b/dataplatforminsider/archive/2012/12/19/disk-and-file-layout-for-sql-server.aspx
    Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure
    http://www.packtpub.com/article/sql-server-2012-implementing-physical-database-strusture
    Database Mirroring Best Practices and Performance Considerations
    http://technet.microsoft.com/en-us/library/cc917681.aspx
    Hope the information helps.
    Tracy Cai
    TechNet Community Support

  • Learning SQL and PL/SQL, XML, Forms and Reports

    I have learned SQL and PL/SQL, but I do not know XML or Forms and Reports. How can I learn them on my own by an easy method?
    We learned SQL and PL/SQL from the basics to a higher level, going from books to online material. Is there any other way of practising and understanding the concepts?

    Learning SQL and PL/SQL, XML, Forms and Reports for beginners.

  • Diff b/w Runtime Analyzer (SE30), SQL Trace (ST05) and Performance Analyzer

    Can anyone tell me the difference between the Runtime Analyzer (SE30), SQL Trace (ST05) and Performance Analyzer (AL21)?

    Hi
    These all do the same kind of thing, that is, checking a program for better performance.
    Tools for Performance Analysis:
    Run time analysis transaction SE30
    SQL Trace transaction ST05
    Extended Program Check (SLIN)
    Code Inspector ( SCI)
    Runtime analysis, transaction SE30: This transaction gives the full analysis of an ABAP program with respect to database and non-database processing.
    SQL Trace, transaction ST05: The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter on the trace list.
    The trace list contains different SQL statements simultaneously related to one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on a particular database table of an ABAP program would be mapped to a PREPARE-OPEN-FETCH sequence of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
    Extended Program Check
    This can be called in through transaction SE38 or through transaction SLIN. This indicates possible problems that may cause performance problems.
    Code Inspector (SCI)
    You can call the Code Inspector from the ABAP Editor (SE38), the Function Builder (SE37), the Class Builder (SE24), or as a separate transaction (SCI).
    The Code Inspector indicates possible problems. However, note that, especially with performance issues: There is no rule without exception. If a program passes an inspection, it does not necessarily mean that this program will have no performance problems.

  • Extracting XML tags and their values using Oracle PL/SQL

    I need help to create a procedure that receives an XML string and inserts it into a table after parsing it.
    If the XML has 10 tags then it should insert 10 rows into the table; the table will hold the tag name and value. These will be plain XML tags without attributes.

    Hi,
    I am able to retrieve the values of the tags using the extract function (example: lv_xml.EXTRACT('/ROWSET/EMPLOYEE/EMPNO/text()').getstringval()). But could you help me find some way to extract the tag names as well, since we have no idea in advance what the tags are or how many tags there are?
    Anybody's help will be a relief!
    Thanks in advance,
    Leslie
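    One generic way to get both tag names and values, assuming XMLTable is available (10g and later); the document literal and the target table are illustrative:

    -- Flatten every leaf element of an XMLType into (tag_name, tag_value) rows
    SELECT x.tag_name, x.tag_value
    FROM   XMLTABLE('//*[not(*)]'
             PASSING XMLTYPE('<ROWSET><EMPLOYEE><EMPNO>7839</EMPNO><ENAME>KING</ENAME></EMPLOYEE></ROWSET>')
             COLUMNS tag_name  VARCHAR2(30)   PATH 'local-name(.)',
                     tag_value VARCHAR2(4000) PATH 'text()') x;

    -- An INSERT INTO tag_table (tag_name, tag_value) SELECT ... over this query
    -- then loads one row per tag, as requested.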

  • How can I take the part that parses the .XML file and make it a procedure.

    CREATE OR REPLACE PACKAGE BODY XMLSTUD6 AS
    /*
    Author:  Jimmy Harris
    Created: 5/25/2006
    Purpose: 1. This package has an XML file initialized to a variable called DOC.
             2. It will take the values from the XML file and insert them into a PL/SQL table.
             3. From the PL/SQL table it will insert the values into the STUDENTS table.
             4. After step 3, the STUDLOAD procedure will insert (sequence, status, .XML file, USER, SYSDATE, error message)
                into the AUDIT_XMLSTUD table regardless of whether the insert was successful; status is indicated by Y or
                NO, together with the original XML file that was processed, the date, and the user who executed the procedure.
                If the status was NO, it will insert the Oracle SQLERRM message into the REASON_FOR_ERROR column.
                If the status is Y, REASON_FOR_ERROR is NULL.
             5. Make sure you embed the XML file within quotes, i.e. 'the whole .xml file string', as the input
                parameter to the STUDLOAD procedure.
    This package accepts the whole .XML file as a CLOB input parameter, so that the end user does not have to
    modify the code.
    Modification history:
             1. 6/09/2006 Jimmy Harris: modified code, added the function WORD_CONVERTER1 to accept the requested
                text data and return a coded value back to our Welligent system.
             2. Was advised that a front-end type of functionality was not necessary for this issue, so I removed
                the INSERT_XML_FILE and UPDATE_XML_FILE procedures.
    */
    FUNCTION WORD_CONVERTER1 (v_domain   IN VARCHAR2 := null,
                              v_incoming IN VARCHAR2 := null) RETURN VARCHAR2 IS
      v_code VARCHAR2(32);
      -- A cursor declaration cannot contain INTO; the value is fetched below instead.
      CURSOR c_conv_wrd IS
        SELECT well
        FROM   conversion_table
        WHERE  domain   = UPPER(TRIM(v_domain))
        AND    incoming = UPPER(TRIM(v_incoming));
    BEGIN
      OPEN c_conv_wrd;
      LOOP
        FETCH c_conv_wrd INTO v_code;
        EXIT WHEN c_conv_wrd%NOTFOUND;
      END LOOP;
      CLOSE c_conv_wrd;
      RETURN v_code;
    END WORD_CONVERTER1;
    PROCEDURE STUDLOAD (DOC CLOB) IS
    v_parser xmlparser.Parser;
    v_doc xmldom.DOMDocument;
    v_nl xmldom.DOMNodeList;
    v_n xmldom.DOMNode;
    v_mm NUMBER;
    v_dd NUMBER;
    v_yyyy NUMBER;
    v_DATE DATE;
    v_race VARCHAR2(1);
    v_eth VARCHAR2(1);
    v_prim_lang VARCHAR2(1);
    v_house_lang VARCHAR2(1);
    v_gender VARCHAR2(1);
    TYPE stuxml_type IS TABLE OF STUDENTS%ROWTYPE;
    s_tab stuxml_type := stuxml_type();
    v_success VARCHAR2(200);
    v_failure VARCHAR2(200);
    l_error_code varchar2(200);
    BEGIN
    -- Create a parser.
    v_parser := xmlparser.newParser;
    xmlparser.setValidationMode(v_parser, FALSE);
    -- Parse the document and create a new DOM document.
    SYS.XMLPARSER.PARSECLOB ( v_parser, DOC );
    v_doc := SYS.XMLPARSER.getDocument(v_parser);
    -- Free resources associated with the Parser now it is no longer needed.
    xmlparser.freeParser(v_parser);
    -- Get a list of all the STUD nodes in the document using the XPATH syntax.
    v_nl := xslprocessor.selectNodes(xmldom.makeNode(v_doc),'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent');
    -- Loop through the list and create a new record in a table collection for each STUD record.
    FOR stud IN 0 .. xmldom.getLength(v_nl) - 1 LOOP
    v_n := xmldom.item(v_nl, stud);
    s_tab.extend;
    -- Use XPATH syntax to assign values to he elements of the collection.
    s_tab(s_tab.last).STUDENT_LAST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/LastName');
         s_tab(s_tab.last).STUDENT_FIRST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/FirstName');
         s_tab(s_tab.last).STUDENT_MI :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/MiddleName');
         v_dd := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Day');
         v_mm := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Month');
    v_yyyy := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Year');
         v_DATE := TO_DATE(v_mm||'-'||v_dd||'-'||v_yyyy,'MM-DD-YYYY');
         s_tab(s_tab.last).STUDENT_DOB := v_date;
         s_tab(s_tab.last).STUDENT_STREET :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/Street');
         s_tab(s_tab.last).STUDENT_APART_NO :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ApartmentNumber');
         s_tab(s_tab.last).STUDENT_COUNTY :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/County');
         s_tab(s_tab.last).STUDENT_STATE :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/State');
         s_tab(s_tab.last).STUDENT_ZIP :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ZipCode');
    v_race := WORD_CONVERTER1('RACE',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Race')));
         v_eth := WORD_CONVERTER1('EHTNICITY',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Ethnicity')));
         v_prim_lang:= WORD_CONVERTER1('PRIMARY_LANG',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/PrimaryLanguage')));
         v_house_lang:= WORD_CONVERTER1('SECONDARY_LANG',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/HouseholdLanguage')));
         v_gender := WORD_CONVERTER1('GENDER',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Gender')));
    s_tab(s_tab.last).STUDENT_RACE := v_race;
         s_tab(s_tab.last).STUDENT_ETHNIC := v_eth;
         s_tab(s_tab.last).STUDENT_PRI_LANG :=v_prim_lang;
         s_tab(s_tab.last).STUDENT_SEC_LANG := v_house_lang;
         s_tab(s_tab.last).STUDENT_GENDER :=v_gender;
    END LOOP;
    FOR stud IN s_tab.first..s_tab.last LOOP
    INSERT INTO STUDENTS (SHISID, SSN, DOE_SCHOOL_NUMBER,PATIENT_TYPE, TEACHER, HOMEROOM,STUDENT_LAST_NAME, STUDENT_FIRST_NAME, STUDENT_MI,STUDENT_DOB,
    STUDENT_BIRTH_CERT, STUDENT_COMM,STUDENT_MUSA, STUDENT_FAMSIZE, STUDENT_FAMINCOME,STUDENT_UNINSURED, STUDENT_LUNCH, STUDENT_ZIP,STUDENT_STATE,
    STUDENT_COUNTY, STUDENT_STREET,STUDENT_APART_NO, STUDENT_PHONE, STUDENT_H2O_TYPE,STUDENT_WASTE_TRT, STUDENT_HOME_SET, STUDENT_NONHOME_SET,
    STUDENT_GENDER, STUDENT_RACE, STUDENT_ETHNIC,STUDENT_PRI_LANG, STUDENT_SEC_LANG, STUDENT_ATRISK,EMER_COND_MEMO, ASSIST_DEVICE_TYPE,
    SCHOOL_ENTER_AGE,STUDENT_CURR_GRADE, S504_ELIG_DATE, S504_DEV_DATE,S504_REV_DATE, STUDENT_504, STUDENT_IEP,IEP_EXP_DATE, GRAD_CLASS, TYPE_DIPLOMA,
    GRADE_RETAIN, LIT_PASS_TEST_MATH, LIT_PASS_DATE_MATH,LIT_PASS_TEST_WRITE, LIT_PASS_DATE_WRITE, LIT_PASS_TEST_READ,LIT_PASS_DATE_READ, SPEC_ED_ELIG,
    SPEC_ED_CODE,TRANSPORT_CODE, TRANSPORT_NO, PRIME_HANDICAP,PRIME_HANDICAP_PERCENT, PRIME_HANDI_MANAGER, FIRST_ADD_HANDI,FIRST_ADD_HANDICAP_PERCENT,
    FIRST_ADD_HANDI_504, FIRST_ADD_HANDI_504_DATE, SECOND_ADD_HANDI, SECOND_ADD_HANDICAP_PERCENT, MED_EXTERNAL_NAME, INS_TYPE, INS_PRI, INS_NAME,
    INS_MEDICAID_NO, ELIGDATE, INS_PRIV_INSURANCE, INS_APPR_BILL, INS_APPR_DATE, INS_PARENT_APPR,INS_POL_NAME, INS_POL_NO, INS_CARRIER_NO,
    INS_CARRIER_NAME, INS_CARRIER_RELATE, INS_AFFECT_DATE, INS_COPAY_OV, INS_COPAY_RX, INS_COPAY_AMBUL,INS_COPAY_EMER, INS_COPAY_OUTPAT,STUDENT_INACTIVE,
    PHYS_ID, ENCOUNTERNUM,USERID,MODDATE, STUDENT_ID, S504_DISABILITY,CHAPTER1, WELLNESS_ENROLL, SCHOOL_OF_RESIDENCE,INITIAL_IEP_DATE, CALENDAR_TRACK,
    USA_BORN,ALT_ID, FUTURE_SCHOOL, IEP_LAST_MEETING,IEP_LAST_SETTING, IEP_LAST_REFER_EVAL, THIRD_ADD_HANDI,LEP, GIFTED, IEP_EXIT_REASON,
    CASE_MANAGER_ID, INTAKE_NOTES, CALLER_PHONE,CALL_DATE, CALLER_RELATIONSHIP, CALLER_NAME,BUSINESS_PHONE, FAX, EMAIL,HIGHEST_EDUCATION, INTAKE_DATE,
    SERVICE_COORDINATOR, DISCHARGE_DATE, DISCHARGE_REASON, DISCHARGE_NOTES,INTAKE_BY, INTAKE_STATUS, IEP_LAST_SERVED_DATE,IEP_APC_DATE, IEP_EXIT_DATE,
    ADDRESS2, LEGAL_STATUS, RELIGION, EMPLOYMENT_STATUS, TARG_POP_GROUP1, TARG_POP_GROUP2, MARITAL_STATUS,THIRD_ADD_HANDI_PERCENT, LAST_INTERFACE_DATE,
    SERVICE_PLAN_TYPE,CURRENT_JURISDICTION, FIPS, BIRTH_PLACE_JURISDICTION,BIRTH_PLACE_HOSPITAL, BIRTH_PLACE_STATE, BIRTH_PLACE_COUNTRY,
    OTHER_CLIENT_NAME, SIBLINGS_WITH_SERVICES, PERM_SHARE_INFORMATION,PERM_VERIFY_INSURANCE, REFERRING_AGENCY, REFERRING_INDIVIDUAL,AUTOMATIC_ELIGIBILITY,
    INTAKE_IEP_ID, FUTURE_SCHOOL2,FUTURE_SCHOOL3, TRANSLATOR_NEEDED, TOTAL_CHILDREN_IN_HOME,REFERRED_BY, FAMILY_ID, SCREENING_CONSENT_FLAG,PICTURE_FILE,
    DUAL_ENROLLED, DOE_SCHOOL_NUMBER2)
    VALUES (123456789025, null,null ,null,null,null ,s_tab(stud).STUDENT_LAST_NAME,s_tab(stud).STUDENT_FIRST_NAME,s_tab(stud).STUDENT_MI,
    s_tab(stud).STUDENT_DOB,null ,null,null ,null,null,null,null,s_tab(stud).STUDENT_ZIP,s_tab(stud).STUDENT_STATE ,s_tab(stud).STUDENT_COUNTY,
    s_tab(stud).STUDENT_STREET,s_tab(stud).STUDENT_APART_NO,null,null,null ,null , null,
    s_tab(stud).STUDENT_GENDER ,s_tab(stud).STUDENT_RACE , s_tab(stud).STUDENT_ETHNIC,
    s_tab(stud).STUDENT_PRI_LANG ,s_tab(stud).STUDENT_SEC_LANG, null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null,
    null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null ,null , null, null, null,null );
    END LOOP;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,DOC,'Y',NULL,SYSDATE);
    HTP.HTMLOPEN;
    v_success:= 'The values from the .XML file have been successfully inserted into the STUDENTS table in the Oracle Database.';
    htp.bold(v_success);
    HTP.HTMLCLOSE;
    COMMIT;
     -- Free any resources associated with the document now that it is no longer needed.
    xmldom.freeDocument(v_doc);
    EXCEPTION
    WHEN OTHERS THEN
    l_error_code := SQLERRM;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,nvl(DOC,TO_CLOB('No .XML file entered, user pressed button without entering correct information.')),'NO',l_error_code,SYSDATE);
    HTP.HTMLOPEN;
     v_failure:= 'The attempt to insert the .XML data into the STUDENTS table failed because: '||l_error_code;
    htp.bold(v_failure);
    HTP.HTMLCLOSE;
    COMMIT;
    END STUDLOAD;
    PROCEDURE UPDSTUDLOAD (DOC CLOB) IS
    v_parser xmlparser.Parser;
    v_doc xmldom.DOMDocument;
    v_nl xmldom.DOMNodeList;
    v_n xmldom.DOMNode;
    v_mm NUMBER;
    v_dd NUMBER;
    v_yyyy NUMBER;
    v_DATE DATE;
    v_race VARCHAR2(1);
    v_eth VARCHAR2(1);
    v_prim_lang VARCHAR2(1);
    v_house_lang VARCHAR2(1);
    v_gender VARCHAR2(1);
    TYPE stuxml_type IS TABLE OF STUDENTS%ROWTYPE;
    s_tab stuxml_type := stuxml_type();
    v_success VARCHAR2(200);
    v_failure VARCHAR2(200);
    l_error_code varchar2(200);
    BEGIN
    -- Create a parser.
    v_parser := xmlparser.newParser;
    xmlparser.setValidationMode(v_parser, FALSE);
    -- Parse the document and create a new DOM document.
    SYS.XMLPARSER.PARSECLOB ( v_parser, DOC );
    v_doc := SYS.XMLPARSER.getDocument(v_parser);
    -- Free resources associated with the Parser now it is no longer needed.
    xmlparser.freeParser(v_parser);
    -- Get a list of all the STUD nodes in the document using the XPATH syntax.
    v_nl := xslprocessor.selectNodes(xmldom.makeNode(v_doc),'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent');
    -- Loop through the list and create a new record in a table collection for each STUD record.
    FOR stud IN 0 .. xmldom.getLength(v_nl) - 1 LOOP
    v_n := xmldom.item(v_nl, stud);
    s_tab.extend;
     -- Use XPATH syntax to assign values to the elements of the collection.
    s_tab(s_tab.last).STUDENT_LAST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/LastName');
         s_tab(s_tab.last).STUDENT_FIRST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/FirstName');
         s_tab(s_tab.last).STUDENT_MI :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/MiddleName');
         v_dd := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Day');
         v_mm := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Month');
    v_yyyy := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Year');
          v_DATE := TO_DATE(v_mm||' '||v_dd||' '||v_yyyy,'MM DD YYYY');
         s_tab(s_tab.last).STUDENT_DOB := v_date;
         s_tab(s_tab.last).STUDENT_STREET :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/Street');
         s_tab(s_tab.last).STUDENT_APART_NO :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ApartmentNumber');
         s_tab(s_tab.last).STUDENT_COUNTY :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/County');
         s_tab(s_tab.last).STUDENT_STATE :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/State');
         s_tab(s_tab.last).STUDENT_ZIP :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ZipCode');
    v_race := WORD_CONVERTER1('RACE',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Race')));
         v_eth := WORD_CONVERTER1('EHTNICITY',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Ethnicity')));
         v_prim_lang:= WORD_CONVERTER1('PRIMARY_LANG',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/PrimaryLanguage')));
         v_house_lang:= WORD_CONVERTER1('SECONDARY_LANG',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/HouseholdLanguage')));
         v_gender := WORD_CONVERTER1('GENDER',UPPER(xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Gender')));
    s_tab(s_tab.last).STUDENT_RACE := v_race;
         s_tab(s_tab.last).STUDENT_ETHNIC := v_eth;
         s_tab(s_tab.last).STUDENT_PRI_LANG :=v_prim_lang;
         s_tab(s_tab.last).STUDENT_SEC_LANG := v_house_lang;
         s_tab(s_tab.last).STUDENT_GENDER :=v_gender;
    END LOOP;
    FOR stud IN s_tab.first..s_tab.last LOOP
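          -- NOTE: the WHERE clause below uses a hard-coded SHISID, so each
          -- iteration updates the same student row rather than a row matched
          -- from the XML document.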
         UPDATE STUDENTS
         SET
         STUDENT_LAST_NAME = s_tab(stud).STUDENT_LAST_NAME,
         STUDENT_FIRST_NAME = s_tab(stud).STUDENT_FIRST_NAME,
         STUDENT_MI = s_tab(stud).STUDENT_MI,
         STUDENT_DOB = s_tab(stud).STUDENT_DOB,
         STUDENT_ZIP = s_tab(stud).STUDENT_ZIP,
         STUDENT_STATE = s_tab(stud).STUDENT_STATE,
         STUDENT_COUNTY = s_tab(stud).STUDENT_COUNTY,
         STUDENT_STREET = s_tab(stud).STUDENT_STREET,
         STUDENT_APART_NO = s_tab(stud).STUDENT_APART_NO
         WHERE SHISID = 123456789025;
    END LOOP;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,DOC,'Y',NULL,SYSDATE);
    HTP.HTMLOPEN;
    v_success:= 'The updated .XML file has been successfully saved to the STUDENTS table in the Oracle Database.';
    htp.bold(v_success);
    HTP.HTMLCLOSE;
    COMMIT;
     -- Free any resources associated with the document now that it is no longer needed.
    xmldom.freeDocument(v_doc);
    EXCEPTION
    WHEN OTHERS THEN
    l_error_code := SQLERRM;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,nvl(DOC,TO_CLOB('No .XML file entered, user pressed button without entering correct information.')),'NO',l_error_code,SYSDATE);
    HTP.HTMLOPEN;
     v_failure:= 'The attempt to update the STUDENTS table from the .XML data failed because: '||l_error_code;
    htp.bold(v_failure);
    HTP.HTMLCLOSE;
    COMMIT;
    END UPDSTUDLOAD;
    PROCEDURE DELSTUDLOAD (DOC CLOB) IS
    v_parser xmlparser.Parser;
    v_doc xmldom.DOMDocument;
    v_nl xmldom.DOMNodeList;
    v_n xmldom.DOMNode;
    v_mm NUMBER;
    v_dd NUMBER;
    v_yyyy NUMBER;
    v_DATE DATE;
    TYPE stuxml_type IS TABLE OF STUDENTS%ROWTYPE;
    s_tab stuxml_type := stuxml_type();
    v_success VARCHAR2(200);
    v_failure VARCHAR2(200);
    l_error_code varchar2(200);
    BEGIN
    -- Create a parser.
    v_parser := xmlparser.newParser;
    xmlparser.setValidationMode(v_parser, FALSE);
    -- Parse the document and create a new DOM document.
    SYS.XMLPARSER.PARSECLOB ( v_parser, DOC );
    v_doc := SYS.XMLPARSER.getDocument(v_parser);
    -- Free resources associated with the Parser now it is no longer needed.
    xmlparser.freeParser(v_parser);
    -- Get a list of all the STUD nodes in the document using the XPATH syntax.
    v_nl := xslprocessor.selectNodes(xmldom.makeNode(v_doc),'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent');
    -- Loop through the list and create a new record in a table collection for each STUD record.
    FOR stud IN 0 .. xmldom.getLength(v_nl) - 1 LOOP
    v_n := xmldom.item(v_nl, stud);
    s_tab.extend;
     -- Use XPATH syntax to assign values to the elements of the collection.
    s_tab(s_tab.last).STUDENT_LAST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/LastName');
         s_tab(s_tab.last).STUDENT_FIRST_NAME :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/FirstName');
         s_tab(s_tab.last).STUDENT_MI :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Name/MiddleName');
         v_dd := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Day');
         v_mm := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Month');
    v_yyyy := xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/BirthDate/Year');
          v_DATE := TO_DATE(v_mm||' '||v_dd||' '||v_yyyy,'MM DD YYYY');
         s_tab(s_tab.last).STUDENT_DOB := v_date;
         s_tab(s_tab.last).STUDENT_STREET :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/Street');
         s_tab(s_tab.last).STUDENT_APART_NO :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ApartmentNumber');
         s_tab(s_tab.last).STUDENT_COUNTY :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/County');
         s_tab(s_tab.last).STUDENT_STATE :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/State');
         s_tab(s_tab.last).STUDENT_ZIP :=xslprocessor.valueOf(v_n,'/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent/Address/ZipCode');
    END LOOP;
    FOR stud IN s_tab.first..s_tab.last LOOP
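          -- NOTE: the WHERE clause below uses a hard-coded SHISID, so each
          -- iteration deletes the same row; the loop does not vary per student.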
         DELETE FROM STUDENTS
         WHERE SHISID = 123456789025;
    END LOOP;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,DOC,'Y',NULL,SYSDATE);
    HTP.HTMLOPEN;
     v_success:= 'The records from the .XML file have been successfully deleted from the STUDENTS table in the Oracle Database.';
    htp.bold(v_success);
    HTP.HTMLCLOSE;
    COMMIT;
     -- Free any resources associated with the document now that it is no longer needed.
    xmldom.freeDocument(v_doc);
    EXCEPTION
    WHEN OTHERS THEN
    l_error_code := SQLERRM;
    INSERT INTO AUDIT_XMLSTUD1(XMLSTUDPK,USERID,XMLFILE,STATUS,REASON_FOR_ERROR,DATE_MODIFIED)
    VALUES(SEQ_XMLSTUD1.NEXTVAL,USER,nvl(DOC,TO_CLOB('No .XML file entered, user pressed button without entering correct information.')),'NO',l_error_code,SYSDATE);
    HTP.HTMLOPEN;
     v_failure:= 'The attempt to delete the .XML data from the STUDENTS table failed because: '||l_error_code;
    htp.bold(v_failure);
    HTP.HTMLCLOSE;
    COMMIT;
    END DELSTUDLOAD;
    END XMLSTUD6;
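    For comparison, the same shredding could be done set-based with XMLTable (available from 10.2 onward), avoiding the DOM tree and the row-by-row loops. The statement below is a minimal sketch under stated assumptions, not a drop-in replacement: it references the package's DOC parameter, maps only a few columns (the column names and sizes are illustrative), and keeps the hard-coded SHISID from the original code:

    -- Hypothetical set-based alternative to the DOM loop in STUDLOAD.
    -- Runs inside PL/SQL, where DOC is the procedure's CLOB parameter.
    INSERT INTO STUDENTS (SHISID, STUDENT_LAST_NAME, STUDENT_FIRST_NAME, STUDENT_MI, STUDENT_DOB)
    SELECT 123456789025,
           x.last_name, x.first_name, x.middle_name,
           TO_DATE(x.dob_mm||' '||x.dob_dd||' '||x.dob_yyyy, 'MM DD YYYY')
    FROM XMLTABLE('/com.welligent.Student.BasicStudent.Create/DataArea/NewData/BasicStudent'
           PASSING XMLTYPE(DOC)
           COLUMNS last_name   VARCHAR2(60) PATH 'Name/LastName',
                   first_name  VARCHAR2(60) PATH 'Name/FirstName',
                   middle_name VARCHAR2(60) PATH 'Name/MiddleName',
                   dob_dd      VARCHAR2(2)  PATH 'BirthDate/Day',
                   dob_mm      VARCHAR2(2)  PATH 'BirthDate/Month',
                   dob_yyyy    VARCHAR2(4)  PATH 'BirthDate/Year') x;

    A quick way to smoke-test the package from SQL*Plus is to pass a one-student document. The element layout below is inferred from the XPath expressions in the package; every value (including the Race, Ethnicity, language and Gender strings that WORD_CONVERTER1 would have to recognize) is a made-up placeholder:

    DECLARE
      l_doc CLOB :=
      '<com.welligent.Student.BasicStudent.Create>
        <DataArea>
         <NewData>
          <BasicStudent>
           <Name><LastName>Doe</LastName><FirstName>Jane</FirstName><MiddleName>Q</MiddleName></Name>
           <BirthDate><Day>21</Day><Month>8</Month><Year>1999</Year></BirthDate>
           <Address><Street>1 Main St</Street><ApartmentNumber>2</ApartmentNumber>
            <County>Kent</County><State>DE</State><ZipCode>19901</ZipCode></Address>
           <Race>WHITE</Race><Ethnicity>UNKNOWN</Ethnicity>
           <PrimaryLanguage>ENGLISH</PrimaryLanguage><HouseholdLanguage>ENGLISH</HouseholdLanguage>
           <Gender>FEMALE</Gender>
          </BasicStudent>
         </NewData>
        </DataArea>
       </com.welligent.Student.BasicStudent.Create>';
    BEGIN
      XMLSTUD6.STUDLOAD(l_doc);
    END;
    /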

    Try opening the problem files using a text editor or file viewer to see what the first few bytes contain. All valid FM binary files for FM 11 will contain <MakerFile 11.0> in the first bytes of the file.
    When updating books, it's sometimes better to just create a new book file and add the files to that.
    When renaming files in a book, changes at the system level will break any links/cross-references between files, so it's always best to use the Rename option in the Book file to change FM file names. This will maintain the correct linkages.
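    Where a command line is available, the byte check is a one-liner; the file name below is hypothetical:

    head -c 16 chapter1.fm

    A healthy FM 11 binary file prints <MakerFile 11.0>.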

  • Q & A on the NET8 LOGGING AND TRACE parameters

    Product: SQL*NET
    Date written: 1999-07-30
    Q & A ON THE NET8 LOGGING AND TRACE PARAMETERS
    ==================================================
    PURPOSE
    This note answers common questions about the logging- and trace-related parameters of NET8.
    Explanation
    1. Why is tracing used in NET8, and which components can be traced?
    Tracing records network events as they occur: a detailed series of trace
    statements is generated for each event.
    Running with tracing on yields more internal information about the NET8
    components than the log files provide.
    Because the same events that lead to an error are written to the trace
    files, this output can be used to determine the cause of a problem.
    Note: tracing can consume considerable disk space and cause a marked drop
    in system performance, so it is recommended only when genuinely needed.
    Example
    Reference Documents
    << Components that can be traced with the trace facility >>
    * Network listener
    * Net8 components on the client and server
    * Connection Manager
    * Oracle Names Server
    * Oracle Names Control Utility
    * TNSPING utility
    2. Which parameters must be set to use the trace facility?
    Tracing is enabled by setting specific trace parameters; this can be done
    through any one of the methods and utilities listed below.
    * Component Configuration Files
    * Component Control Utilities
    * Oracle Trace
    To set tracing parameters through a component's configuration file:
    1) Set the following tracing parameters in the component's configuration file.
    - SQLNET.ORA for client or server, LISTENER.ORA for listener:
    TRACE_LEVEL_<CLIENT/LISTENER/SERVER>=(0/4/10/16)
    TRACE_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    LOG_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    2) If the configuration file was modified while the components were running,
    restart the components so that the changed parameters take effect.
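    For instance, a client-side sqlnet.ora prepared for a support-level trace
    might contain the following (the directory paths are hypothetical):

    TRACE_LEVEL_CLIENT = 16
    TRACE_DIRECTORY_CLIENT = /oracle/network/trace
    LOG_DIRECTORY_CLIENT = /oracle/network/log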
    To set trace parameters through the component control utilities:
    1) For the listener, the TRACE command of the Listener Control Utility
    (lsnrctl) can set the trace level even while the listener is running.
    EX)
    RC80:/mnt3/rctest80> lsnrctl
    LSNRCTL for SVR4: Version 8.0.4.0.0 - Production on 01-SEP-98 15:16:52
    (c) Copyright 1997 Oracle Corporation. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> trace admin
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    Opened trace file: /mnt4/coe/app/oracle/product/8.0.4/network/trace/
    lsnr_coe.trc
    The command completed successfully
    LSNRCTL> trace off
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    The command completed successfully
    LSNRCTL> exit
    RC80:/mnt3/rctest80>
    2) For Oracle Names, the TRACE_LEVEL command of the Names Control Utility
    (namesctl) can likewise set the trace level while Oracle Names is running.
    Note: for the Connection Manager, the trace level can only be set in its
    configuration file, CMAN.ORA.
    Oracle Trace, part of Oracle Enterprise Manager (OEM), is a tracing tool
    that sets the trace parameters and lets you view the trace data through a GUI.
    3. Are there other utilities that can interpret traced data?
    Trace Assistant can interpret traced information from *.trc files (generated
    in the SQL*Net v2 format) or *.txt files (output produced by Oracle Trace
    and TRCFMT).
    The utility helps users diagnose and resolve problems caused by network
    issues by providing more information about:
    * the source and destination of trace files
    * the flow of packets between network nodes
    * which component of Net8 is failing
    * pertinent error codes
    Trace Assistant is run with the following command:
    trcasst [options] <filename>
    Trace Assistant Text Formatting Options
    -o Displays connectivity and Two Task Common (TTC) information.
    After the -o the following options may be used:
    c (for summary connectivity information)
    d (for detailed connectivity information)
    u (for summary TTC information)
    t (for detailed TTC information)
    q (displays SQL commands enhancing summary TTC
    information)
    -p Oracle Internal Use Only
    -s Displays statistical information
    -e Enables display of error information After the -e, zero
    or one error decoding level may follow:
    0 or nothing (translates the NS error numbers dumped
    from the nserror function plus lists all
    other errors)
    1 (displays only the NS error translation from
    the nserror function)
    2 (displays error numbers without translation)
    If no options are supplied, the defaults -odt -e -s are used, giving detailed
    connectivity, Two-Task Common, error decoding level 0, and statistical
    information (see the example below).
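    For instance, to decode a server trace file with the default level of detail
    and capture the output in a text file (the trace file name is hypothetical):

    trcasst -odt -e0 -s svr_1234.trc > svr_1234.txt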
    4. How does this differ from SQL*Net v2 tracing?
    Net8 tracing includes every option provided in the previous version,
    SQL*NET V2, and adds the Oracle Trace facility.
    This allows your trace information to be managed in the Oracle Trace
    Repository through the OEM console.
    5. What are the *.cdf and *.dat files?
    The *.cdf and *.dat files are generated by Oracle Trace, and the trcfmt
    utility must be used to read them.
    trcfmt extracts the data held in the binary files (*.dat and *.cdf
    extensions) into plain text (.txt extension). To use the tool, run the
    following command:
    $ trcfmt collection.cdf
    Note: if the tool is run from a directory other than the one containing the
    .cdf and .dat files, the path must be included. If tracing information from
    several processes was collected into a single pair of .cdf and .dat files,
    each process is extracted to a file named process_id.txt.
    6. Which trace-related configuration files are there, and which parameters
    can be set in them?
    ==========================================================================
    || SQLNET.ORA Parameters ||
    ==========================================================================
    DAEMON.TRACE_DIRECTORY
    Purpose: Controls the destination directory of the Oracle Enterprise Manager daemon trace file
    Default Value: $ORACLE_HOME/network/trace
    Description Available: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_DIRECTORY=/oracle/traces
    DAEMON.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level for the Oracle Enterprise Manager daemon.
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Description Available: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_LEVEL=10
    DAEMON.TRACE_MASK
    Purpose: Specifies that only the Oracle Enterprise Manager daemon trace entries are logged into the trace file.
    Default Value: $ORACLE_HOME/network/trace
    Description Available: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_MASK=(106)
    LOG_DIRECTORY_CLIENT
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_CLIENT=/oracle/network/trace
    LOG_DIRECTORY_SERVER
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Valid in File: SQLNET.ORA
    Example: LOG_DIRECTORY_SERVER=/oracle/network/trace
    LOG_FILE_CLIENT
    Purpose: Controls the log output filename for an Oracle client.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_CLIENT=client
    LOG_FILE_SERVER
    Purpose: Controls the log output filename for an Oracle server.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_SERVER=svr
    NAMESCTL.TRACE_LEVEL
    Purpose: Indicates the level at which the NAMESCTL program should
    be traced.
    Default Value: OFF
    Values: OFF, USER, or ADMIN
    Example: NAMESCTL.TRACE_LEVEL=ADMIN
    NAMESCTL.TRACE_FILE
    Purpose: Indicates the file in which the NAMESCTL trace output is
    placed.
    Default Value: namesctl_PID.cdf and namesctl_PID.dat
    Example: NAMESCTL.TRACE_FILE=NMSCTL
    NAMESCTL.TRACE_DIRECTORY
    Purpose: Indicates the directory where trace output from the NAMESCTL
    utility is placed.
    Default Value: $ORACLE_HOME/network/trace
    Example: NAMESCTL.TRACE_DIRECTORY=/ORACLE/TRACE
    NAMESCTL.TRACE_UNIQUE
    Purpose: Indicates whether a process identifier is appended to the name of each trace file generated, so that several can coexist.
    Default Value: OFF
    Values: OFF or ON
    Example: NAMESCTL.TRACE_UNIQUE = ON
    TNSPING.TRACE_DIRECTORY
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TNSPING.TRACE_DIRECTORY=/oracle/traces
    TNSPING.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TNSPING.TRACE_LEVEL=10
    TRACE_DIRECTORY_CLIENT
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_CLIENT=/oracle/traces
    TRACE_DIRECTORY_SERVER
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_SERVER=/oracle/traces
    TRACE_FILE_CLIENT
    Purpose: Controls the name of the client trace file
    Default Value: SQLNET.CDF and SQLNET.DAT
    Example: TRACE_FILE_CLIENT=cli
    TRACE_FILE_SERVER
    Purpose: Controls the name of the server trace file
    Default Value: SVR_PID.CDF and SVR_PID.DAT
    Example: TRACE_FILE_SERVER=svr
    TRACE_LEVEL_CLIENT
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_CLIENT=10
    TRACE_LEVEL_SERVER
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_SERVER=10
    TRACE_UNIQUE_CLIENT
    Purpose: Used to make each client trace file have a unique name to prevent each trace file from being overwritten with the next occurrence of the client. The PID is attached to the end of the filename.
    Default Value: OFF
    Example: TRACE_UNIQUE_CLIENT=ON
    USE_CMAN
    Purpose: If the session is in an Enhanced Discovery Network with a Names Server, this parameter forces all sessions to go through a Connection Manager to get to the server.
    Default Value: FALSE
    Values: TRUE or FALSE
    Example: USE_CMAN=TRUE
    ==========================================================================
    || LISTENER.ORA Parameters ||
    ==========================================================================
    LOG_DIRECTORY_listener_name
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_LISTENER=/oracle/traces
    LOG_FILE_listener_name
    Purpose: Specifies the filename where the log information is
    written
    Default Value: listener_name.log
    Example: LOG_FILE_LISTENER=lsnr
    TRACE_DIRECTORY_listener_name
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_LISTENER=/oracle/traces
    TRACE_FILE_listener_name
    Purpose: Controls the name of the listener trace file
    Default Value: LISTENER_NAME.CDF and LISTENER_NAME.DAT
    Example: TRACE_FILE_LISTENER=lsnr
    TRACE_LEVEL_listener_name
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_LISTENER=10
    ==========================================================================
    || NAMES.ORA Parameters ||
    ==========================================================================
    NAMES.TRACE_DIRECTORY
    Purpose: Indicates the name of the directory to which trace files
    from a Names Server trace session are written.
    Default Value: platform specific
    Example: names.trace_directory = complete_directory_name
    NAMES.TRACE_FILE
    Purpose: Indicates the name of the output file from a Names Server trace session. The filename extension is always .trc
    Default Value: names
    Example: names.trace_file = filename
    NAMES.TRACE_LEVEL
    Purpose: Indicates the level at which the Names Server is to be
    traced.
    Default Value: OFF
    Example: names.trace_level = OFF
    NAMES.TRACE_UNIQUE
    Purpose: Indicates whether each trace file has a unique name, allowing multiple trace files to coexist. If the value is set to ON, a process identifier is appended to the name of each trace file generated.
    Default Value: OFF
    Example: names.trace_unique = ON
    names.trace_file = names_05.trc
    ==========================================================================
    || CMAN.ORA Parameters ||
    ==========================================================================
    TRACING
    Default Value: NO
    Example: TRACING = NO
    References
    7. Is there a way to stop logging information from being written to listener.log?
    Whenever an application connects or disconnects through NET8, an entry is
    written to listener.log. With many users connecting, listener.log can grow
    rapidly, fill up the file system holding $ORACLE_HOME, and ultimately leave
    the database hung.
    Customers sometimes ask to cap the amount of data written to listener.log,
    but no such feature exists. Listener logging can, however, be turned ON or OFF.
    In Net8, setting "LOGGING_(the listener name)=off" in listener.ora stops
    the listener's logging.
    After the change, the listener must be stopped and restarted for the new
    parameter to take effect.
    Note: is this parameter also valid in SQL*NET 2.3.x?
    Yes. It is set in listener.ora exactly as in NET8.
    EX)
    LOGGING_LISTENER=OFF
    This parameter disables the listener's logging entirely; it does not filter
    so that only some entries are logged.
    The parameter is known to NET8 and does not appear in the SQL*NET 2.3.x
    manuals, but it can be used there normally.
