Testing SQL Queries (QA Tool)

Hello All,
Today I heard about a project on testing SQL queries.
I always test queries manually, by going through the tables, WHERE clauses, etc.
Is there any QA tool to test SQL queries?
If the queries are application-based, then how can we test them?
Kindly reply if there is one, so that I can join the project.
Thanks & Regards,
Kannan Balasubramanian

If by application-based you mean a front-end tool to query the database, then you put a trace monitor on that user or group of users. Most QA tools are built specifically for ongoing application development; I don't know of any out-of-the-box tool. I just test every single thing I design.
TimS
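If the back end happens to be Oracle (the thread doesn't say which database is involved), one concrete way to put a "trace monitor" on a user is DBMS_MONITOR session tracing. This is only a sketch: the SID/serial# values are placeholders you would look up first, the APP_USER account is hypothetical, and the resulting trace file is summarised afterwards with tkprof.
SELECT sid, serial#, username
  FROM v$session
 WHERE username = 'APP_USER';   -- hypothetical application account
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,   -- SID from the query above
                                    serial_num => 4567,  -- SERIAL# from the query above
                                    waits      => TRUE,
                                    binds      => TRUE);
END;
/
-- ...let the user drive the application for a while, then:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
END;
/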

Similar Messages

  • Want to write sql queries online

    Hi,
    I want to write and test SQL queries online. Because the Oracle database is too large, I do not want to install it on my system. Is there any virtual environment where I can learn and test SQL queries without installing anything on my local machine?

    SQL Fiddle apparently offers a test bed for various databases including Oracle 11gR2.
    I haven't used it myself but it was highly recommended by SQL Indexing Tutorial | Use The Index, Luke! which is a good SQL performance tutorial.

  • How to test basic sql queries online(for Sql Server 2012 and up versions) ?

    Hi,
    I need to test basic SQL queries using SQL Server, without installing SQL Server on the system.
    Is there any ready-made online facility which can solve such difficulties?
    Please suggest some URLs.
    Thanks.

    Thanks for the reply.
    I am looking specifically for SQL Server.
    Hi Maggy,
    I strongly recommend you install SQL Server 2012 Express edition and test your T-SQL queries there. You can download it from the following site:
    Microsoft® SQL Server® 2012 Express:
    http://www.microsoft.com/en-hk/download/details.aspx?id=29062
    Regards,
    Elvis Long
    TechNet Community Support

  • Developing portlets for dummies (sql queries)

    Hello, I've been trying to build a dynamic menu. First I went with just plain old PL/SQL: I created a function in the portal schema that returns an unordered, nested list of the pages in my page group and called that function in a regular PL/SQL item on my portal page. I did this by querying the wwpob_page$ table, and that went just great in my test & development setup (of which I am the admin, of course :))
    Then I realized that since I'm not an administrator of the server hosting our portal and I only have very limited privileges (I am only a page group administrator), I will probably not be allowed to use this function, nor will they agree to install it in the portal schema, so I decided I should build a portlet that does the same thing (so it can be registered and so on, and so it can use the synonyms and tools that are available to registered providers). There already is such a portlet (and provider) registered for use in the target portal, but I don't like it because it uses tables and hard-coded styles, so I will cook my own, better version. :)
    So I downloaded an example portlet and am getting the hang of it, but now I just can't for the life of me figure out how to enable any SQL queries. I have run the provsyns-.sql script, logged in as test_provider/portal and installed my portlet package as the test_provider user. I can see the available PL/SQL packages in Toad, but there are no tables or views for me to see. That means I can't query the portal tables that I need to.
    edit: OK, stupid user error; I suck at using Toad, so I was looking at the wrong schema altogether :D So now I see the public views and packages, so forget that bit of the question.
    But still, I cannot see even the wwsbr_all_folders view, much less the wwpob_page$ table. I cannot see any way to find the pages that are in my page group. Somehow it must be doable, right?
    Have I done something wrong, missed a step in enabling my test provider / schema perhaps? I don't really know what I'm doing, but I followed the instructions here: http://home.c2i.net/toreingolf/oracle/portal/my_first_plsql_portlet.htm (excellent instructions, thanks!)
    So should my portlet be able to access those tables or not? How the heck has the third-party portlet maker done it?
    I'm on OracleAS Portal 10g Release 2 (10.1.4)
    Edited by: Baguette on 23-Apr-2009 05:13
    Edited by: Baguette on 23-Apr-2009 05:32

    I see your perspective now, and let me give a perspective on my first reply too.
    What I proposed to you was the answer to what I quoted in the message, that is, why you didn't see those views in the new schema you created. And it is still OK, but it is done in the portal schema, for which you should have privileges too, and I assumed you had. My mistake!
    Now I can relate your privilege concerns to your earlier message (quoted in full above), where you built the dynamic menu as a function querying wwpob_page$, realised you are only a page group administrator, and switched to building your own portlet but cannot see the portal tables from the test_provider schema.
    - By downloading an example portlet, you probably mean you created a new schema, because provsyns works on a schema.
    - If you are not the administrator of the portal, and may not be able to access some portions of the portal, it means that you do not use the portal user (the user which serves as the owner of the portal schema).
    - Now, your plan to create a new schema and give it those privileges would still not work, because by creating a new schema you cannot sneak into the original portal schema if you do not have privileges to do it. Obvious, right? Otherwise it would be a vulnerability in the software: you could see what you are not allowed to see simply by creating a new schema.
    - However, there is a bright side here. The views return records based on your privileges.
    - So if your administrators have generated them already, or if they generate them on the original portal schema, then you may see the pages and items that you have privileges to see and no more.
    So now, you may ask the administrators if they have already done it, and if not, whether they would be willing to do it.
    Hope that helps!
    AMN
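    For what it's worth, a small sketch of that last point about the secure views: once the public API views exist in (or are granted from) the portal schema, a query from the provider schema only returns the rows the calling user is entitled to see. The portal schema name below is an assumption based on the test_provider/portal login mentioned above; adjust it to your installation.
    -- How many folders/pages this user is allowed to see at all
    SELECT COUNT(*)
      FROM portal.wwsbr_all_folders;
    -- The visible folders themselves (column lists vary between Portal versions, hence SELECT *)
    SELECT *
      FROM portal.wwsbr_all_folders;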

  • How to test SQL query performance - realiably?

    I have certain queries and I want to test which one is faster, and how big is the difference.
    How can I do this reliably?
    The problem is, when I execute the queries, Oracle does its caching and execution planning and whatnot, and the results of the queries depend on the order in which I execute them.
    Example: query A and query B, supposed to return same data.
    query A, run 1: 587 seconds
    query A, run 2: 509 seconds
    query B, run 1: 474 seconds
    query B, run 2: 451 seconds
    It would seem that A is somewhat faster than B, but if I change the order and execute B before A, results are different.
    Also I'm running the queries in SQL Developer, and it only returns the first 100 rows; how can I remove this effect and simulate the real scenario where all rows are fetched?
    I can also use EXPLAIN PLANs and look at the costs, but I'm not sure how much I can trust those either. I understand they are only estimations, and even if cost(a) = 1.5 * cost(b), b could still end up executing faster in practice due to inaccuracies in the cost calculation... right? EDIT: actually, even if cost(a) = 5000 * cost(b), b can still execute faster... it seems query A's cost is 15836 and B's cost is 3, while A seems to be faster in practice.
    Edited by: user620914 on 19-Jan-2010 01:42

    user620914 wrote:
    I have to say I don't understand your point either :)
    What are you saying, that people should not test their SQL performance? That tools such as autotrace are useless?
    No. What I'm saying is that you need a baseline to make an informed decision about SQL performance.
    What does a 4 second SQL performance number mean for query foo? Nothing really. Wearing my DBA cap, I would point out that this is actually utterly useless for me to determine the impact of your query on production, or to use it to determine how to scale it.
    If instead you tell me that it hits that table using an index range scan, I know what it is doing and have a far better idea what it will do to the production instance.
    Thus my questioning of this "elapsed time" measurement approach. As a DBA I cannot use it... and I'm not sure what benefit (wearing my developer hat) you will find from it either.
    You can form your SQL queries better or worse, or select your table structures / indexes better or worse. Some choices may end up executing orders of magnitude slower than others. Obviously you can't get exact measurements ("this query executes in 43123 ns") and there are a lot of unpredictable variables that affect the end performance. Still, it's often better to test your queries' / tables' performance before implementing them in the application than not.
    Exactly. I'm not questioning the fact that optimising your code (and ALL your code, not just SQL) is a Good Thing (tm) - but how you go about that optimisation process.
    For example, your PL/SQL code fires off a query. It returns on average 10,000 rows, hits a single partition (SQL enables partitioning pruning) and then uses a local bitmap index to identify the rows.
    An optimal query by the sounds of it, and one that will perform and scale well.. even when the database instance needs to service a 100 clients using your code and running this query.
    Only, the code does a single bulk collect of all the rows and stuff it into dedicated process memory (PGA). Servicing a 100 clients means that dedicated server memory is now needed for 100x10000 rows - there's insufficient free memory, causing the kernel to start swapping pages in and out of memory heavily as all 100 client sessions are active and wanting to process the rows returned by the optimal query.
    What happens to scalability and performance now?
    Testing for performance is not simply measuring a query and then trying to use that or extrapolate that to determine application performance and the impact on production.
    It starts with the design of the tables, the design of the application, and the writing of the code (application and SQL). It is not something that should be done after the fact, as in "okay, application all done, let's see how she performs!"... and especially not using time as the baseline for performance measurement.
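    One way to put that advice into practice in SQL*Plus is to compare candidates by their actual rowsource statistics rather than by elapsed time alone. This is a generic sketch against a hypothetical table big_table, not something taken from the thread:
    SET SERVEROUTPUT OFF   -- otherwise DISPLAY_CURSOR reports the DBMS_OUTPUT call instead
    SELECT /*+ GATHER_PLAN_STATISTICS */ owner, COUNT(*)
      FROM big_table
     GROUP BY owner;
    -- Actual rows, buffer gets and elapsed time per plan step for the statement above
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Running each candidate this way, and fetching all rows (for example by spooling to a file), also removes the "first 100 rows only" effect mentioned in the question.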

  • How to get the SQL queries based on SQL_ID.

    Hi Experts,
    I want to get the SQL queries based on SQL_ID.
    I have tried the following query, but I am not getting the full query text.
    [code]SET linesize 132 pagesize 999
    column sql_fulltext format a60 word_wrap
    break on sql_text skip 1
    SELECT   REPLACE (TRANSLATE (sql_text, '0123456789', '999999999'), '9', ''),sql_id
    FROM   dba_hist_sqltext s
    WHERE   s.sql_id = '7tvurftg8zryb';[/code]
    One of my friends said to use Grid Control to get the full query text.
    Can you please tell me how to use Grid Control, or suggest any other method to get the full query based on the SQL_ID?
    Please help me.
    Thanks in advance.

    There are a number of options to set if the sql_text is really long. But it is better to use a tool (TOAD) instead, as it's really helpful and easy to use (see my previous comment).
    column sql_text format A10000
    set echo off
    set head off
    set feed off
    set verify off
    set termout off
    set lines 10000
    set long 1000000
    set trimspool on
    set pages 0
    Thanks!
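    For completeness, a minimal SQL*Plus sketch of the non-tool route; it assumes the statement is still in the shared pool (V$SQL) or has been captured by AWR (DBA_HIST_SQLTEXT, which requires the Diagnostics Pack licence):
    SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 32767 TRIMSPOOL ON PAGESIZE 0
    -- Still cached in the shared pool
    SELECT sql_fulltext
      FROM v$sql
     WHERE sql_id = '7tvurftg8zryb'
       AND rownum = 1;
    -- Captured in the AWR history
    SELECT sql_text
      FROM dba_hist_sqltext
     WHERE sql_id = '7tvurftg8zryb';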

  • SQL Server Data Tools – Business Intelligence for Visual Studio 2013 (SSDT BI) and SSIS on SQL Server 2012

    This great blog entry titled
    SQL Server Data Tools - Business Intelligence for Visual Studio 2013 (SSDT BI), linked below, highlights the lack of support for SSIS projects using SQL Server 2012 with VS 2013 and SSDT BI for VS 2013. I see there is a new version
    of SSDT BI for VS 2013 (12.0.2430.0, File Name: SSDTBI_x86_ENU.exe, Date Published: 10/27/2014), linked below.
    Does this version support SSIS projects targeting SQL Server 2012, using VS 2013 and SSDT BI for VS 2013?
    http://blogs.msdn.com/b/analysisservices/archive/2014/04/02/sql-server-data-tools-business-intelligence-for-visual-studio-2013-ssdt-bi.aspx
    http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Hi cjrinpdx,
    According to the picture, it seems that SSIS 2012 support is not included in SSDT-BI for VS 2013. And based on previous versions, SSIS has always been handled differently from SSRS and SSAS.
    SSDT-BI for Visual Studio 2012 supports versions of SQL Server as follows:
    SSAS project can target SQL 2012 or lower
    SSRS project can target SQL 2012 or lower
    SSIS project can target only SQL 2012
    The following blog is for your reference:
    http://blogs.technet.com/b/dataplatforminsider/archive/2013/11/13/microsoft-sql-server-data-tools-update.aspx
    Since I have no environment to test this at the moment,
    I recommend that you submit feedback at
    https://connect.microsoft.com/SQLServer/. 
    Thank you for your understanding.
    Regards,
    Katherine Xiong
    TechNet Community Support

  • How to write a JAVA program to execute the SQL queries

    I have a database with queries in Microsoft Access, and I need to somehow write a Java program that executes each query, because I need to get the difference in time so I know how fast each query runs.
    Thank you

    You need a JDBC driver for MS Access to run this example:
    JDBCClient.java
    import java.util.Properties;
    import java.lang.String;
    import java.sql.DriverManager;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.BufferedOutputStream;
    import java.lang.System;
    import java.lang.Class;
    import java.sql.SQLException;
    import java.util.Date;
    import java.util.Locale;
    import java.text.DateFormat;
    import java.sql.Time;
    public class JDBCClient{
        private String DriverName = new String();
        private String DatabaseURL = new String();
        private String UserName = new String();
        private String Password = new String();
        private String SQLFile = new String();
        private String OutputFile = new String();
        private String Separator = new String();
        private boolean NeedColumnHeaders;
        private String EncodingParamName = new String();
        private String EncodingValue = new String();
        private String OutputEncoding = new String();
        private boolean AfterLastColumnSeparator;

        JDBCClient( String propfilename){
            System.out.println( "Initializing...");
            // Load the connection and output settings from the properties file
            Properties properties = new Properties();
            try{
                properties.load( new FileInputStream( propfilename));
            }
            catch( Exception e){
                System.out.println( "Error: " + e.toString());
                System.exit( 0);
            }
            DriverName = properties.getProperty( "DriverName");
            DatabaseURL = properties.getProperty( "DatabaseURL");
            UserName = properties.getProperty( "UserName");
            Password = properties.getProperty( "Password");
            SQLFile = properties.getProperty( "SQLFile");
            OutputFile = properties.getProperty( "OutputFile");
            Separator = properties.getProperty( "Separator");
            // Parse the yes/no flags
            if( properties.getProperty( "NeedColumnHeaders").compareToIgnoreCase( "yes") == 0 ||
                properties.getProperty( "NeedColumnHeaders").compareToIgnoreCase( "true") == 0)
                NeedColumnHeaders = true;
            else if( properties.getProperty( "NeedColumnHeaders").compareToIgnoreCase( "no") == 0 ||
                properties.getProperty( "NeedColumnHeaders").compareToIgnoreCase( "false") == 0)
                NeedColumnHeaders = false;
            else{
                System.out.println( "Invalid value for \"NeedColumnHeaders\" property (logical expected)");
                System.exit( 0);
            }
            EncodingParamName = properties.getProperty( "EncodingParamName");
            EncodingValue = properties.getProperty( "EncodingValue");
            OutputEncoding = properties.getProperty( "OutputEncoding");
            if( properties.getProperty( "AfterLastColumnSeparator").compareToIgnoreCase( "yes") == 0 ||
                properties.getProperty( "AfterLastColumnSeparator").compareToIgnoreCase( "true") == 0)
                AfterLastColumnSeparator = true;
            else if( properties.getProperty( "AfterLastColumnSeparator").compareToIgnoreCase( "no") == 0 ||
                properties.getProperty( "AfterLastColumnSeparator").compareToIgnoreCase( "false") == 0)
                AfterLastColumnSeparator = false;
            else{
                System.out.println( "Invalid value for \"AfterLastColumnSeparator\" property (logical expected)");
                System.exit( 0);
            }
            try{
                // CR+LF, written after every output record
                byte[] EOL = new byte[2];
                EOL[0] = 13;
                EOL[1] = 10;
                Class.forName( DriverName);
                Properties connInfo = new Properties();
                connInfo.put( "user", UserName);
                connInfo.put( "password", Password);
                if( EncodingParamName.length() != 0 && EncodingValue.length() != 0)
                    connInfo.put( EncodingParamName, EncodingValue);
                Connection connection = DriverManager.getConnection( DatabaseURL, connInfo);
                // Read the whole SQL script into one string
                FileInputStream in = new FileInputStream( SQLFile);
                byte[] buffer = new byte[in.available()];
                in.read( buffer);
                in.close();
                String SQL = new String( buffer);
                PreparedStatement statement = connection.prepareStatement( SQL);
                // Timestamp before execution: the difference d2 - d1 is the run time
                Date d1 = new Date();
                System.out.println( "Database connected at " + d1 + " Executing statement...");
                ResultSet resultSet = null;
                if( statement.execute())
                    resultSet = statement.getResultSet();
                else{
                    System.out.println( "Script updates " + statement.getUpdateCount() + " records.");
                    System.exit( 0);
                }
                // Write the result set to the output file as separated values
                ResultSetMetaData metaData = resultSet.getMetaData();
                BufferedOutputStream out = new BufferedOutputStream( new FileOutputStream( OutputFile));
                if( NeedColumnHeaders && metaData.getColumnCount() > 0){
                    String head = new String();
                    for( int i = 1; i < metaData.getColumnCount(); i++)
                        head += metaData.getColumnName( i) + Separator;
                    head += metaData.getColumnName( metaData.getColumnCount());
                    out.write( head.getBytes( OutputEncoding), 0, head.length());
                    out.write( EOL, 0, 2);
                }
                String record = new String();
                while( resultSet.next()){
                    record = "";
                    for( int i = 1; i < metaData.getColumnCount(); i++)
                        record += resultSet.getString( i) + Separator;
                    record += resultSet.getString( metaData.getColumnCount());
                    if( AfterLastColumnSeparator)
                        record += Separator;
                    out.write( record.getBytes( OutputEncoding), 0, record.length());
                    out.write( EOL, 0, 2);
                }
                out.close();
                Date d2 = new Date();
                System.out.println( "Done at " + d2);
                // Elapsed time printed as a Time value (the constant offsets the local timezone)
                System.out.println( "Executing time " + new Time( d2.getTime() - d1.getTime() - 10800000));
            }
            catch( ClassNotFoundException e){
                System.out.println( e.toString());
            }
            catch( SQLException e){
                System.out.println( e.toString());
            }
            catch( java.io.IOException e){
                System.out.println( e.toString());
            }
            System.exit( 0);
        }

        public static void main( String args[]){
            if( args.length == 1)
                new JDBCClient( args[0]);
            else
                System.out.println( "Usage JDBCClient <properties_file>");
        }
    }
    JDBCClient.properties (for an Oracle database):
    DriverName=oracle.jdbc.driver.OracleDriver
    DatabaseURL=jdbc:oracle:thin:@192.168.1.1:1521:test
    UserName=test
    Password=test
    SQLFile=test.sql
    OutputFile=test.csv
    Separator=*
    NeedColumnHeaders=yes
    EncodingParamName=
    EncodingValue=
    OutputEncoding=windows-1251
    AfterLastColumnSeparator=yes
    test.sql:
    select * from users;
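    Since the original question was about Microsoft Access, the same properties file might look like this when going through the old JDBC-ODBC bridge (driver class sun.jdbc.odbc.JdbcOdbcDriver, URL of the form jdbc:odbc:<DSN>). The DSN name MyAccessDB is only an illustration, and the bridge was removed in Java 8, where a third-party driver such as UCanAccess would be needed instead:
    DriverName=sun.jdbc.odbc.JdbcOdbcDriver
    DatabaseURL=jdbc:odbc:MyAccessDB
    UserName=
    Password=
    SQLFile=test.sql
    OutputFile=test.csv
    Separator=*
    NeedColumnHeaders=yes
    EncodingParamName=
    EncodingValue=
    OutputEncoding=windows-1251
    AfterLastColumnSeparator=yes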

  • Sql queries - slow database

    hi all,
    is there a general but specific way to generate a list of the SQL queries that are consuming the most resources and causing slow database performance?
    thanks.

    There are very few ways in this world that are "general but specific".
    You can use "general" tools like StatsPack and AWR.
    You can write "general" queries on V$SQL, V$SQLAREA, V$SQLSTATS.
    You can use "specific" methods like Tracing.
    You can use "specific" methods like Client side (or Application Server side) Logs.
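    As a hedged example of the "general query on V$SQLSTATS" option, something along these lines lists the top consumers since the statements entered the shared pool; order by buffer_gets or disk_reads instead of elapsed_time depending on which resource you care about:
    SELECT *
      FROM (SELECT sql_id,
                   executions,
                   elapsed_time / 1000000 AS elapsed_secs,
                   buffer_gets,
                   disk_reads,
                   sql_text
              FROM v$sqlstats
             ORDER BY elapsed_time DESC)
     WHERE ROWNUM <= 10;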

  • SQL Queries in Code V/s Stored Procedures

    Hi Friend,
    Can any one of you guide me with the following?
    What is faster: using SQL queries in Java code, or using stored procedures which are called from the code?
    I understand stored procedures are faster, and they definitely provide more maintainability.
    If anyone can give me links or resources which outline the pros and cons of using stored procedures versus SQL queries embedded in Java, it would be a great help.
    If there are any articles which prove either of the above is the preferred way, that would also help.
    Appreciate the effort in advance !!
    Thanks
    Gurudatt

    Well, one benefit of stored procs is that you "compile" them on the database, whereas you might build your query on the fly in Java... test coverage is important in order not to have such things as a typo in a column name...
    Still, if you change a table, you have to go through all the procedures in SQL, and likely some of your business objects as well... and trust me, that can be hell!
    It all depends on the use of the app...
    From my experience, stored procedures are much faster than built-on-the-fly SQL (and quite a bit faster than prepared statements, depending on the JDBC driver, the re-use of connections, etc.)...
    IMHO, you'd probably be wise to start off with prepared statements, and when the schema seems stable enough (i.e. unlikely to change), look for the slower queries and convert them to stored procedures.
    If you don't have to support several databases and are tight on performance, you can even include some logic in your stored procedure (e.g. update several tables based on various selects, etc.). The language is usually quite powerful, and that can save you the run-time of selecting, converting to objects, processing and updating (i.e. several round trips between the DB and the app)...
    Tschüss!
    Chris
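    To illustrate that last point about pushing logic into the procedure (one database call instead of several select/convert/update round trips from Java), here is a rough PL/SQL sketch; the tables and columns are invented purely for illustration:
    CREATE OR REPLACE PROCEDURE close_overdue_orders (p_customer_id IN NUMBER) AS
    BEGIN
      -- Close everything overdue for this customer...
      UPDATE orders o
         SET o.status = 'CLOSED'
       WHERE o.customer_id = p_customer_id
         AND o.due_date < SYSDATE;
      -- ...and refresh the denormalised counter in the same round trip
      UPDATE customers c
         SET c.open_order_count = (SELECT COUNT(*)
                                     FROM orders o
                                    WHERE o.customer_id = c.customer_id
                                      AND o.status <> 'CLOSED')
       WHERE c.customer_id = p_customer_id;
      COMMIT;
    END close_overdue_orders;
    /
    From Java this becomes a single CallableStatement call instead of several separate statements.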

  • Which SQL queries to count on OFO 9042 & Content Services 10120

    Dear experts,
    In customer context we urgently need to know which SQL queries to use :
    - to count on OFO 9.0.4.2 the number of workspaces
    - to count on OFO 9.0.4.2 the number of metadata elements (categories & attributes) attached to documents
    - to count on Content Services 10.1.2.0 the number of containers
    - to count on Content Services 10.1.2.0 the number of libraries
    - to count on Content Services 10.1.2.0 the number of folders
    - to count on Content Services 10.1.2.0 the number of groups
    - to count on Content Services 10.1.2.0 the number of categories
    - to count on Content Services 10.1.2.0 the number of attributes
    - to count on Content Services 10.1.2.0 the number of workspaces
    Thanks a lot for your kind answer.
    Best regards.
    Jean-Francois


  • View SQL queries during runtime

    In the JDev tutorials I noticed a running tab displaying the SQL queries that are being run by the application. How can I turn this on? Using 10.1.3.0.4.
    thanks

    Please see sections "5.5.3.2 Enabling ADF Business Components Debug Diagnostics" , "8.3.4 Debugging the Application Module Using the Business Components Tester", and "Chapter 24 Testing and Debugging Web Applications" of the ADF Developer's Guide for Forms/4GL Developers on the ADF Learning Center at http://www.oracle.com/technology/products/adf/learnadf.html for more useful information on this.

  • Generating XML from SQL queries and saving to a xml file?

    Hi there,
    I was wondering if somebody could help with regards to the following:
    Generating XML from SQL queries and saving it to an XML file?
    We want to have a stored procedure(PL/SQL) that accepts an order number as an input parameter(the procedure
    is accessed by our software on the client machine).
    Using this order number we do a couple of SQL queries.
    My first question: What would be our best option to convert the result of the
    queries to xml?
    Second Question: Once the XML has been generated, how do we save that XML to a file?
    (The XML file is going to be saved on the file system of the server that
    the database is running on.)
    Now our procedure will also have a output parameter which returns the filename to us. eg. Order1001.xml
    Our software on the client machine will then ftp this XML file(based on the output parameter[filename]) to
    the client hard drive.
    Any information would be greatly appreciated.
    Thanking you,
    Francois

    Hi
    Here is an example of some code that I am using on Oracle 8.1.7.
    The create_file procedure is the one that creates the file.
    The other procedures are utility procedures that can be used with any XML file.
    PROCEDURE create_file_with_root(po_xmldoc OUT xmldom.DOMDocument,
    pi_root_tag IN VARCHAR2,
                                            po_root_element OUT xmldom.domelement,
                                            po_root_node OUT xmldom.domnode,
                                            pi_doctype_url IN VARCHAR2) IS
    xmldoc xmldom.DOMDocument;
    root xmldom.domnode;
    root_node xmldom.domnode;
    root_element xmldom.domelement;
    record_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    xmldom.setDoctype(xmldoc, pi_root_tag, pi_doctype_url,'');
    -- Create the root --
    root := xmldom.makeNode(xmldoc);
    -- Create the root element in the file --
    create_element_and_append(xmldoc, pi_root_tag, root, root_element, root_node);
    po_xmldoc := xmldoc;
    po_root_node := root_node;
    po_root_element := root_element;
    END create_file_with_root;
    PROCEDURE create_element_and_append(pi_xmldoc IN OUT xmldom.DOMDocument,
    pi_element_name IN VARCHAR2,
                                            pi_parent_node IN xmldom.domnode,
                                            po_new_element OUT xmldom.domelement,
                                            po_new_node OUT xmldom.domnode) IS
    element xmldom.domelement;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    element := xmldom.createElement(pi_xmldoc, pi_element_name);
    child_node := xmldom.makeNode(element);
    -- Append the new node to the parent --
    newelenode := xmldom.appendchild(pi_parent_node, child_node);
    po_new_node := child_node;
    po_new_element := element;
    END create_element_and_append;
    FUNCTION create_text_element(pio_xmldoc IN OUT xmldom.DOMDocument, pi_element_name IN VARCHAR2,
    pi_element_data IN VARCHAR2, pi_parent_node IN xmldom.domnode) RETURN xmldom.domnode IS
    parent_node xmldom.domnode;                                   
    child_node xmldom.domnode;
    child_element xmldom.domelement;
    newelenode xmldom.DOMNode;
    textele xmldom.DOMText;
    compnode xmldom.DOMNode;
    BEGIN
    create_element_and_append(pio_xmldoc, pi_element_name, pi_parent_node, child_element, child_node);
    parent_node := child_node;
    -- Create a text node --
    textele := xmldom.createTextNode(pio_xmldoc, pi_element_data);
    child_node := xmldom.makeNode(textele);
    -- Link the text node to the new node --
    compnode := xmldom.appendChild(parent_node, child_node);
    RETURN parent_node; -- the element node that now contains the text
    END create_text_element;
    PROCEDURE create_file IS
    xmldoc xmldom.DOMDocument;
    root_node xmldom.domnode;
    xml_doctype xmldom.DOMDocumentType;
    root_element xmldom.domelement;
    record_element xmldom.domelement;
    record_node xmldom.domnode;
    parent_node xmldom.domnode;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    textele xmldom.DOMText;
    compnode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    create_file_with_root(xmldoc, 'root', root_element, root_node, 'test.dtd');
    xmldom.setAttribute(root_element, 'interface_type', 'EXCHANGE_RATES');
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mr', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'Joe', record_node);
    parent_node := create_text_element(xmldoc,'surname', 'Blogs', record_node);
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mrs', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'A', record_node);
    parent_node := create_text_element(xmldoc, 'surname', 'B', record_node);
    -- write the newly created dom document into the buffer assuming it is less than 32K
    xmldom.writeTofile(xmldoc, 'c:\laiki\willow_data\test.xml');
    EXCEPTION
    WHEN xmldom.INDEX_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Index Size error');
    WHEN xmldom.DOMSTRING_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'String Size error');
    WHEN xmldom.HIERARCHY_REQUEST_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Hierarchy request error');
    WHEN xmldom.WRONG_DOCUMENT_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Wrong doc error');
    WHEN xmldom.INVALID_CHARACTER_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Invalid Char error');
    WHEN xmldom.NO_DATA_ALLOWED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'No data allowed error');
    WHEN xmldom.NO_MODIFICATION_ALLOWED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'No mod allowed error');
    WHEN xmldom.NOT_FOUND_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not found error');
    WHEN xmldom.NOT_SUPPORTED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not supported error');
    WHEN xmldom.INUSE_ATTRIBUTE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'In use attr error');
    WHEN OTHERS THEN
    dbms_output.put_line('exception occurred ' || SQLCODE || SUBSTR(SQLERRM, 1, 100));
    END create_file;
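    If you are on a later release than 8.1.7, a much shorter route (sketched below, not tested against your schema) is to let DBMS_XMLGEN build the XML CLOB straight from the query and write it out with UTL_FILE. The directory object XML_OUT, the orders table and the file name are assumptions for illustration:
    DECLARE
      l_xml   CLOB;
      l_file  UTL_FILE.FILE_TYPE;
      l_pos   PLS_INTEGER := 1;
      l_len   PLS_INTEGER;
    BEGIN
      -- Turn the query result into an XML document (<ROWSET><ROW>...</ROW></ROWSET>)
      l_xml := DBMS_XMLGEN.GETXML('SELECT * FROM orders WHERE order_no = 1001');
      l_len := DBMS_LOB.GETLENGTH(l_xml);
      -- Write the CLOB to the server file system in chunks
      -- (DBMS_XMLGEN output contains newlines, so the UTL_FILE line limit is not normally hit)
      l_file := UTL_FILE.FOPEN('XML_OUT', 'Order1001.xml', 'w', 32767);
      WHILE l_pos <= l_len LOOP
        UTL_FILE.PUT(l_file, DBMS_LOB.SUBSTR(l_xml, 32000, l_pos));
        l_pos := l_pos + 32000;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /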

  • Test your queries online

    Test your queries online at
    http://www.mondrusccs.com/sql+.asp it is free!!!


  • Long Running SQL Queries

    We have a customer that runs our Crystal Reports, and they have complained that some of the reports cause long-running SQL queries which they have to kill manually from SQL management tools. We have changed the code so that we now dispose of the report source and viewer objects by calling the .dispose() function on them:
    reportSource.dispose();
    viewer.dispose();
    At first this seemed to work, but the customer later complained that the issue still occurs. Can anyone help with why some of these CR queries are still running long after the report is generated (with the correct set of data), and, according to the customer, some of them run for more than 2 hours? What else can we do to make sure that all CR-related queries cease to exist once the report is generated? Appreciate all the help.

    1. Run the report from within the Crystal designer. You should see the query being sent to the DB server. After the report is viewed and you close it in the designer, do you see the DB connection being dropped? If not, this may not be an SDK\API related issue.
    2. Try using latest set of CRJ Jars [here|http://downloads.businessobjects.com/akdlm/crystalreportsforeclipse/2_0/crjava-runtime_12.2.213.zip] in your application. Also update the crystalreportviewers folder with the new viewer found at this link.
    3. Make sure that you are calling reportClientDocument.close() at the end when user is done viewing the report.
    4. When you say long-running queries are seen in the DB, are you referring to a DB connection left open, or is it actually running a query and returning results?
