What is Level-1 in recursive queries

Hi, can someone explain to me what level-1 means in recursive queries, and direct me to some good documentation on it as well?
select
lpad(' ', level-1) || node_from || '->' || node_to
from
directed_graph
start with
node_from = 'F'
connect by
prior node_to = node_from;

LEVEL is a pseudo-column. That means you get it as one of the columns whenever you run recursive SQL in Oracle, even though your input tables don't contain that column. Other such pseudo-columns are provided to detect leaf-level nodes, cycles, etc.
The lpad function is used to provide an indentation that makes the output convenient to read. Whatever is returned is padded using the space character level-1 times. The minus one is so that the top level does not have any indentation at all. Try the query with and without it to see what it means.
If you are running it from SQL*Plus, you might want to name the column, then format it so that you can see it nicely.
By the way, in an arbitrary directed graph there may be cycles (i.e. paths you can take that will lead you back to the starting node), either by intent or as a data error. You might want to put in code to detect that as well.
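To make the LEVEL / lpad behaviour concrete, here is a small Python sketch that imitates the CONNECT BY walk over a hypothetical edge list (the node names are made up, not the original poster's data). It also shows the cycle guard mentioned above, similar in spirit to CONNECT BY NOCYCLE:

```python
# A toy re-creation of what Oracle's CONNECT BY walk produces:
# LEVEL is the depth of the current row (root = 1), and
# lpad(' ', level-1) indents each row by (level - 1) spaces.
# Hypothetical sample edges, not the original poster's data.
edges = [("F", "B"), ("F", "G"), ("B", "A"), ("B", "D")]

def walk(node_from, level=1, seen=frozenset()):
    """Return one output line per edge, indented by level-1 spaces."""
    rows = []
    for nf, nt in edges:
        if nf != node_from:
            continue
        if nt in seen:          # cycle guard, like CONNECT BY NOCYCLE
            continue
        rows.append(" " * (level - 1) + nf + "->" + nt)
        rows.extend(walk(nt, level + 1, seen | {nf}))
    return rows

for line in walk("F"):
    print(line)
```

The top-level rows get zero padding, each level below gets one more leading space, which is exactly the effect of the level-1 in the lpad call.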

Similar Messages

  • What release level of code do you get when you download a disk image (ISO/IMG) of a click-to-run Office 2013 product?

    I’ve asked a similar question to this on an Office Community Forum but nobody knows the answer.  So
    I was hoping that an IT professional responsible for installing and deploying Office 2013 might know.
    If you download an installation disk image (ISO/IMG) of a click-to-run Office 2013 product from office.microsoft.com/myaccount (Account Options:
    Install from a disc > I want to burn a disc), what release level of code do you get? 
    Now that Service Pack 1 is available, do you get a disk image with Service Pack 1 incorporated? 
    Or do you still get the RTM (Release to Manufacturing) level of code?

    Diane, thank you for the reply.
    I was hoping that someone who has downloaded a disk image since SP1 became available would be able to confirm that for a fact.
    I have just found a good description of Click-to-Run on Technet that I didn't know existed.  The link is: http://technet.microsoft.com/en-us/library/jj219427(v=office.15)
    The article states:
    "Click-to-Run products that you download and install from Microsoft are up-to-date from the start. You won’t have to download and apply any updates or service packs immediately after you install the Office product."
    This statement is probably true if you click the Install button on office.microsoft.com/myaccount and install "in real-time" over the Internet.  However, I know for a fact that it is not true if you order a backup disk (Account
    Options: Install from a disc > I want to purchase a disc).  All you get is RTM level code (15.0.4420.1017).
    So I still feel uneasy about what release level you get if you download a disk image.  I don't want to download what might be the better part of 1GB of code over the Internet only to discover it is back level.
    Addendum (6 April 2014): I decided to perform an experiment.  I looked at the size of the data on my backup disk which contains RTM level code.  It is 2.04GB.  I then went to office.microsoft.com/myaccount and clicked to download a disk
    image of my Office 2013 product.  The pop-up window that asks me whether I want to Run or Save the file informed me that the size of the file is 2.04GB!!!  I cancelled the download.  I strongly suspect that, if I had continued with the
    download, I would have received the same RTM level code I already have dating back to October 2012.  I think this is awful customer service.  To some extent, I can understand the logistical problems of replacing backup disks lying in warehouses
    waiting to be shipped.  But I cannot understand why disk images on download servers cannot be refreshed quickly.

  • What's the difference between these two queries? - for tuning purposes

    What's the difference between these two queries?
    I have a huge amount of data in each table, and it's taking such a long time (>5-6 hrs).
    Which one here is faster, and do we have any other options apart from those listed here?
    QUERY 1: 
      SELECT  --<< USING INDEX >>
          field1, field2, field3, sum( case when field4 in (1,2) then 1 when field4 in (3,4) then -1 else 0 end)
        FROM
          tab1 inner join tab2 on condition1 inner join tab3 on condition2 inner join tab4 on conditon3
        WHERE
         condition4..10
        GROUP BY
          field1, field2,field3
        HAVING
          sum( case when field4 in (1,2) then 1 when field4 in (3,4) then -1 else 0 end) <> 0;
    QUERY 2:
       SELECT  --<< USING INDEX >>
          field1, field2, field3, sum( decode(field4, 1, 1, 2, 1, 3, -1, 4, -1 ,0))
        FROM
          tab1, tab2, tab3, tab4
        WHERE
         condition1 and
         condition2 and
         condition3 and
         condition4..10
        GROUP BY
          field1, field2,field3
        HAVING
          sum( decode(field4, 1, 1, 2, 1, 3, -1, 4, -1 ,0)) <> 0;
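For what it's worth, the CASE expression in QUERY 1 and the DECODE in QUERY 2 compute the same per-row value, so they should not differ in results. A Python sketch of the shared mapping (hypothetical field4 values):

```python
# Both the CASE expression and the DECODE map field4 to the same score:
# 1 or 2 -> +1, 3 or 4 -> -1, anything else -> 0.
def score(field4):
    if field4 in (1, 2):
        return 1
    if field4 in (3, 4):
        return -1
    return 0

# The HAVING clause keeps only groups whose scores don't cancel out.
rows = [1, 2, 3, 4, 9]           # hypothetical field4 values in one group
total = sum(score(v) for v in rows)
print(total)                      # 0 -> this group would be filtered out
```

Since the two expressions are equivalent, any performance difference will come from join order, indexes, and access paths rather than CASE vs. DECODE.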

    My feeling here is that simply changing join syntax and the case-vs-decode choice is not going to give any significant improvement in performance, and as Tubby points out, there is not a lot to go on. I think you are going to have to investigate things along the lines of parallel query and index vs. full table scans, as well as any number of other performance tuning methods, before you will see any significant gains. I would start with the Performance Manual and then follow that up with the hard yards of query plans and stats.
    Alternatively, you could just set the gofast parameter to TRUE and everything will be all right.
    Andre

  • How can I find out what top level VI TestStand cannot find a sub-VI for?

    I am trying diligently to build a TestStand deployment and keep getting errors. TestStand is telling me that an error has occurred because it cannot find a sub-VI in a certain path. The problem is it does not tell me what VI it cannot find a sub-VI for. It just gives me the path that it is trying to find the sub-VI in. Is there any way I can tell what top level VI in a sequence TestStand is trying to find the sub-VI for? Thanks in advance for any help.

    Josh - I did not have any disable diagram structures in the sub-VIs that were giving me trouble. Nor did I have any read-only VIs. I mass compiled my entire directory structure...twice. I had no choice but to go through every single sequence and open every VI. Once opened, there were three that did not have the correct directory for a sub-VI and needed saving. There were no errors due to a LabVIEW mismatch. For some reason these three VIs did not save the sub-VIs' new directory, therefore causing trouble during my TestStand deployment build process. The thing I find odd about this is there are hundreds of other VIs that use these same sub-VIs in the same new directory that mass compiled fine. I am continuing my TestStand deployment build process and I will see how it goes from here. I definitely got further, after finding these three errors, than I have before. Thanks for the help.

  • I have an iMac S/N QP******VUW currently running OSX 10.5.8. 2.16 GHz Intel Core 2 Duo with 2GB 667 MHz RAM. To what OSX level can I update this model?

    I have an iMac S/N QP******VUW currently running OSX 10.5.8. 2.16 GHz Intel Core 2 Duo with 2GB 667 MHz RAM. To what OSX level can I update this model?
    <Edited By Host>

    Noel Kuck wrote:
    I tried adding more RAM but the Mac wouldn't even boot with 2x2GB installed. I figured that it was limited so I forgot about it. What's the advantage of adding it if it is not recognized?
    Chances are you either did not install it correctly or added the incorrect RAM.

  • What privilege level is required...

    We are looking to possibly delegate setting up AnyConnect to our Helpdesk (limited to ASDM, adding Apple UDIDs to an Access Policy).  The question I have is: what privilege level should be assigned that will allow them to add the UDID and limit (as much as possible) other changes?

    You will need to define local command authorization with a custom privilege level between 1 and 15 and assign the necessary commands to it (e.g. access-list, configure, cmd in your example). Then assign your Helpdesk usernames that privilege level.
    I don't believe you can restrict which access-lists they can edit - that's outside the scope of what you can do with ASDM (or the CLI). You'd have to move to CSM or an external portal with more role-based access control tools built in to get that granular.
    See this section of the ASDM Configuration Guide for details.

  • Note 233101.1: what is level 29 ?

    Dear Experts,
    A question about Note 233101.1:
    In the statement alter session set events 'immediate trace name heapdump level 29', what is the meaning of 29 for the level? I read about levels 1, 2, 4, 8, 16 and 32, but can't imagine what 29 stands for.
    Thanks for any input you may have.
    Regards,
    Guillaume
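The documented levels 1, 2, 4, 8, 16 and 32 are flag values, and a composite level is normally read as their sum, so 29 = 1 + 4 + 8 + 16 (the level-2 flag is not set). A quick Python sketch of the decomposition, purely as an illustration of the bitmask arithmetic, not Oracle-specific code:

```python
# Decompose a combined event level into power-of-two flags, assuming
# the heapdump levels combine additively as a bitmask (1, 2, 4, 8, ...).
def flags(level):
    out = []
    bit = 1
    while bit <= level:
        if level & bit:
            out.append(bit)
        bit <<= 1
    return out

print(flags(29))  # -> [1, 4, 8, 16]
```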


  • Mixing traditional and recursive queries

    Hello,
    I'm trying to solve this problem on a single SQL statement but I haven't been able to do it. Let me explain.
    I've a plain table where I'm doing a single select without problems.
    There is another recursive table that is used to retrieve the hierarchical structure for a selected key item that is used as a START WITH clause.
    The problem appears when, for each record selected on first query, I would like to retrieve the leaf records for it's structure.
    I assume that if I can use the key value obtained from the first query as a START WITH clause on the recursive on I'll get the required information.
    But I don't know how to do it.
    Are there anyone that can help me?
    Thanks.

    Hi,
    Thanks for posting the sample data; that's exactly what people need to help you.
    user12132557 wrote:
    If I launch SELECT ID,DESCRIP FROM ITEMS WHERE YEAR >= 2005
    I'll get last 3 records.
    For each one I would launch a select like
    SELECT ITEM, DESCRIP FROM TREE
    START WITH ITEM = column_id_last_query
    CONNECT BY PRIOR ITEM = PARENT
    It's fine to describe the results you want, but there's no substitute for actually posting the results you want.
    Is this what you're trying to produce, given the data you posted and the input parameter 2005?
        PARENT       ITEM DESCRIP
             5         50 Item 50
            51         53 Item 53
            51         54 Item 54
            51         55 Item 55
    Whenever you post formatted text (such as query results) on this site, type these 6 characters: {code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
    One way to get the results above in Oracle 9 is:
    SELECT     *
    FROM       tree
    WHERE      item NOT IN (
                       SELECT  parent
                       FROM    tree
               )
    START WITH parent IN (
                       SELECT  id
                       FROM    items
                       WHERE   year >= 2005
               )
    CONNECT BY parent = PRIOR item;
    Unfortunately, I don't have an Oracle 9 database, so I had to test it on a higher version.  I don't believe it uses any features that aren't available in Oracle 9.  In particular, it uses a clumsy and inefficient sub-query in the main WHERE clause instead of CONNECT_BY_ISLEAF, since that pseudo-column was only introduced in Oracle 10.
    Finally, I would like to know if this could be done in a single SQL query.
    Do you mean can it be done without sub-queries?  No, I don't think so.
    By the way, I'm using Oracle9i. I tagged my first post with this label. On next posts I'll write it in the text.
    Good move; there's an excellent chance that people won't notice the tags.  Post the full version, e.g. Oracle 9.2.0.2.0.  Sometimes the finer divisions are significant.
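A row in the posted result is a leaf because its ITEM never appears as anyone's PARENT, which is exactly the membership test the NOT IN sub-query performs. The same idea sketched in Python over hypothetical rows shaped like the TREE table:

```python
# A row (parent, item) is a leaf when its item never appears as the
# parent of any other row -- the test the NOT IN sub-query performs.
# Hypothetical sample rows, shaped like the TREE table.
tree = [(5, 50), (5, 51), (51, 53), (51, 54), (51, 55)]

parents = {parent for parent, item in tree}
leaves = [(parent, item) for parent, item in tree if item not in parents]
print(leaves)
```

On Oracle 10 and later, CONNECT_BY_ISLEAF gives you this test directly inside the hierarchical query.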

  • What's wrong with my recursive file search that add files to hsqldb?

    I'm pretty new to databases under Java. I chose hsqldb because I needed an embedded database. My program runs through all the files in a directory and stores the results to a database. Later on I can do advanced, fast searches on the results, plus carry out effects like permissions on files. I wrote a recursive search and I got the tutorials off of the hsqldb site. The following code works but I don't know if it's good on memory or whether there are some potential problems I can't see. Can someone take a look and tell me if this looks ok? Also, how can I find out whether a table exists or not?

    Can't believe I forgot to post the code:
    package databasetests;
    import java.io.*;
    import java.sql.*;
    import java.util.*;
    public class DatabaseTests {
      static Connection conn;
      public DatabaseTests(String x) {
          try {
            Class.forName("org.hsqldb.jdbcDriver");
            conn = DriverManager.getConnection("jdbc:hsqldb:file:" + x, "sa", "");
          } catch (Exception e) {
            e.printStackTrace();  // don't swallow the exception silently
          }
      }
      public void shutdown() throws SQLException {
        conn.close();
      }
      public synchronized void query(String x) throws SQLException {
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery(x);
        dump(rs);
        st.close();
      }
      public synchronized void update(String x) throws SQLException {
        Statement st = conn.createStatement();
        int i = st.executeUpdate(x);
        if (i == -1) System.out.println("db error: " + x);
        st.close();
      }
      public static void dump(ResultSet rs) throws SQLException {
        ResultSetMetaData meta = rs.getMetaData();
        int colmax = meta.getColumnCount();
        while (rs.next()) {
          for (int i = 0; i < colmax; ++i) {
            Object o = rs.getObject(i + 1);
            System.out.println(o.toString() + " ");
          }
          System.out.println(" ");
        }
      }
      public static void recursiveSearch(String x, int level) {
        File dir = new File(x);
        File[] curDir = dir.listFiles();
        if (curDir == null) return;  // not a directory, or not readable
        for (int a = 0; a < curDir.length; a++) {
          String tempFile = curDir[a].toString();
          System.out.println(tempFile);
          try {
            // Note: opening a fresh connection per file is expensive; reusing
            // one connection would be faster. Also, concatenating file names
            // into SQL is injection-prone; PreparedStatement is safer.
            DatabaseTests db = new DatabaseTests("database");
            db.update("INSERT INTO tblFiles (file_Name) VALUES ('" + tempFile + "')");
            db.shutdown();
          } catch (Exception ex2) { ex2.printStackTrace(); }
          if (curDir[a].isDirectory()) {
            // limit recursion depth by counting path components
            StringTokenizer tokenString = new StringTokenizer(tempFile, "\\");
            if (tokenString.countTokens() <= level) recursiveSearch(tempFile, level);
          }
        }
      }
      public static void main(String[] args) {
        DatabaseTests db;
        try {
          db = new DatabaseTests("database");
        } catch (Exception ex1) {
          ex1.printStackTrace();
          return;
        }
        try {
          //db.update("CREATE TABLE tblFiles (id INTEGER IDENTITY, file_Name VARCHAR(256))");
          db.query("SELECT file_Name FROM tblFiles where file_Name like 'C:%Extreme%'");
          db.shutdown();
        } catch (Exception ex2) {
          ex2.printStackTrace();
        }
        int level = 5;
        String tempFile = "C:/Program Files";
        //recursiveSearch(tempFile, level);
      }
    }
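On the side question of how to find out whether a table exists: with JDBC the portable route is Connection.getMetaData().getTables(...), and hsqldb also exposes system tables you can query (the exact names vary by hsqldb version, so check its docs). The same catalog-lookup idea, sketched in Python against sqlite purely as an illustration:

```python
import sqlite3

# Illustrative only: check table existence via the database's own catalog.
# (With JDBC you'd use Connection.getMetaData().getTables(...) instead;
# sqlite keeps its catalog in the sqlite_master table.)
def table_exists(conn, name):
    cur = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (name,))
    return cur.fetchone() is not None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblFiles (id INTEGER PRIMARY KEY, file_Name TEXT)")
print(table_exists(conn, "tblFiles"))   # True
print(table_exists(conn, "missing"))    # False
```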

  • What kernel level is the newest Solaris 10 [ 11/06 ]

    Sorry for a seemingly trivial question. I just feel very frustrated that I cannot get a very simple answer from the overly complicated Sun web sites. I got an official email telling me to download the Solaris 10 update. Of course I logged in, and it is useless and does not work [ as usual ]. If you do obtain the Solaris 10 11/06 update, install it, and do a "uname -a", then what does your kernel patch level display as? Any insights appreciated.
    -gregoire

    I think my problem is the result of the way the .ISO was burned to DVD. I can't remember exactly how it was done, but it could've been the DVD was burnt too quickly for the media for example. Looking at the DVD, most of the files appear 'empty' - they are the right size, but contain nothing but white space. If I mount the .iso image as:
    mount -F hsfs -o ro `lofiadm -a /var/spool/pkg/sol-10-GA-sp-dvd.iso` /mnt
    and look at the files, they contain the correct data. Using 'liveupdate', I can perform an upgrade by using the location where the .iso is mounted.
    So I'm pretty confident my problem lies with the way the DVD was created, so I'm going to try some different media and make sure the write speed is correct this time!
    Iain

  • SQL 2008 R2 standard reports - question about server dashboard and what it refers to as "adhoc" queries

    So I have started looking at some of the standard reports available with SSMS and in particular, the "server dashboard".   One thing that caught my attention was the charts that referred to "adhoc" queries.  I wondered how
    those were being defined, and as I expected, they are most likely those statements not in a stored procedure.  This was answered in
    this thread.  
    On a particular server I'm interested in, this % value is well over 50%, and the primary applications that interact with the databases on this system are Microsoft-based products such as Dynamics and another commercial application which I know uses hundreds
    of stored procedures.  Now, I'm sure there are some SQL statements being used, possibly "dynamic" type SQL, by these applications, but would the metrics really be skewed this far?
    What these charts tell me, with the "adhoc" statement types pushing CPU and Disk I/O %  this far, is that there is a BUNCH of these statements being run against the various databases.  The disk I/O might be a bit off since I only recently
    added dozens of missing indexes, but my question is this:
    With the "adhoc" type statements taking up this much of the CPU and Disk resources, can we say that there are likely a lot of these going on ?  I suppose one way to find out is to launch profiler and listen in while there is moderate
    to heavy user activity.
    Thoughts?

    Hello,
    Ad hoc queries are DML statements without parameterization. Sometimes users run these statements directly, and sometimes these statements come from applications.
    Use Query #2 on the following link to identify those adhoc queries:
    http://mssqlfun.com/2013/04/08/dmv-5-queries-runing-are-adhoc-or-proc-single-or-multi-use-sys-dm_exec_cached_plans/
    Identify if those adhoc queries belong to specific users or applications.
    One of the options you have to deal with this is the following configuration option:
    sp_CONFIGURE 'show advanced options', 1
    RECONFIGURE
    GO
    sp_CONFIGURE 'optimize for ad hoc workloads', 1
    RECONFIGURE
    GO
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • What transaction level should I use?

    Hi all;
    I have a nightly daemon that bills all customers whose monthly bill is due that night. For a given customer, in a transaction I verify that they still have not been billed, run the charge through PayPal, update their record based on the results of the charge,
    and then commit the transaction.
    I do a distinct transaction per customer, but the PayPal part can take a while (we have the cloud hit of PayPal times the cloud hit of SQL Azure). The only time there will be a rollback is if an exception is thrown. So if I write my code correctly - never.
    During this time we could well have some selects against that same record. But it's very unlikely we'll have an update.
    What IsolationLevel will be best for this case?
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    Hi,
    Choosing the appropriate isolation level depends on balancing the data integrity requirements of the application against the overhead of each isolation level. The highest isolation level, serializable, guarantees that a transaction will retrieve exactly
    the same data every time it repeats a read operation, but it does this by performing a level of locking that is likely to impact other users in multi-user systems. The lowest isolation level, read uncommitted, may retrieve data that has been modified but not
    committed by other transactions. All of the concurrency side effects can happen in read uncommitted, but there is no read locking or versioning, so overhead is minimized.
    More information :
    http://msdn.microsoft.com/en-us/library/ms189122.aspx
    http://social.technet.microsoft.com/wiki/contents/articles/1639.handling-transactions-in-windows-azure-sql-database.aspx
    Regards,
    Mekh.

  • What do levels "mean"?

    I understand members are awarded points for answering questions or adding insight to topics, but what do you gain when you go up a level? Special privileges? Free stuff? (yeah right) New users also need to be aware that they need to mark posts as helpful or solved, I see too many people not do this, but I feel like asking them is wrong. I never forget, so any help will be rewarded.

    At Level 2 you receive the ability to "notify" posts - click a link to send a message to the Hosts about an inappropriate or duplicate post. Levels 4 & 5 have access to "The Lounge."
    Notice the "void" aka Level 3. Sort of like the middle child.
    Regarding "helpful"/"solved", if a solved post is left "unanswered", or no "solved"/"helpful" is awarded where one is warranted, I usually respond in this fashion. I think the majority of those seeking help don't notice the "helpful"/"solved" option. It seems obvious to me, but I've been aware of this issue for some time. Certainly, the way it's presented could be different, and perhaps encourage more cooperation. This has been debated in this forum for the past few months. Eventually, I guess the system will change. Glad you brought up the subject.

  • At what run level to power off

    Is run level 0 (the ok prompt) the best "place" to be when powering off the box (Sunfire V280)?
    Thanks

    Yes, that would be a good choice.
    Here is the Sunfire 280R in the Sun System Handbook:
    http://sunsolve.sun.com/handbook_pub/Systems/SunFire280R/SunFire280R.html
    There is a link on that resource page for the system's documentation.
    The User Guide and the Service Manual both discuss this topic for you.
    ( see page 61 of the Owner's Guide or page 5 of the Service Manual )

  • What is the transaction to test queries?

    Hi,
    I would like to test queries without using BEx. Is there a transaction I can execute in a BW system?
    Thanks,

    Re-hi,
    After launching my query, i click on input help for the Product ID field, and get this error:
    "Storage form of product ID not yet defined in Customizing".
    Is the BI Content Developer responsible for this, or as indicated, System Admin?
    Thanks again,
    Long text:
    Storage form of product ID not yet defined in Customizing
    Message no. COM_PRODUCT_SETTINGS000
    Diagnosis
    You have specified a purely numeric product ID. Since product IDs of this kind can be saved
    in different ways, the way required needs to be defined in your system.
    Procedure
    Ask your system administrator to define how purely numeric product IDs are to be saved.
    Procedure for System Administration
    The system could not convert the product ID to the database format because it has not yet
    been defined whether product IDs are to be saved lexicographically or not. Make the setting
    required in Customizing for Products in Define Output Format and Storage Form of Product IDs.

Maybe you are looking for

  • Unable to change stock posting date at usage decision while inspecting HUs

    If we were using materials without WMS it's simple: there's a button in the screen for stock posting by which we're able to change document date and posting date; but we're using WM and the screen is slightly different: the button I'm referring i

  • OO ABAP --urgent need

    hi all please send me material or pdf documents or links for learning ABAP objects starting from beginning to indeep of OO ABAP. It is very urgent.... please send me information on all types of ALVs also..

  • Connecting AX to a quicksilver G4

    My DSL modem is connected to a Linksys wireless router. I want to be able to be wireless with my Power Mac. Can I just simply connect the AX directly to the quicksilver via the available Ethernet port? Or do I absolutely need to equip the G4 with an

  • USB drive to Time Capsule : slooooow

    Hello, I have a Time Capsule with a WD usb drive attached to it (FAT32). My Macbook is connected via ethernet to the Time Capsule for max speed. When I copy a 700MB file to the Time Capsule, it takes about 6 minutes. When I copy the same file to the

  • Publish site changes is grayed out???

    I updated my iweb site and saved it but it won't let me publish the update? What is happening??? The Publish site changes is grayed out. I checked and all the info for the site is correct. tested the connection and everything is OK, but I still can't