Recursion in C, efficiency vs. elegance

From what I have been told, recursive functions are much loved in C programming due to their "elegance" and the compactness of their code, and because they can be easier to understand than their iterative equivalents.
On the other hand, recursive functions are spectacularly inefficient in C. They use more memory with each level of recursion, and the overhead of calling the function over and over again also slows things down. This seems to me like it's mostly a losing proposition.
There is the readability/maintainability perspective... But C conventions already encourage terseness and efficiency over readability (at least if the K&R text is anything to judge by). If your code is already compact and mostly inscrutable, what's the point of making it a little more readable at the cost of completely destroying its performance? Is there something here I'm not understanding?
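As a concrete (and hedged) illustration of the trade-off being discussed - this example is mine, not from the original thread - here is the same computation written both ways:
#include <stdio.h>
/* Recursive version: each call pushes a new stack frame, so memory use
   grows with n and every level pays the function-call overhead. */
unsigned long fact_rec(unsigned int n)
{
    return (n <= 1) ? 1UL : n * fact_rec(n - 1);
}
/* Iterative version: constant stack space, a single loop. */
unsigned long fact_iter(unsigned int n)
{
    unsigned long r = 1;
    for (unsigned int i = 2; i <= n; i++)
        r *= i;
    return r;
}
int main(void)
{
    printf("%lu %lu\n", fact_rec(10), fact_iter(10));
    return 0;
}
One caveat: an optimizing compiler (for instance gcc or clang at -O2) can often turn simple recursion like this into a loop on its own, so the measured gap may be smaller than expected; and for genuinely recursive structures such as trees, the recursive version is often the clearer choice.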

Grinch wrote:A smart solution bringing better efficiency is a solution which solves its problem faster, thus using fewer clock cycles. What do you think profiling your code is about?
This thread is discussing whether we should structure code so that it is efficient or maintainable. I'm stating that the biggest gains come from how you solve the problem; in only a few cases is it necessary to optimise small sections at the expense of clarity. 99% for maintenance, 1% for efficiency. I enjoy profiling; it is interesting to discover what is taking the time and to consider ways of improving it.
Grinch wrote:I don't know what 'tranced' means, and also what does 'tuned to perfection' mean in this context if it does not involve using a 'better algorithm'???
Tranced -> easily outperformed. Once the optimum algorithm is being used, other improvements are marginal. Yes, badly written code may have a large effect.
Grinch wrote:There are certainly areas in programs where there's absolutely no point spending time optimizing, like parts which are called very seldom, obviously you will focus your optimization on areas which are heavily used during the programs execution, again profiling allows you to easily pinpoint these areas.
In complete agreement - this will be the majority of the code in a reasonably sized application.
Grinch wrote:
I don't see this happening. Dividing functions into smaller segments helps in organizing the code more efficiently, but really the compiler's register allocator should not have any particular problem with optimizing a larger function rather than several small ones, unless I'm mistaking what you wrote here. For each call there's also the overhead of potential cache misses, stack management etc.; there's a reason there's an 'inline' keyword (although compiler heuristics have become better at automatically determining when to inline, particularly when the function is defined as 'static' or the compiler is using link-time optimization, which allows it to optimize the program as a whole rather than on a file-by-file basis).
You said there have been experiments, can you point me to these?
There is a limited number of registers, and local proximity of data/jumps results in efficiency gains. The trainers of a functional programming course specifically performed this experiment to show that breaking large sequential blocks into functions actually improves performance.
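A small hedged sketch of that point (mine, not from the course mentioned above): splitting a hot loop body into small static helpers usually costs nothing, because the compiler is free to inline them:
#include <stddef.h>
/* Small helper; 'static' keeps it local to the file, which makes it
   easy for the compiler to inline at -O2. */
static double scale(double x, double factor)
{
    return x * factor;
}
double sum_scaled(const double *v, size_t n, double factor)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += scale(v[i], factor);   /* typically inlined, so no call overhead */
    return total;
}
Whether the split actually improves performance (better register allocation, better instruction-cache locality) depends on the compiler and the code, so it is worth measuring rather than assuming.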
Grinch wrote:Optimized code does not equal non-maintainability. I'm not sure what you mean by 'flexibility', but certainly in performance terms it's most often the case that a special solution tailored to the problem at hand will beat a generic solution, often by a wide margin, depending on just how optimized the 'special solution' is and, of course, on what the problem is and what possibilities for optimization it offers.
Also agree with you on this point. Flexibility: this is where the same code can easily be reused without modification to perform a similar task. When software is configured, tested and scrutinised, it is better if it does not have to be updated - see the factory or decorator patterns etc.

Similar Messages

  • How to cache VI ref in recursive chain?

    I am writing a recursive VI - a VI that calls itself.
    The VI generates an image by recursively subdividing the image rectangle until it finds a rectangle small enough to work on.
    (It's a mandelbrot image generator, if you must know... :-)
    It calls itself by opening a VI reference to itself (using OPEN VI REFERENCE), calling that via CALL BY REF, and then closing the reference.
    These calls might get to a level of 20 or 25 deep.
    It works. 100 %.
    In thinking that I could speed it up a bit, I tried to cache the VI reference. I was thinking that each OPEN VI REF was having to do a whole bunch of stuff. If I could just open the reference once, and store it, then I wouldn't have to do all that stuff every time.
    So I tried that. I opened the reference ONCE, and stored it in a global variable. When I need to recursively call myself, I used that reference.
    No Go.
    The recursive call returns an error message:
    Error 1042 occurred at Call By Reference Node
    in Calc Mandel Rectangle.vi
    ->Calc Mandel Rectangle.vi
    ->Get Mandelbrot Image.vi
    ->Mandelbrot Test.vi
    Possible reason(s):
    LabVIEW: Attempted recursive call.
    Notice that the error occurs on the SECOND level down, not the first.
    My hypothesis is that a ref to a VI running at level 1 is DIFFERENT from a ref to the same VI called from itself (running at level 2).
    I compared the global ref to a local one, and it said they were different.
    When you generate the ref each time, it accounts for this, but when you stash the ref before the first call, it is only valid for the first level.
    I have tried opening the reference with OPTIONS = 8, which is supposed to mean prepare for recursive call, but that makes no difference.
    I'm on LV7.1, OS X.
    QUESTIONS:
    1... Is there a way to stash a VI ref so that it applies to any level?
    2... Does the OPEN VI REF function take enough time to worry about?
    3... Is there something I am missing in this approach?
    Message Edited by CoastalMaineBird on 01-24-2006 12:36 PM
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

    1. No. Each level of recursion must open a separate VI reference to work with a new instance of the VI.
    OK, that's what I surmised by observation. Since the error message came from TWO levels deep, it seems that the first level was OK, but it choked when trying to go deeper.
    2. The VI is already in memory, so the time penalty is less severe to reinstantiate it than to open it from file.
    This is true if you DO NOT CLOSE it. Normally I'm in the habit of closing the ref when I'm done with it. In this case, I would subdivide a rectangle into 4 quadrants, open a ref, call myself 4 times, and close the ref. If I omit the CLOSE part, the time penalty is substantially reduced (probably because the next OPEN REF re-uses the same info).
    However, if you recursively call the VI in a loop, you can open a VI reference outside the loop and reuse it in the loop, closing it afterwards outside the loop.
    Yes, I was doing that. But the CLOSE function apparently causes the next OPEN REF to have to do all the work over again.
    Usually when the user doesn't notice, I don't worry about time optimization.
    Sensible, but when a C version produces a picture in about 1/100 the time, I would call that noticeable. And I am trying to produce a QuickTime movie of these images, so I'd like to generate them as fast as possible.
    3. Recursion in LabVIEW is not very efficient.
    I am coming to realize that. I have used it to delete a disk folder hierarchy, and to traverse a VI and its children, and its children's children, etc., but I wasn't concerned with speed in either case. Now I am.
    There is always a way to code your VI in such a way that it doesn't use recursion by building a data stack (data for each level in an array stored in a shift register of your loop).
    I will look into that. This project was intended to introduce my apprentice programmer to the idea of recursion, and it's an elegant use of it in theory. However, in practice it's dog slow. (For the C-minded, a rough sketch of that data-stack idea appears right after this reply.)
    Thanks for your thoughts.
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
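    Hedged illustration (not from the original thread): the "data stack" suggestion above is the standard way to turn this kind of recursion into a loop. In C terms it looks roughly like the sketch below; the rectangle type, the render_rect worker and the "small enough" test are hypothetical names invented for this example.
    #include <stdlib.h>
    typedef struct { int x, y, w, h; } Rect;
    /* Hypothetical worker that processes a rectangle that is small enough. */
    void render_rect(Rect r);
    void render_iterative(Rect start)
    {
        size_t cap = 64, top = 0;
        Rect *stack = malloc(cap * sizeof *stack);
        if (!stack) return;
        stack[top++] = start;                      /* push the whole image */
        while (top > 0) {
            Rect r = stack[--top];                 /* pop the next rectangle */
            if (r.w <= 8 && r.h <= 8) {            /* small enough: do the work */
                render_rect(r);
                continue;
            }
            if (top + 4 > cap) {                   /* grow the explicit stack */
                Rect *tmp = realloc(stack, 2 * cap * sizeof *stack);
                if (!tmp) { free(stack); return; }
                stack = tmp;
                cap *= 2;
            }
            int hw = r.w / 2, hh = r.h / 2;        /* push the four quadrants */
            stack[top++] = (Rect){ r.x,      r.y,      hw,       hh       };
            stack[top++] = (Rect){ r.x + hw, r.y,      r.w - hw, hh       };
            stack[top++] = (Rect){ r.x,      r.y + hh, hw,       r.h - hh };
            stack[top++] = (Rect){ r.x + hw, r.y + hh, r.w - hw, r.h - hh };
        }
        free(stack);
    }
    The stack only grows by a few entries per level of subdivision, and there is no per-call overhead; the shift-register approach described above is the same idea expressed with an array instead of malloc.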

  • Detecting changes

    hi folks,
    I have a Swing application that has functionality to save the input. I want to know the common way to detect modification of input. If there's a modification, I want to prompt the user to save when he/she is closing down the app. I'm thinking about using a listener on all input components (JTextField, JList, JComboBox, JTable, JButton). But the listener would be different for each type of component: ActionListener for JButton, CaretListener for JTextField, ListSelectionListener for JList, etc. So I want to write a class that implements all these listeners, and if any of those events is triggered, then I know a modification has been made.
    My question is: is there a better/more efficient/more elegant way of detecting changes in my app?
    If not, any comments on my approach?
    thanks!!

    You can write a single class implementing DocumentListener, ActionListener, ItemListener, etc.
    But you still have to add the listener to all the components.
    I'm thinking you can write a method void register(JComponent pane). Depending on the class (JTextField, JCheckBox, etc.) you add the class as a DocumentListener, as an ActionListener, and so on. If the component is a container, you can recursively invoke the method for all of its children.

  • How to save sections of a single XML Document to multiple tables ?

    Firstly, I apologise for the long e-mail but I feel it's necessary in order to clarify my problem/question.
    The XML document representation below stores information about a particular database. From the information in the XML document you can tell that there is a single database called "tst" which contains a single table called "tst_table". This table in turn has two columns called "CompanyName" & "Country".
    I want to use Oracle's XML SQL Utility to store this information in three separate database tables. Specifically, I want to store the information pertaining to the database (i.e. name etc.) in one table, the information pertaining to the table (name, no. of columns etc.) in another, and the information pertaining to the columns (name, type etc.) in yet another table.
    I have seen samples where an entire XML Document is saved to a database table but I cannot find any examples where different sections of a single XML Document are saved into different database tables using the XML SQL Utility.
    Can you please tell me the best approach to take in order to accomplish this? Does it involve creating an XMLDocument, extracting the relevant sections as XMLDocumentFragments, retrieving the String representations of these fragments, and passing those strings to the OracleXMLSave.insertXml() method?
    Is this the best approach to take, or are there any other, perhaps more efficient or elegant, ways of doing this?
    Thanks in advance for your help
    - Garry
    <DATABASE id="1" name="tst">
      <TABLES>
        <TABLE name="tst_table">
          <NAME>Customers</NAME>
          <COLUMNS>
            <COLUMN num="1">
              <COLID>2</COLID>
              <COLNAME>CompanyName</COLNAME>
              <COLTYPE>Text</COLTYPE>
            </COLUMN>
            <COLUMN num="2">
              <COLID>3</COLID>
              <COLNAME>Country</COLNAME>
              <COLTYPE>Text</COLTYPE>
            </COLUMN>
          </COLUMNS>
        </TABLE>
      </TABLES>
    </DATABASE>

    See this thread;
    {thread:id=2180799}
    Jeff

  • XSLT Transformation (Error message)

    Hi,
    I want to create an XSLT transformation for XML. I have a string field where I have included the content of this XML.
    When I want to call the transformation I get the following error message in the debugger:
    "Bei XML-ABAP Transformation wurde das Element abap erwartet."
    In English this means that the element abap was expected for the XML-ABAP transformation.
    Here is my xslt transformation:
    <xsl:transform version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:sap="http://www.sap.com/sapxsl"
    xmlns:asx="http://www.sap.com/abapxml">
    <xsl:output method="xml"/>
    <xsl:template match="*">
    <xsl:if test="@DateTime">
        <xsl:value-of select="@DateTime"/>
    </xsl:if>
    <xsl:if test="@ReaderID">
        <xsl:value-of select="@ReaderID"/>
    </xsl:if>
    </xsl:template>
    </xsl:transform>
    In this transformation I want to get the values of DateTime and ReaderID, but I'm not sure if the coding of my XSLT is really OK.
    Thanks in advance for your help.
    Muhammet

    Hi Muhammet,
    I think now I understand your problem: 
    You want a list of pmlcore:Tag data, where each item contains the information from pmlcore:DateTime and the ReaderID element - but neither element is a child of pmlcore:Tag.
    Of course this is possible in XSLT. For this there is a dedicated language called XPath that addresses elements within the whole XML tree. You have already used XPath within the select attribute of xsl:for-each, xsl:value-of and xsl:template.
    There are a lot of possibilities to solve this problem in XPath:
    <xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:pmlcore="urn:autoid:specification:interchange:PMLCore:xml:schema:1" xmlns:pmluid="urn:autoid:specification:universal:Identifier:xml:schema:1">
      <xsl:template match="/">
        <asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
          <asx:values>
            <ROOT>
              <LIST>
                <xsl:for-each select="/pmlcore:Sensor/pmlcore:Observation/pmlcore:Tag">
                  <item>
                     <ID><xsl:value-of select="string(pmluid:ID)"/></ID>
                     <DATATIME><xsl:value-of select="string(../pmlcore:DateTime)"/></DATATIME>
                     <READERID>
                        <xsl:value-of select="string(../pmlcore:Data/pmlcore:XML/ReaderID)"/>
                     </READERID>
                  </item>
                </xsl:for-each>
              </LIST>
            </ROOT>
          </asx:values>
        </asx:abap>
      </xsl:template>
    </xsl:transform>
    In this case I changed the result of the transformation (i.e. the ABAP structure). For the sake of simplicity I used a local structure, but you can also create those types within the DDIC. We deserialize a list. Within each item there are the elements ID, DATATIME and READERID. Because we want an entry for each tag we use xsl:for-each. The ID element gets the value of the pmluid:ID child. DATATIME and READERID are not within the same pmlcore:Tag element because they belong to pmlcore:Observation. In XPath we can address this parent element using "..", and so the XPath expression for the wanted element is ../pmlcore:DateTime resp. ../pmlcore:Data/pmlcore:XML/ReaderID.
    And this is the result:
    <asx:abap xmlns:asx="http://www.sap.com/abapxml" xmlns:pmlcore="urn:autoid:specification:interchange:PMLCore:xml:schema:1" xmlns:pmluid="urn:autoid:specification:universal:Identifier:xml:schema:1" version="1.0">
      <asx:values>
        <ROOT>
          <LIST>
            <item>
              <ID>300833B2DDD9048035050000</ID>
              <DATATIME>2008-07-04T17:41:26</DATATIME>
              <READERID>Readpoint 1</READERID>
            </item>
            <item>
              <ID>3114F4E4E4BA2C2B15000000</ID>
              <DATATIME>2008-07-04T17:41:26</DATATIME>
              <READERID>Readpoint 1</READERID>
            </item>
          </LIST>
        </ROOT>
      </asx:values>
    </asx:abap>
    Please note that the values for DATATIME and READERID are the same because there is only one observation in your sample document.
    Of course there are even more possibilities. Now I show a completely different and more generic approach that produces the same result:
    <xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:pmlcore="urn:autoid:specification:interchange:PMLCore:xml:schema:1" xmlns:pmluid="urn:autoid:specification:universal:Identifier:xml:schema:1">
      <xsl:template match="/">
        <asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
          <asx:values>
            <ROOT>
              <LIST>                         
                <xsl:apply-templates select="/pmlcore:Sensor/pmlcore:Observation/pmlcore:Tag"/>
              </LIST>
            </ROOT>
          </asx:values>
        </asx:abap>
      </xsl:template>
      <xsl:template match="/pmlcore:Sensor/pmlcore:Observation/pmlcore:Tag">
        <item>
          <ID><xsl:value-of select="string(pmluid:ID)"/></ID>
           <DATATIME><xsl:value-of select="string(../pmlcore:DateTime)"/></DATATIME>
           <READERID><xsl:value-of select="string(../pmlcore:Data/pmlcore:XML/ReaderID)"/></READERID>
        </item>
      </xsl:template>     
    </xsl:transform>
    Here we define two templates. The first matches the root element and then starts the processing using xsl:apply-templates for a specific set of nodes. The XSL processor then selects those nodes and looks for a template to apply - i.e. the second template in our case. With these techniques you can create very generic transformations, and with the help of recursion you can write very elegant and powerful transformations.

  • Rotation problem after editing in Photoshop

    I'm using Lightroom 1.3 and Photoshop CS3 with all the current updates. This problem was the same in LR1 and PS CS2.
    I have a mix of orientations in the JPEGs in my library - portrait and landscape. When the photos are imported into LR they appear correct. If I select a bunch of them, open them in PS (using Edit in Photoshop > Edit Originals), run a noise filter, save them, and then go back to LR, the previews for the portrait images are rotated so that all the previews show as landscape.
    The portrait images display correctly (except for the fact that they have been rotated) in the Library module - both grid and loupe views. In the Develop module, the image aspect is displayed as portrait but the image is a distorted landscape - the image is rotated and then distorted to fill the portrait-oriented space. In the Print module, that same distorted image fills the portrait-oriented page. To be clear, instead of a portrait image appearing in the correct orientation on a portrait page preview, the image is rotated 90 degrees and then stretched to fill the page.
    There is a badge on the preview indicating that the metadata is out of sync with the file.
    If I rotate the image in LR, the badge goes away and the image appears portrait and undistorted in the Library module's grid and loupe views. The Develop module shows the portrait image in the correct rotation, but it has been stretched to a landscape aspect. The Print module shows that distorted image rotated to fit the portrait page.
    The only way to correct this is to "Read Metadata from file". This resets all the orientation information so that it displays correctly in LR, but all the tags and IPTC information gets erased.
    This is most of the reason why I was writing all the metadata into the files until now. I would write the metadata before editing in PS, do my PS edits, and then back in LR I'd "Read the Metadata" from the files to set everything correctly in LR. But of course now that I'm working with Leopard I get screwed if I do this, because my files will crash the Finder.
    To top it all off, I've run into the "...cannot be saved because the file is in use" problem with some files I opened last night.
    Clearly none of these issues are being addressed in Adobe or Apple's updates. Is there a reasonable workaround that will let me:
    1. set metadata and tags and know that they won't either get lost or cause problems
    2. have image orientation respected and displayed correctly
    3. not crash any of the three critical applications on this computer - Lightroom, Photoshop, and Finder.

    I really appreciate your explaining this Jao.
    I'm going to explain how I got into this situation, in the hopes that it may prove useful to someone (I'm looking at you, Adobe) who wants to understand how the app is being used in the real world.
    So as not to break up the narrative, let me just say that I'd be happy (ecstatic, actually) if I could use Lightroom for my entire workflow, but the noise reduction filter in LR is just nowhere near as good as NoiseNinja. (A problem shared by LR, PS, and Aperture.) If I could run NN from inside LR, that would rock.
    Oh, and I shoot JPEG because I shoot live events where the lighting is usually crap and I need to be ready at any moment for something interesting to happen. Shooting RAW would be nice, but it's too slow and takes up way too much space.
    So here's what I do, (hopefully edited to make the reading non-tedious)
    1. import all photos from source (usually memory cards)
    The photos go onto a drive reserved for photos. I shoot events, so everything is organised by date-event. On import I set filenames (date - event name - time stamp.jpg) and keywords.
    2. cull
    They get marked "1" (irretrievably useless, to be deleted), "2" (keep just in case), "3" (decent shots), "4" (photos that I want to show to other people), or "5" (this is one of my best this year). Delete the "1"s.
    3. select 20-30 "picks"
    I create a folder within the event folder called "Picks" for these photos. Then I duplicate that folder as "Originals"
    In doing it this way I can protect myself if I need/want to change my cataloguing application. I'm on my fourth one at this point (Lightroom) and while I am generally finding that LR is meeting my needs more than anything else I've tried, I've been burned in the past (how many of us have regretted letting Aperture put all our photos into an incomprehensible database/folder hell?)
    The Picks/Originals structure makes sure that the originals never get touched (because it doesn't show up in LR) and I can quickly find the Picks.
    I -was- writing the metadata to the files so the ratings, keywords, and IPTC info was where it should be - attached to the photo. I've stopped doing this for the time being until this whole Finder-crashing debacle is corrected.
    So why am I not using LR's built-in "Edit - Copy"/stacking function, which more or less duplicates my process?
    Well part of it was my own dumbness - Aperture made the edited copy/variation a Photoshop file, which ballooned my collections with staggering speed. I'm not sure if LR used to do the same thing, or I just made the assumption. My mistake.
    The other part of the problem is just the simplicity of having an Originals folder that is effectively ignored. Using LR will mix the "-edited" files in with originals, which is not as efficient or elegant a solution, but I'll give it a whirl.
    Thanks for your suggestion. If giving up my Picks/Original process solves the rotation problem then perhaps it's a fair trade.
    Still, it seems that the right hand is somehow oblivious to what the left hand is doing in the whole PS/LR thing, and that it ought not work that way.

  • Using Calendar Picker (Dashboard Prompt) variable in report filter

    Hi All,
    I have checked the forum and it doesn't look like this was answered. I have a presentation variable "CalendarDate" for a dashboard prompt that uses the calendar picker. Has anyone found an easy way to use the presentation variable in a report filter?
    I want to grab the presentation variable then use this as the criterion for the report filter.
    I have tried the below for the report filter but I receive an error. Do I have to cast this value to a Date?
    Date.Date <= @{CalendarDate}
    Or, do I have to get the variable value and convert it to the date 'YYYY-MM-DD' format to use?
    Any thoughts would be appreciated.
    Thanks!
    John

    So here is what I came up with. Probably not the most efficient or elegant, but it works. I am now able to use the calendar picker variable as a date to perform date calculations.
    Basically there are three scenarios for the variable value: D/M/YYYY; DD/M/YYYY or D/MM/YYYY; and DD/MM/YYYY. I use the CHAR_LENGTH function to find the length. When the char length is equal to 9 there are two scenarios. If none of the scenarios work, pass CURRENT_DATE.
    code below --------
    CASE CHAR_LENGTH('@{CalendarDate}')
    WHEN 8 THEN CAST('date' || ' ' || char(39) || SUBSTRING('@{CalendarDate}' FROM 5 FOR 8) || '-' || CONCAT('0', SUBSTRING('@{CalendarDate}' FROM 1 FOR 1)) || '-' || CONCAT('0', SUBSTRING('@{CalendarDate}' FROM 3 FOR 1)) || char(39) AS DATE)
    WHEN 9 THEN
         CASE WHEN SUBSTRING('@{CalendarDate}' FROM 2 FOR 1) = '/' THEN
         CAST('date' || ' ' || char(39) || SUBSTRING('@{CalendarDate}' FROM 6 FOR 9) || '-' || CONCAT('0', SUBSTRING('@{CalendarDate}' FROM 1 FOR 1)) || '-' || SUBSTRING('@{CalendarDate}' FROM 3 FOR 2) || char(39) AS DATE)
         ELSE
         CAST('date' || ' ' || char(39) || SUBSTRING('@{CalendarDate}' FROM 6 FOR 9) || '-' || SUBSTRING('@{CalendarDate}' FROM 1 FOR 2) || '-' || CONCAT('0', SUBSTRING('@{CalendarDate}' FROM 4 FOR 1)) || char(39) AS DATE)
         END
    WHEN 10 THEN
    CAST('date' || ' ' || char(39) || SUBSTRING('@{CalendarDate}' FROM 7 FOR 10) || '-' || SUBSTRING('@{CalendarDate}' FROM 1 FOR 2) || '-' || SUBSTRING('@{CalendarDate}' FROM 4 FOR 2) || char(39) AS DATE)
    ELSE CURRENT_DATE
    END

  • Programmatically create array from common cluster items inside array of clusters

    I have seen many questions and responses on dealing with arrays of clusters, but none that discuss quite what I am looking for. I am trying to programmatically create an array from common cluster items inside an array of clusters. I have a working solution but am looking for a cleaner approach. I have an array of clusters representing channels of data. Each cluster contains a mixture of control data types, i.e. names, types, range, values, units, etc. The entire cluster is a typedef made up of other typedefs (such as the type, range and units) and native controls like numeric and boolean. One array is a “block” or module. One cluster is a channel of data. I wrote a small VI to extract all the data with the same units and “pipe” them into another array so that I can process all the data from all the channels with the same units together. It consists of a loop to iterate through the array, in which there is an unbundle by name and a case structure with a case for each unit. Within a specific case, there is a build array for that unit, and all the other non-relevant shift registers pass through. As you can see from the attached snapshots, the effort to add an additional unit grows, as each non-relevant case must be wired through. It is important to note that there is no default case. My question: is there a cleaner, more efficient and elegant way to do this?
    Thanks in advance!
    Solved!
    Go to Solution.
    Attachments:
    NI_Chan units to array_1.png ‏35 KB
    NI_Chan units to array_2.png ‏50 KB

    nathand wrote:
    Your comments made me curious, so I put together a quick test. Maybe there's an error in the code (below as a snippet, and attached as a VI) or maybe it's been fixed in LabVIEW 2013, but I'm consistently getting faster times from the IPE (2-3 ms versus 5-6ms for unbundle/index). See if you get the same results. For fun I flipped the order of the test and got the same results (this is why the snippet and the VI execute the tests in opposite order).
    This seems like a poster child for using the IPES!  We can look at the index array + replace subset and recognize that it is in place, but the compiler is not so clever (yet!).  The bundle/unbundle is a well-known "magic pattern" so it should be roughly equivalent to the IPES, with a tiny penalty due to overhead.
    Replace only the array operation with an IPES and leave the bundle/unbundle alone and I wager the times will be roughly the same as using the nested IPES.  Maybe even a slight lean toward the magic pattern now if I recall correctly.
    If you instantly recognize all combinations which the compiler will optimize and not optimize, or you want to exhaustively benchmark all of your code then pick and choose between the two to avoid the slight overhead.  Otherwise I think the IPES looks better, at best works MUCH better, and at worst works ever-so-slightly worse.  And as a not-so-gentle reminder to all:  if you really care about performance at this level of detail: TURN OFF DEBUGGING!

  • Modify the number of dimensions of a geometry

    Is it possible to modify, with a function, the number of dimensions of a geometry?
    I'm able to change the gtype but not the sdo_elem_info and sdo_ordinates.
    ....thanks for your help

    Here is a function doing what you ask for.
    It is not very efficient or elegant, but I think it works for most geometries:
    Parameter Numd is the number of dimensions you want for the returned geometry.
    good luck
    Hans
    FUNCTION Convert_to( Numd PLS_INTEGER, Geo IN Mdsys.Sdo_geometry )
    RETURN Mdsys.Sdo_geometry
    -- converts from 4D to 2D or 3D ...
    -- or converts from 3D to 2D ...
    IS
    Resultat Mdsys.Sdo_geometry;
    V_numdims PLS_INTEGER;
    V_numords PLS_INTEGER;
    V_numelem PLS_INTEGER;
    V_typeelem PLS_INTEGER;
    V_startpos PLS_INTEGER;
    V_startposnext PLS_INTEGER;
    V_endpos PLS_INTEGER;
    I PLS_INTEGER;
    P PLS_INTEGER;
    V_curpos PLS_INTEGER;
    V_is_multipolygon BOOLEAN;
    V_gtype PLS_INTEGER;
    V_count PLS_INTEGER;
    V_count_unknown PLS_INTEGER;
    V_ok PLS_INTEGER;
    BEGIN
    Resultat := NULL;
    IF ( Numd < 2 )
    OR ( Numd > 3 ) THEN
    RETURN Resultat;
    END IF;
    IF Geo IS NULL THEN -- no geometry to work with .
    RETURN Resultat;
    END IF;
    V_gtype := Geo.Sdo_gtype;
    IF MOD( V_gtype, 1000 ) = 0 THEN -- Spatial ignores this geometry .
    Resultat := Geo;
    RETURN Resultat;
    END IF;
    IF V_gtype > 1999
    AND V_gtype < 3000 THEN-- This is a 2D geometry - just exit FROM here ..
    Resultat := Geo;
    RETURN Resultat;
    END IF;
    IF V_gtype > 2999
    AND V_gtype < 4000
    AND Numd = 3 THEN
    -- This is a 3D geometry - AND that's want we want - just exit FROM here ..
    Resultat := Geo;
    RETURN Resultat;
    END IF;
    IF NOT Geo.Sdo_point IS NULL THEN -- single point ...
    Resultat := Geo;
    Resultat.Sdo_gtype := 2000 + MOD( V_gtype, 1000 );
    Resultat.Sdo_point.Z := NULL;
    RETURN Resultat;
    END IF;
    IF Geo.Sdo_elem_info IS NULL THEN
    RETURN Resultat;
    END IF;
    IF Geo.Sdo_ordinates IS NULL THEN
    RETURN Resultat;
    END IF;
    V_numdims := TRUNC( Geo.Sdo_gtype / 1000 );
    -- NUMBER of dimensions 2D, 3D OR 4D .
    V_numelem := Geo.Sdo_elem_info.COUNT / 3;
    -- NUMBER of elements in this geometry .
    IF ( MOD( V_gtype, 1000 ) = 1 )
    -- Point type - check for single point in sdo_ordinates .....
    THEN
    V_ok := 1;
    V_count := Geo.Sdo_ordinates.COUNT;
    V_count_unknown := 0;
    IF ( V_numelem = 2 ) THEN
    IF ( Geo.Sdo_elem_info( 2 ) = 0 )
    -- first element is : Unsupported element type. Ignored by the Spatial functions AND PROCEDUREs.
    THEN
    V_count_unknown := Geo.Sdo_elem_info( 4 ) - 1;
    -- num of ordinates in the unknown part ...
    ELSE
    V_ok := 0;
    END IF;
    END IF;
    IF (
    V_ok = 1
    AND V_numelem < 3
    AND V_count - V_count_unknown > 1
    AND (
    ( V_count - V_count_unknown < 5
    AND V_gtype > 3999 )
    -- 4d with one point in sdo_ordinates ..
    OR ( V_count - V_count_unknown < 4
    AND V_gtype > 2999 )))
    -- 3d with one point in sdo_ordinates ..
    THEN
    IF Numd = 2 THEN
    Resultat :=
    Mdsys.Sdo_geometry( 2001, -- GType 2D .
    Geo.Sdo_srid,
    Mdsys.Sdo_point_type( Geo.Sdo_ordinates( V_count_unknown + 1 ),
    -- X ..
    Geo.Sdo_ordinates( V_count_unknown + 2 ), -- Y ..
    NULL ), -- sdo_point_type(x,y,NULL) .
    NULL,
    NULL );
    ELSE
    Resultat :=
    Mdsys.Sdo_geometry( 3001, -- GType 3D .
    Geo.Sdo_srid,
    Mdsys.Sdo_point_type( Geo.Sdo_ordinates( V_count_unknown + 1 ),
    -- X ..
    Geo.Sdo_ordinates( V_count_unknown + 2 ), -- Y ..
    Geo.Sdo_ordinates( V_count_unknown + 3 )), -- Z ..
    -- sdo_point_type(x,y,NULL) .
    NULL,
    NULL );
    END IF;
    RETURN Resultat;
    END IF;
    END IF; -- END of Point type ....
    Resultat :=
    Mdsys.Sdo_geometry( Numd * 1000 + MOD( V_gtype, 1000 ),
    -- GType 2D eller 3D .
    Geo.Sdo_srid,
    NULL, -- sdo_point_type .
    Mdsys.Sdo_elem_info_array( ),
    Mdsys.Sdo_ordinate_array( ));
    IF V_numelem = 0 THEN
    RETURN Resultat;
    END IF;
    IF MOD( Geo.Sdo_gtype, 1000 ) < 7 THEN
    V_is_multipolygon := FALSE;
    ELSE
    V_is_multipolygon := TRUE;
    END IF;
    I := 1;
    V_curpos := 1;
    LOOP -- for all the elements ...
    V_startpos := Geo.Sdo_elem_info( ( I - 1 ) * 3 + 1 );
    -- StartPosition for the ordinates of this element
    V_typeelem := Geo.Sdo_elem_info( ( I - 1 ) * 3 + 2 );
    -- the pos after StartPosition, ie. eType ...
    IF I = V_numelem THEN -- is this the last element ?
    V_startposnext := Geo.Sdo_ordinates.COUNT( ) + 1;
    ELSE
    V_startposnext := Geo.Sdo_elem_info( ( I * 3 ) + 1 );
    END IF;
    Resultat.Sdo_elem_info.EXTEND( 3 );
    Resultat.Sdo_elem_info( ( I - 1 ) * 3 + 1 ) := V_curpos;
    -- StartPosition for the ordinates of this element
    Resultat.Sdo_elem_info( ( I - 1 ) * 3 + 2 ) :=
    Geo.Sdo_elem_info( ( I - 1 ) * 3 + 2 );
    -- eType ...;
    Resultat.Sdo_elem_info( ( I - 1 ) * 3 + 3 ) :=
    Geo.Sdo_elem_info( ( I - 1 ) * 3 + 3 );
    -- Interpretation ...;
    IF V_typeelem > 0 THEN -- ignore element
    IF (NOT V_is_multipolygon )
    OR ( V_is_multipolygon
    AND I > 1 ) THEN
    IF ( V_startposnext > V_startpos + 1 ) THEN
    P := V_startpos;
    -- p is now the position of the first ordinate for this element
    LOOP -- For all ordinates ...
    IF Numd = 3 THEN
    Resultat.Sdo_ordinates.EXTEND( 3 );
    ELSE
    Resultat.Sdo_ordinates.EXTEND( 2 );
    END IF;
    Resultat.Sdo_ordinates( V_curpos ) := Geo.Sdo_ordinates( P );
    -- X ...
    Resultat.Sdo_ordinates( V_curpos + 1 ) :=
    Geo.Sdo_ordinates( P + 1 );
    -- Y ...
    IF Numd = 3 THEN
    Resultat.Sdo_ordinates( V_curpos + 2 ) :=
    Geo.Sdo_ordinates( P + 2 );
    -- Z ...
    END IF;
    P := P + V_numdims; -- jump to the next x ordinate
    V_curpos := V_curpos + Numd; -- var +2 tidligere
    EXIT WHEN P = V_startposnext;
    END LOOP;
    END IF;
    -- ELSE : MULTIPOLYGON does not have its own ordinates
    END IF;
    ELSE
    -- Element has unknown geometry , must copy all ordinates
    Resultat.Sdo_ordinates.EXTEND( V_startposnext - V_startpos );
    P := V_startpos;
    -- p is now the position of the first ordinate for this element
    LOOP -- For all ordinates ...
    Resultat.Sdo_ordinates( V_curpos ) := Geo.Sdo_ordinates( P );
    P := P + 1; -- next ordinate
    V_curpos := V_curpos + 1;
    EXIT WHEN P = V_startposnext;
    END LOOP;
    END IF;
    I := I + 1;
    EXIT WHEN I > V_numelem;
    END LOOP;
    RETURN Resultat;
    EXCEPTION
    WHEN OTHERS THEN
    Resultat := NULL;
    RETURN Resultat;
    END Convert_to;

  • Docked iPhone 3GS automatically starts or resumes playing/HOW TO STOP

    Last year, I brought my iPhone 3GS to an Apple store and presented the problem of docking the iPhone and having the iPod on the iPhone resume or play from the beginning whatever song had last been played on the iPod. I needed to stop that behavior. One of the tech people showed me how to do that (something about hitting HOME button and power on/off button of phone simultaneously or one after the other or something). I've forgotten how to do this! There must be somebody on this forum who knows this secret formula.
    It's highly annoying to dock the phone in my car and then have the iPod on the iPhone just automatically start playing whatever was last being played. I do not want it to automatically play anything when docked. Right now, the only way I can sort of do this is to switch to a podcast and force the podcast all the way to the end of the episode and then hit DONE. The way that tech showed me was much more efficient and elegant.
    Anybody know how to do this?

    The checkmark is used to include or exclude the checked item in the specified action. For example, only songs with a checkmark are imported when you click Import. Or only checked songs are considered by Genius, or synced to a device. However, checking songs does not exclude them from shuffling.
    You might find this slightly outdated document useful: http://docs.info.apple.com/article.html?path=iTunesMac/8.0/en/symbols.html

  • Applying a filter to "filler"

    Done the search thing, and I haven't found any info...
    For those familiar with Avid: you can apply a filter to an empty track, thereby affecting everything below it. You can also create a transition (blur, flare, etc.) by placing, say, a 10-frame filler clip with the appropriate filter over an edit point. The filler/efx clip is keyframed to go from 0 to 100% and back to 0 for the duration of the effect. Make sense? If not I'll try to clarify with an ascii picture:
    V2_______(empty)___|efx|____(empty)____
    V1___|__(CLIP 1)____|_____ (CLIP 2)__|
    ____(edit)__________(edit)_____________(edit)
    My question: how do I do this in FCP? I can find no way to apply a filter to an empty video track. Am I missing something obvious?
    Latest version of FCP Suite, MacBook Pro, etc.
    Thanks,
    Jeff M.
      Mac OS X (10.4.9)  
    iMac indigo 500 DV, etc.   Mac OS X (10.3.9)  

    Quite so. I stopped trying to make FCP an Avid the very first day! There are things I like and things I don't like about each platform.
    So the solution here would be to do it "the hard way" by adding an edit on each side of the original cut and applying a filter ramping "in" on the A side and "out" on the B side. Perfectly acceptable and functional, just not as efficient or elegant.
    Since I use this filter on a filler frequently (daily if I'm doing promos or presentation pieces), I wonder why the feature has never been added? I know I'm not alone in using this shortcut. Guess I'll haunt the "features wish list" forums on 2-pop and the like.
    Thanks.
    Jeff M.
    MacBook Pro   Mac OS X (10.4.9)  

  • Programmatically Create/Hide Plots

    All,
    I would like to programmatically set the visibility of plots and tab control pages. The solution I came up with was to place visibility property nodes in a case structure and based on the value of the control, that many plots would be made visible. However, placing all of those property nodes is tedious and I don't think this solution is scalable -- ultimately, I will have many more plots than what I included in the example VI.
    What is the most efficient and elegant method for controlling the visibility of a large number of plots?
    Regards,
    Andrew
    Solved!
    Go to Solution.
    Attachments:
    ProgrammaticallyDisplayPlots.vi ‏69 KB

    Create an array of references for all your graphs.  Now you can programmatically feed them into a For Loop with autoindexing into a Visible Property node. 
    Attachments:
    ProgrammaticallyDisplayPlots[1]_BD.png ‏67 KB

  • Selecting multiple rows a table according to rows passed with a table valued parameter

    I've got a table, which looks like this:
    CREATE TABLE MyTable (
        MyChars CHAR(3) NOT NULL,
        MyId INT NOT NULL,
        CONSTRAINT PK__MyTable_MyChars_MyId PRIMARY KEY (MyChars, MyId),
        CONSTRAINT FK__MyOtherTable_Id_MyTable_MyId FOREIGN KEY (MyId) REFERENCES MyOtherTable (Id)
    );
    Records look like this, for instance:
    Chars | Id
    'AAA' | 1
    'BBB' | 1
    'CCC' | 1
    'AAA' | 2
    'BBB' | 2
    'CCC' | 2
    'DDD' | 2
    'EEE' | 3
    'FFF' | 3
    'AAA' | 4
    'DDD' | 4
    'FFF' | 4
    Now I have an SP, which takes a table-valued parameter like:
    CREATE TYPE dbo.MyTVP AS TABLE ( MyChars CHAR(3) )
    This SP should return a set of Ids, which match all the rows of the parameter.
    I.e.:
    if the TVP contains 'AAA', 'BBB' & 'CCC', I get 1 & 2 as the result
    if the TVP contains 'AAA' & 'FFF', I get 4 as the result
    if the TVP contains 'BBB' & 'EEE', I get an empty result
    What my SP currently does is build a query with string concatenation, which is then executed with the EXEC statement. If we take my first example, the built query would look like this:
    SELECT DISTINCT t0.MyId
    FROM MyTable t0
    INNER JOIN MyTable t1 ON t0.MyId = t1.MyId
    INNER JOIN MyTable t2 ON t1.MyId = t2.MyId
    WHERE t0.MyChars = 'AAA' AND t1.MyChars = 'BBB' AND t2.MyChars = 'CCC'
    It works, but I'm not very fond of building the query; maintaining such things is always a pain. And it also might not be the most efficient and elegant way to do this.
    Since I can't think of any other way of doing this, I wanted to ask if any of you have an idea whether there is a better way to accomplish this.

    Let me give you a "cut and paste" I use in the SQL Server groups:
    1) The dangerous, slow kludge is to use dynamic SQL and admit that any random future user is a better programmer than you are. It is used by Newbies who do not understand SQL or even what a compiled language is. A string is a string; it is a scalar value like any other parameter; it is not code. Again, this is not just an SQL problem; this is a basic misunderstanding of programming principles.
    2) Passing a list of parameters to a stored procedure can be done by putting them into a string with a separator. I like to use the traditional comma. Let's assume that you have a whole table full of such parameter lists:
    CREATE TABLE InputStrings
    (keycol CHAR(10) NOT NULL PRIMARY KEY,
     input_string VARCHAR(255) NOT NULL);
    INSERT INTO InputStrings 
    VALUES ('first', '12,34,567,896'), 
     ('second', '312,534,997,896'),
     etc.
    This will be the table that gets the outputs, in the form of the original key column and one parameter per row.
    It makes life easier if the lists in the input strings start and end with a comma. You will need a table of sequential numbers -- a standard SQL programming trick. Now, the query:
    CREATE VIEW ParmList (keycol, place, parm)
    AS
    SELECT keycol, 
           COUNT(S2.seq), -- reverse order
           CAST (SUBSTRING (I1.input_string
                            FROM S1.seq 
                             FOR MIN(S2.seq) - S1.seq -1) 
             AS INTEGER)
      FROM InputStrings AS I1, Series AS S1, Series AS S2 
     WHERE SUBSTRING (',' + I1.input_string + ',', S1.seq, 1) = ','
       AND SUBSTRING (',' + I1.input_string + ',', S2.seq, 1) = ','
       AND S1.seq < S2.seq
     GROUP BY I1.keycol, I1.input_string, S1.seq;
    The S1 and S2 copies of Series are used to locate bracketing pairs of commas, and the entire set of substrings located between them is extracted and cast as integers in one non-procedural step. The trick is to be sure that the right hand comma of the bracketing
    pair is the closest one to the first comma. The relative position of each element in the list is given by the value of "place", but it does a count down so you can plan horizontal placement in columns. 
    This might be faster now:
    WITH Commas(keycol, comma_seq, comma_place)
    AS
    (SELECT I1.keycol, S1.seq,
    ROW_NUMBER() OVER (PARTITION BY I1.keycol ORDER BY S1.seq)
    FROM InputStrings AS I1, Series AS S1
    WHERE SUBSTRING (',' || I1.input_string || ',' 
    FROM S1.seq 
    FOR 1) = ',' 
    AND S1.seq <= CHARLENGTH (I1.input_string))
    SELECT SUBSTRING(',' || I1.input_string || ','
    FROM C1.comma_place +1
    FOR C2.comma_place - C1.comma_place - 1)
    FROM Commas AS C1, Commas AS C2
    WHERE C2.comma_seq = C1.comma_seq + 1 
    AND C1.keycol = C2.keycol;
    The idea is to get all the positions of the commas in the CTE and then use (n, n+1) pairs of positions to locate substrings. The hope is that the ROW_NUMBER() is faster than the GROUP BY in the first attempt. Since it is materialized before the body of the query (in theory), there are opportunities for parallelism, indexing and other things to speed up the works.
    Hey, I can write kludges with the best of them, but I don't. You need to at the very least write a routine to clean out blanks, handle double commas and non-numerics in the strings, take care of floating point and decimal notation, etc. Basically, you must write part of a compiler in SQL. Yeeeech! Or decide that you do not want to have data integrity, which is what most Newbies do in practice although they do not know it.
    A procedural loop is even worse. You have no error checking, no ability to pass local variables or expressions, etc. 
    CREATE PROCEDURE HomemadeParser(@input_string VARCHAR(8000))
    AS
    BEGIN
    DECLARE @comma_position INTEGER;
    CREATE TABLE #Slices
    (slice_value INTEGER);
    SET @input_string = @input_string + ','; --add sentinel comma
    SET @comma_position = CHARINDEX(',', @input_string); 
    WHILE @comma_position > 1
      BEGIN
      INSERT INTO #Slices (slice_value)
      VALUES(CAST(LEFT(@input_string, (@comma_position - 1)) AS INTEGER)); 
      SET @input_string = RIGHT(@input_string, LEN(@input_string)-@comma_position)
      SET @comma_position = CHARINDEX(',', @input_string)
      END;
    END;
    Better answer:
    http://www.simple-talk.com/sql/learn-sql-server/values()-and-long-parameter-lists/
    http://www.simple-talk.com/sql/learn-sql-server/values()-and-long-parameter-lists---part-ii/
    Do this with a long parameter list. You can pass up to 2000+ parameters in T-SQL, which is more than you probably will ever need. The compiler will do all that error checking that the query version and the procedural code simply do not have unless you write
    a full parser with the standard error codes. You can now pass local variables to your procedure; you can pass other data types and get automatic conversions, etc. In short, this is just good software engineering. 
    CREATE PROCEDURE LongList
    (@p1 INTEGER = NULL,
     @p2 INTEGER = NULL,
     @p3 INTEGER = NULL,
     @p4 INTEGER = NULL,
     @p5 INTEGER = NULL)
      x IN (SELECT parm
              FROM (VALUES (@p1), (@p2), (@p3), (@p4), (@p5)) AS X(parm)
            WHERE parm IS NOT NULL;
    You get all the advantages of the real compiler and can do all kinds of things with the values. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • JOptionPane Help Needed

    I basically have two problems with JOptionPane; I've searched for answers, yet I haven't found one applicable to my application. Here they are:
    1) How can I get a JPasswordField in a JOptionPane to be given focus when the JOptionPane is displayed?
    2) How can I get the default key mappings for JOptionPane not to apply, or at least make it so that when a user hits "Enter", it activates the button that currently has focus? I have a problem with highlighting "Cancel" and hitting "Enter".
    Here is my code being used for the JOptionPane:
    String pwd = "";
    int tries = 0;
    JPasswordField txt = new JPasswordField(10);
    JLabel lbl = new JLabel("Enter Password :");
    JLabel lbl1 = new JLabel("Invalid Password.  Please try again:");
    JPanel pan = new JPanel(new GridLayout(2,1));
    int retval;
    while (tries < 3 && !valid) {
         if (tries > 0) {
              pan.remove(lbl);
              pan.remove(txt);
              pan.add(lbl1);
              pan.add(txt);
              txt.setText("");
         } else {
              pan.add(lbl);
              pan.add(txt);
         }
         retval = JOptionPane.showOptionDialog(DBPirate.appWindow, pan,
              "Password Input",
              JOptionPane.OK_CANCEL_OPTION,
              JOptionPane.QUESTION_MESSAGE,
              null, null, null);
         if (retval == JOptionPane.OK_OPTION) {
              pwd = new String(txt.getPassword());
              setPassword(pwd);
              if (cryptoUtil.testPassword(pwd))
                   valid = true;
              else
                   tries += 1;
         } else {
              System.exit(0);
         }
    }
    Any help would be appreciated. Thanks, Jeremy

    Hello Jeremy,
    I am not too happy with my code, for I think there must be something more efficient and elegant. But after trying in vain with KeyListener and ActionMap this is presently the only way I could do it.
    // JOptionPane where the <ret>-key functions just as the space bar, meaning the
    // button having focus is fired.
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    public class ReturnSpace extends JFrame implements ActionListener
    { JButton bYes, bNo, bCancel;
      JDialog d;
      JOptionPane p;
      public ReturnSpace()
      { setSize(300,300);
        setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        Container cp = getContentPane();
        cp.setLayout(new FlowLayout());
        JButton b = new JButton("Show OptionPane");
        b.addActionListener(this);
        p = new JOptionPane("This is the question",
              JOptionPane.QUESTION_MESSAGE, JOptionPane.YES_NO_CANCEL_OPTION);
        d = p.createDialog(this, "JOptionPane with changing default button.");
        // Whenever a button gains focus, make it the dialog's default button,
        // so that <ret> fires the focused button just like the space bar.
        FocusListener fl = new FocusAdapter()
        { public void focusGained(FocusEvent e)
          { JButton focusButton = (JButton)(e.getSource());
            d.getRootPane().setDefaultButton(focusButton);
          }
        };
        int i = 1;
    //    if (plaf.equals("Motif")) i = 2;
        bYes = (JButton)((JPanel)p.getComponent(i)).getComponent(0);
        bYes.addFocusListener(fl);
        bNo = (JButton)((JPanel)p.getComponent(i)).getComponent(1);
        bNo.addFocusListener(fl);
        bCancel = (JButton)((JPanel)p.getComponent(i)).getComponent(2);
        bCancel.addFocusListener(fl);
        cp.add(b);
        setVisible(true);
        d.setLocationRelativeTo(this);
      }
      public void actionPerformed(ActionEvent e)
      { bYes.requestFocus();
        d.setVisible(true);
        Object value = p.getValue();
        if (value == null || !(value instanceof Integer))
        { System.out.println("Closed"); }
        else
        { int i = ((Integer)value).intValue();
          if (i == JOptionPane.NO_OPTION)
          { System.out.println("No"); }
          else if (i == JOptionPane.YES_OPTION)
          { System.out.println("Yes"); }
          else if (i == JOptionPane.CANCEL_OPTION)
          { System.out.println("Cancelled"); }
        }
      }
      public static void main(String argsv[])
      { new ReturnSpace();
      }
    }
    Greetings
    Joerg

  • JAXP DOM - I can get children fine, just not nested children!

    Hi,
    I have a problem with JAXP dom. I cannot get the nested children associated.
    Here is my code:
    try {
        DOMParser parser = new DOMParser();
        String url = "file:/myxml.xml";
        parser.parse(url);
        Document doc = parser.getDocument();
        //Get the filename
        NodeList FileNameNodeList = doc.getElementsByTagName("filename");
        //Get the duration
        NodeList DurationNodeList = doc.getElementsByTagName("duration");
        //Get the headline
        NodeList HeadLineNodeList = doc.getElementsByTagName("headline");
        //Get the description
        NodeList DescriptionNodeList = doc.getElementsByTagName("description");
        //Get the keywords
        NodeList KeyWordsNodeList = doc.getElementsByTagName("keywords");
        //Get the link order
        //Read all values for every "File Name"
        for (int i = 0; i < FileNameNodeList.getLength(); i++) {
            //Get the values
            Node FileNameNode = FileNameNodeList.item(i);
            Node DurationNode = DurationNodeList.item(i);
            Node HeadLineNode = HeadLineNodeList.item(i);
            Node DescriptionNode = DescriptionNodeList.item(i);
            Node KeyWordsNode = KeyWordsNodeList.item(i);
            //File name
            if (FileNameNode.getFirstChild() != null) {
                Message += "<br><b>Filename:</b> " + FileNameNode.getFirstChild().getNodeValue();
            }
            //Duration
            if (DurationNode.getFirstChild() != null) {
                Message += "<b>Duration:</b> " + DurationNode.getFirstChild().getNodeValue();
            }
            //Head line
            if (HeadLineNode.getFirstChild() != null) {
                Message += "<b>Head Line:</b> " + HeadLineNode.getFirstChild().getNodeValue();
            }
            //Description
            if (DescriptionNode.getFirstChild() != null) {
                Message += "<b>Description:</b> " + DescriptionNode.getFirstChild().getNodeValue();
            }
            //Key words
            if (KeyWordsNode.getFirstChild() != null) {
                Message += "<b>Key Words:</b> " + KeyWordsNode.getFirstChild().getNodeValue();
            }
        }
        //Build the string
        String MessageTemp = Message;
        Message = "<font size=-1>" + MessageTemp + "</font>";
        return Message;
    } catch (Exception ex) {
        System.out.println(ex);
        return Message + ex.toString();
    }
    Here is part of my XML File:
    <article storyorder="1" pubdate="11/6/02 3:28:27 AM" source="MSNBC Video" topnews="1">
      <filename>vh</filename>
      <duration>00:01:58</duration>
      <headline>MSNBC’s Video Headlines</headline>
      <description>The latest news from MSNBC.com</description>
      <keywords>headlines, news headlines</keywords>
      <photographer />
      <aspect>4:3</aspect>
      <linkinfo>
        <link order="1">
          <linkurl>http://www.msnbc.com/news/828323.asp</linkurl>
          <linktext>Republicans win control of Congress</linktext>
        </link>
        <link order="2">
          <linkurl>http://www.msnbc.com/news/828325.asp</linkurl>
          <linktext>Republicans widen House majority</linktext>
        </link>
        <link order="3">
          <linkurl>http://www.msnbc.com/news/828326.asp</linkurl>
          <linktext>Democrats retake some statehouses</linktext>
        </link>
      </linkinfo>
      <categories>
        <category id="News">
          <topics>
            <topic>International</topic>
            <topic>US</topic>
          </topics>
        </category>
      </categories>
    </article>
    I can get filename, duration, etc. But I cannot get the multiple link URLs associated with the filename. How can I get that?
    Thanks

    Call getDocumentElement() on the document to obtain the root node. Then call getChildNodes() on the root node to get its children; each individual child can be accessed by looping, as follows:
    // Get the Document object.
    Node node = document.getDocumentElement();
    NodeList childNodes = node.getChildNodes();
    for (int i = 0; i < childNodes.getLength(); i++) {
        Node childNode = childNodes.item(i);
        // Perform some action on the node, or get its children to burrow
        // further down the DOM tree.
    }
    If you know how many levels you need to go down then you'll need that many nested for loops, but recursion is a far more elegant solution.
