DOMParser.parse(URL) hangs

Any time I call DOMParser.parse(URL) where URL is of type "http://", the parse call hangs (as near as I can tell) indefinitely. Are URLs of this type not supported? Is there a workaround for this problem?

No. Within the same class, the following DOES work:
DOMParser dp = new DOMParser();
dp.setErrorStream(new PrintWriter(errs));
// Set Schema Object for Validation
dp.setXMLSchema((XMLSchema)((new XSDBuilder()).build(schema.location)));
Note that schema.location is a String like "http://www.wherever.com/file.xsd" which points to the web server that is hanging on DOMParser.parse(URL);
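A common reason parse(URL) appears to hang is that the underlying HTTP connection has no timeout, so a stalled server blocks the call forever. One workaround, sketched here with plain JAXP rather than Oracle's classes (the resulting stream can equally be handed to DOMParser.parse(InputStream)), is to open the connection yourself with explicit timeouts (Java 5+) and parse the stream:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class TimedFetch {
    // Open the URL with explicit connect/read timeouts so a stalled
    // server raises an IOException instead of hanging forever.
    static InputStream openWithTimeout(String urlStr, int millis) throws Exception {
        URLConnection conn = new URL(urlStr).openConnection();
        conn.setConnectTimeout(millis);
        conn.setReadTimeout(millis);
        return conn.getInputStream();
    }

    // Parse any InputStream -- the one above, or a local one.
    static Document parseStream(InputStream in) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(in);
    }

    public static void main(String[] args) throws Exception {
        // Demonstrated on a local stream; swap in openWithTimeout("http://...", 5000).
        byte[] xml = "<EMPLIST><EMP><ENAME>MARTIN</ENAME></EMP></EMPLIST>".getBytes("UTF-8");
        Document doc = parseStream(new ByteArrayInputStream(xml));
        System.out.println(doc.getDocumentElement().getTagName());
    }
}
```

With this pattern the fetch fails fast with an exception you can catch, instead of blocking inside the parser.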

Similar Messages

  • AxQTOControlLib.AxQTControl.URL Hangs

    I have an application that opens m4a files and gets tag information from those files (artist, album, etc.). I'm finding that when I start to process several of these files, my application will hang setting the AxQTOControlLib.AxQTControl.URL parameter (either to a file name or ""). It doesn't happen on the same file, but it will always happen, usually by the 4th or 5th file.
    Any ideas?


  • Parse method hangs

    I am trying to parse a file using DocumentBuilder. The parse() method hangs for no reason: there are no errors or exceptions thrown. I even used a StringReader, but in vain. Please see the code below.
    The file is read correctly and I could even print the XML string out.
    System.out.println("Parsing XML file from performParsing(): " + s);
    Document doc = null;
    DocumentBuilderFactory factory = null;
    DocumentBuilder builder = null;
    try {
         factory = DocumentBuilderFactory.newInstance();
         System.out.println("Parsing XML factory: " + factory);
         builder = factory.newDocumentBuilder();
         System.out.println("Parsing XML builder: " + builder);
         URL url = new URL(s);
         URLConnection urlConnection = url.openConnection();
         InputStream in = urlConnection.getInputStream();
         String XMLStr = getStringReaderFromFile(in); // private method that returns a String
         StringReader stReader = new StringReader(XMLStr);
         InputSource ins = new InputSource(stReader);
         doc = builder.parse(ins); // <--- IT HANGS HERE

    Not an expert on this, but presumably you have the properties set up correctly.
    From the javadoc:
    DocumentBuilderFactory uses the system property javax.xml.parsers.DocumentBuilderFactory to find the class to load. So you can change the parser by calling:
    System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
        "com.foo.myFactory");

  • Has anybody used DocumentBuilder.parse(URL)

    Hi friends
    Has anybody used DocumentBuilder.parse(URL) to make a Document object?
    I have the following piece of code
    <code>
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = factory.newDocumentBuilder();
    System.out.println("| Start | Fetch xml"); // ------------ 1
    Document document = builder.parse("http://some url that gives back XML");
    System.out.println("| Stop | Fetch xml"); // ------------- 2
    </code>
    Now the problem is: once in a while the code will hang at point 1 and never reach point 2. Exception handling has been done, but there are no exceptions being logged. The code simply hangs.
    Please let me know if you have also faced the same or a similar problem.
    Thanking in anticipation
    Partha.

    Does the same happen with a file URL instead of an http URL?
    Use
    Document document = builder.parse("file://c:/xmlFile");
    instead of
    Document document = builder.parse("http://some url that gives back XML");

  • ArrayIndexOutOfBoundsException during DOMParser.parse(...) operation

    Please provide assistance with clarifying any limitations of the DOMParser.parse() operations. Please let me know if there is an alternative approach to what I am doing below. The details of my situation follow:
    I am using Visual Cafe 3 with the Oracle XML parser 2.0.2.6 to parse an XML string using the DOMParser parse(Reader), parse(InputSource), and parse(InputStream) operations in order to retrieve a DOMDocument object.
    I have taken several approaches all of which result in the following exception:
    "java.lang.ArrayIndexOutOfBoundsException: 16388"
    This error appears to be raised by XMLReader:
    oracle\xml\parser\v2\XMLReader(1411)... The java source is unavailable to debug the code.
    I have also changed the XML string to a simple innocuous string. But I still get the same message. The literal string value is as follows: "<?xml version="1.0"?><EMPLIST><EMP><ENAME>MARTIN</ENAME></EMP><EMP><ENAME>SCOTT</ENAME></EMP></EMPLIST>"
    The code fragments I have used to perform the parse() operations are given below:
    //Reader approach
    StringReader xmlReader = new StringReader( inXMLString );
    parser.parse( xmlReader );
    // InputSource approach
    InputSource source = new InputSource( xmlReader );
    parser.parse( source );
    // InputStream approach
    ByteArrayInputStream byteStream = new ByteArrayInputStream( inXMLString.getBytes() );
    parser.parse( byteStream );
    Any assistance would be greatly appreciated.

  • DOMParser parse & namespaces

    I've noticed that when I parse a stream using (Java) DOMParser.parse, where that stream contains namespace prefixes, the parser returns with errors even though I've set the validation mode to false. Is this intended behavior, and if so, how do I prevent the parser from trying to resolve namespaces and have it just check well-formedness?

    You can't turn off the parser's namespace support. If you have invalid namespace prefixes, then an XML 1.0 parser with XML Namespaces support (which ours is) should raise an error.
    Resolving namespace prefixes is not part of validation; it's part of well-formedness checking.
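    For comparison, the JAXP API (unlike the Oracle parser discussed above) is not namespace-aware unless you ask for it, so an undeclared prefix passes a plain well-formedness check. A small sketch:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class PrefixDemo {
    // With namespace awareness off (the JAXP default), an undeclared
    // prefix such as "x:" is accepted as an ordinary element name.
    static Document parseIgnoringPrefixes(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(false); // the default, stated explicitly
        return f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        Document doc = parseIgnoringPrefixes("<x:root/>");
        System.out.println(doc.getDocumentElement().getNodeName());
    }
}
```

    Whether this is appropriate depends on the document: with namespace awareness off, the prefix is just part of the name and no resolution is attempted.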

  • DocumentBuilder timeout - parse(url)

    Is there any way to configure a timeout when parsing from a URL with DocumentBuilder?
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    final DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
    final DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
    Document document = docBuilder.parse(url);
    I can't seem to find a reference anywhere.

    tony_murphy wrote:
    Is there anyway to configure a timeout on parsing from a url with DocumentBuilder?
    I can't seem to find a reference anywhere
    I think that is not the job of DocumentBuilder; DocumentBuilder is capable of parsing a number of sources to build a Document.
    Try another way, parsing an InputStream. Check it out below:
    import java.io.*;
    import java.net.*;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    try {
         URL url = new URL(strURL);     // Getting URL to invoke
         HttpURLConnection urlCon = (HttpURLConnection) url.openConnection();     // Opening connection with the URL specified
         urlCon.setReadTimeout(1000);     // Set read timeout in milliseconds; here, 1 second
         urlCon.connect();     // Connecting
         InputStream iStream = urlCon.getInputStream();     // Opening InputStream to read
         DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();     // Building DocumentBuilderFactory
         DocumentBuilder builder = factory.newDocumentBuilder();     // Building DocumentBuilder
         Document doc = builder.parse(iStream);     // Parsing InputStream
    } catch (Exception ex) {
         // TODO: Exception handling
    }
    Note: Perform proper exception handling.
    Thanks,
    Tejas

  • Error message when using DOMParser.parse(file)

    When I try to use DOMParser.parse(file) to parse an invalid XML file against an XML schema, with the schema-validation feature set on, an error message is printed saying the XML file is invalid because it violates some constraints set in the schema. But I cannot find out how the error message is printed. It is not caught as an exception. May I know how I can detect the error in the code instead of viewing it as output?

    The error message I got is like the following:
    [Error] abc3.xml:2:310: cvc-pattern-valid: Value '' is not facet-valid with respect to pattern '(\(\d\d\d\)-)?[\d]{8}' for type 'phoneNoType'.
    [Error] abc3.xml:2:310: cvc-type.3.1.3: The value '' of element 'mobile' is not valid.
    [Error] abc3.xml:2:318: cvc-pattern-valid: Value '' is not facet-valid with respect to pattern '[^@]+@[^\.]+\..+' for type 'emailAddressType'.
    [Error] abc3.xml:2:318: cvc-type.3.1.3: The value '' of element 'email' is not valid.
    May I know how I can catch these errors?
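    One way to catch these instead of letting them print: register a SAX ErrorHandler on the builder before parsing. The sketch below uses JAXP naming (Oracle's DOMParser also accepts an ErrorHandler via setErrorHandler, though the exact validation setup varies by parser); cvc-* validation messages arrive through error() and can be collected in code:

```java
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

public class CollectingHandler implements ErrorHandler {
    public final List<String> errors = new ArrayList<String>();

    // Validation problems arrive here instead of being printed.
    public void warning(SAXParseException e) { errors.add("warning: " + e.getMessage()); }
    public void error(SAXParseException e)   { errors.add("error: " + e.getMessage()); }
    public void fatalError(SAXParseException e) throws SAXException { throw e; }

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setValidating(true); // schema/DTD violations then flow to the handler
        DocumentBuilder b = f.newDocumentBuilder();
        CollectingHandler h = new CollectingHandler();
        b.setErrorHandler(h); // errors are now collected, not printed
        // b.parse(xmlFile); -- afterwards, inspect h.errors in your code
    }
}
```

    After parsing, h.errors holds every [Error] line the parser would otherwise have written to the console, so the program can react to them.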

  • DataSocket Server loses connections after a few hours "Parsing URL"

    Hi there,
    I need some help cause the DataSocket server is doing some bad stuff...!
    The server runs on a WinNT 4.0 platform as a datalogger. It works fine, but after a few hours it seems the server loses the connection somehow; the DataSocket status of the front-panel elements says: "Connecting: Parsing URL".
    The server has about 5 DINT variables and about 3 clusters containing INT variables (size about 30).
    I have really run out of ideas about what happens on this "old" machine; maybe someone has had problems like mine and solved them....
    Thanks for hints and tips!

    Hello,
    what service packs for Windows NT have you installed?
    The DataSocket server crashes after a short period of time when the diagnostic window is open (Tools->Diagnostics). Disable the "Auto-Refresh" option in the Diagnostic dialog from the Options menu.
    regards
    P.Kaltenstadler
    National Instruments

  • Program to check existence of a url hangs!!!

    Here is my code to check whether a url exists ...
    import java.net.*;
    import java.io.*;
    import java.util.*;
    public class urlExists {
         urlExists() {
         }
         public int exists(String urlname) throws IOException {
              URL url = null;
              try {
                url = new URL(urlname);
              } catch (Exception e) {
              }
              HttpURLConnection connexion = (HttpURLConnection) url.openConnection();
              if (connexion.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND)
                      return (0);
              else return (1);
         }
    }
    I am using this class many times, but it keeps hanging for some reason or another. Could anyone please explain the reason?
    Thanks in advance.
    Regards
    kbhatia

    you will have to use a Socket:
    public String getResponse(String url) throws Exception {
            int timeout = 30000; // 30-second timeout
            Socket socket = null;
            BufferedReader reader = null;
            Writer writer = null;
            StringBuffer buf = new StringBuffer();
            try {
                URL server = new URL(url);
                int port = server.getPort();
                if (port < 0)
                    port = 80;
                socket = new Socket(server.getHost(), port);
                writer = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
                writer.write("GET " + server.toExternalForm() + " HTTP/1.0\r\n");
                writer.write("Host: " + server.getHost() + ':' + port + "\r\n\r\n");
                writer.flush();
                socket.setSoTimeout(timeout);
                reader = new BufferedReader(new InputStreamReader(socket.getInputStream(), "UTF-8"));
                String line = reader.readLine();
                if (line != null && line.startsWith("HTTP/")) {
                    int space = line.indexOf(' ');
                    String status = line.substring(space + 1, space + 4);
                    if (!status.equals("200"))
                        throw new Exception("HTTP error: " + status);
                    while ((line = reader.readLine()) != null)
                        buf.append(line).append("\n");
                } else {
                    throw new Exception("Bad protocol");
                }
            } catch (InterruptedIOException e) {
                throw new Exception("Read timeout expired");
            } finally {
                // close the reader, writer, and socket here
            }
            return buf.toString();
    }
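    On Java 5 and later a raw Socket is no longer necessary: HttpURLConnection gained setConnectTimeout and setReadTimeout, so the existence check can stay simple. A sketch (timeout value and use of HEAD are choices, not requirements):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlCheck {
    // Returns true when the server answers with anything other than 404.
    // The timeouts make the check fail fast instead of hanging.
    static boolean exists(String urlname, int timeoutMillis) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlname).openConnection();
        conn.setConnectTimeout(timeoutMillis); // Java 5+
        conn.setReadTimeout(timeoutMillis);
        conn.setRequestMethod("HEAD"); // no response body needed for an existence check
        try {
            return conn.getResponseCode() != HttpURLConnection.HTTP_NOT_FOUND;
        } finally {
            conn.disconnect();
        }
    }
}
```

    An unreachable host or a stalled server now surfaces as an IOException after the timeout rather than a hang.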

  • Parsing URL query parameters

    Hi all, I keep running into situations where I need to parse the parameters from the query string of a URL. I'm not using Servlets or anything like that.
    After much frustration with not being able to find a decent parser, I wrote one myself, and thought I would offer it to others who might be fighting the same thing. If you use these, please keep the source code comments in place!
    Here it is:
    /*
     * Written by: Kevin Day, Trumpet, Inc. (c) 2003
     * You are free to use this code as long as these comments
     * remain intact.
     *
     * Parse parameters from the query segment of a URL.
     * Pass in the query segment, and it returns a Map
     * containing entries for each name.
     * Each map entry is a List of values (it is legal to
     * have multiple name-value pairs in a URL query
     * that have the same name!).
     */
     static public Map getParamsFromQuery(String q) throws InvalidParameterException {
          /*
           * Query, q, can be of the form:
           * <blank>
           * name
           * name=
           * name="value"
           * name="value"&
           * name="value"&name2
           * name="value"&name2="value2"
           * name="value"&name="value"
           * name="value & more"&name2="value"
           */
          Map params = new HashMap();
          StringBuffer name = new StringBuffer();
          StringBuffer val = new StringBuffer();
          StringBuffer out = null;
          boolean inString = false;
          boolean readingName = true; // are we reading the name or the value?
          int i = -1;
          int qlen = q.length();
          out = name;
          while (i < qlen) {
               char c = ++i < qlen ? q.charAt(i) : '&';
               if (inString) {
                    if (c != '\"')
                         out.append(c);
                    else
                         inString = false;
               } else if (c == '&') {
                    String nameStr = cleanEscapes(name.toString());
                    String valStr = cleanEscapes(val.toString());
                    List valList = (List) params.get(nameStr);
                    if (valList == null) {
                         valList = new LinkedList();
                         params.put(nameStr, valList);
                    }
                    valList.add(valStr);
                    name.setLength(0);
                    val.setLength(0);
                    out = name;
               } else if (c == '=') {
                    out = val;
               } else if (c == '\"') {
                    inString = true;
               } else {
                    out.append(c);
               }
          }
          if (inString) throw new InvalidParameterException("Unexpected end of query string " + q + " - expected '\"' at position " + i);
          return params;
     }
     static private String cleanEscapes(String s) {
          try {
               return URLDecoder.decode(s, "UTF-8");
          } catch (UnsupportedEncodingException e) {
               e.printStackTrace();
          }
          return s;
     }
    You'll also need to create a new Exception class called InvalidParameterException.
    Cheers,
    - Kevin

    Because javax.servlet.* is not included in the
    standard Java distribution. I am writing my own
    mini-web server interface for applications and don't
    want to add that dependency nastiness just to get a
    parser...
    Sounds like an interesting project; have you thought about implementing the Servlet API (or a subset)? It is well known to developers, and I don't think it would require that much extra work, but that would depend on how mini "mini" is, of course =).
    OK, I was just curious.

  • Solution to parse URL query parameters

    I would like to parse query parameters in a URL GET request. I want to know if there is an efficient solution.
    e.g. http://www.google.com?search=red&query=blue
    I want to get the strings "search", "red", "query", "blue". I am not sure whether using StringTokenizer is an efficient solution.
    Thanks
    Jawahar

      StringTokenizer st = new StringTokenizer("http://www.google.com?search=red&query=blue", "?&=", true);
      Properties params = new Properties();
      String previous = null;
      while (st.hasMoreTokens()) {
         String current = st.nextToken();
         if ("?".equals(current) || "&".equals(current)) {
            // ignore
         } else if ("=".equals(current)) {
            params.setProperty(URLDecoder.decode(previous), URLDecoder.decode(st.nextToken()));
         } else {
            previous = current;
         }
      }
      params.store(System.out, "PARAMETERS");
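    An alternative to StringTokenizer that avoids the token bookkeeping: split on '&', then on the first '=' of each pair, URL-decoding both halves. A sketch (unlike a List-valued parser, repeated names here keep only the last value):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryParams {
    // Split on '&', then on the first '=' of each pair; decode both halves.
    static Map<String, String> parse(String query) throws UnsupportedEncodingException {
        Map<String, String> params = new LinkedHashMap<String, String>();
        if (query == null || query.length() == 0) return params;
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            String name = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1);
            params.put(URLDecoder.decode(name, "UTF-8"),
                       URLDecoder.decode(value, "UTF-8"));
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("search=red&query=blue"));
    }
}
```

    Pass in just the query segment (everything after the '?'), as in the original question's example.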

  • BPEL url hangs on Node2 in ACTIVE/PASSIVE setup

    Hi All,
    First of all, thanks in advance for your help!!!
    Question:
    =======
    a) What are files I need to change in BPEL for ACTIVE/PASSIVE setup?
    b) What are the exact entries to be changed?
    Background:
    =========
    HOST A and HOST B connected to shared device.
    Env --> Sun Solaris 10
    Database repository --> HOST C and 10.2.0.3.0
    BPEL hosts --> HOST A and HOST B, using BIG IP for VIP
    BPEL version --> 10.1.3.1.0
    1) Created BPEL repository on 10.2.0.3.0 -- Went fine
    2) Install J2ee and Web Services on shared device @ HOST A -- Went fine
    3) Install BPEL on shared device @HOST A--> went fine
    4) Able to do opmnctl startall/stopall on HOST A
    5) Able to access EM/BPELAdmin/BPELConsole on HOST A
    6) Able to do opmnctl startall/stopall on HOST B
    Issue
    ====
    The URL on HOST B hangs for EM and BPELConsole; however, BPELAdmin works fine.
    For EM and BPELConsole it asks for username and password, and the next screen hangs forever.
    Question:
    =======
    a) What are files I need to change for ACTIVE/PASSIVE setup?
    Note:
    ====
    I read stuff on Metalink and Google; however, there are NO exact steps for an ACTIVE/PASSIVE setup.
    I changed the file jgroups-protocol.xml, but in vain.
    Thanks
    Regards
    Natrajan

    Is anyone using Active-Passive in a BPEL environment?

  • Parse url

    Hi all,
    I am looking for some example code on how to select certain data out of a URL and save it to a text file. I can easily save the entire page to a text file, but I only need some of the data - for example, the information between tag "A" and tag "B".
    Much thanks in advance!
    Alex

    So what you want is not to parse a URL, but to parse HTML content and save it...
    Anyway, use some kind of HTML parsing library, like htmlparser - http://htmlparser.sourceforge.net/
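    If the page structure is simple and stable, a plain substring scan between two literal markers may be enough before reaching for an HTML parser. A sketch (the tag names are placeholders for whatever markers delimit the data):

```java
public class Extract {
    // Naive substring scan between two literal markers. Fine for a known,
    // stable page; a real HTML parser is safer in general.
    static String between(String page, String startTag, String endTag) {
        int start = page.indexOf(startTag);
        if (start < 0) return null;
        start += startTag.length();
        int end = page.indexOf(endTag, start);
        return end < 0 ? null : page.substring(start, end);
    }

    public static void main(String[] args) {
        System.out.println(between("<html><A>hello</A></html>", "<A>", "</A>")); // prints "hello"
    }
}
```

    The extracted string can then be written to the text file with the same code already used for saving the whole page.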

  • How to parse URL Data into an NSString Array in iphone application

    Hi everyone,
    I am new to iPhone programming and am having a problem with reading data and displaying it in a table view. My application has to be designed like this: there is a csv file on a server machine, and I have to access that URL line by line. Each line consists of 8 comma-separated values; consider that each line has a first name, second name, and so on. I have to split the data on commas and newlines and store it in arrays of first names, second names, and so on. Then I have to combine first name and second name and display them in the UITableView. Can anyone provide me with an example of how to do it? I know I am asking for the solution, but I ran into problems doing the connection handling and the parsing at the same time. Your help is much appreciated.
    Thanks

    What does that have to do with a URL?
    The only thing that doesn't sound good is "array of first name" and "second name array". For each row, extract all the fields and store them in an NSDictionary. Add a derived field consisting of first name concatenated with last name. That will be easy to display in a table.
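    The record-per-row idea the reply describes is platform-neutral; a sketch in Java (this thread's counterpart to NSDictionary; the column names are hypothetical, and the real file has 8 columns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvRow {
    // Hypothetical column names; the real file has 8 columns.
    static final String[] COLUMNS = { "firstName", "lastName" };

    // Turn one comma-separated line into a field map, plus a derived
    // display name -- one record per row, instead of parallel arrays.
    static Map<String, String> toRecord(String line) {
        String[] fields = line.split(",", -1);
        Map<String, String> rec = new LinkedHashMap<String, String>();
        for (int i = 0; i < COLUMNS.length && i < fields.length; i++)
            rec.put(COLUMNS[i], fields[i].trim());
        rec.put("displayName", rec.get("firstName") + " " + rec.get("lastName"));
        return rec;
    }

    public static void main(String[] args) {
        System.out.println(toRecord("Ada,Lovelace").get("displayName")); // prints "Ada Lovelace"
    }
}
```

    Each table row then reads its text straight from one record's derived field, which is exactly what the reply suggests doing with an NSDictionary.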
