Parsing URL issue

**** WARNING ADULT RELATED LINKS ON THIS ISSUE ****
I am having an issue with Firefox dropping part of a URL.
Example: http://trial.jenndoll.com/bonus.php?fc=2
Firefox strips the ?fc=2 query string from the above URL. I have tested other browsers and they handle it properly.

Did you ever find a way to provision a resource using SPML?
I'm facing the same problem at the moment.
Regards,
Tine

Similar Messages

  • DOMParser.parse(URL) hangs

    Anytime I call DOMParser.parse(URL) where the URL is of type "http://", the parse call hangs (as near as I can tell) indefinitely. Are URLs of this type not supported? Is there a workaround for this problem?

    No. Within the same class, the following DOES work:
    DOMParser dp = new DOMParser();
    dp.setErrorStream(new PrintWriter(errs));
    // Set Schema Object for Validation
    dp.setXMLSchema((XMLSchema)((new XSDBuilder()).build(schema.location)));
    Note that schema.location is a String like "http://www.wherever.com/file.xsd" which points to the web server that is hanging on DOMParser.parse(URL);
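    One workaround, if the hang is an HTTP read that never times out: fetch the document yourself through a java.net.HttpURLConnection with explicit timeouts and hand the resulting InputStream to the parser. A minimal sketch, assuming the XDK's DOMParser accepts an InputStream like the JAXP parsers do (the URL is a placeholder):
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import oracle.xml.parser.v2.DOMParser;
    public class FetchThenParse {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.wherever.com/file.xml"); // placeholder URL
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setConnectTimeout(5000); // fail fast instead of hanging on connect
            con.setReadTimeout(5000);    // fail fast instead of hanging on a blocked read
            InputStream in = con.getInputStream();
            try {
                DOMParser dp = new DOMParser();
                dp.parse(in); // parse the stream we fetched ourselves
                System.out.println(dp.getDocument().getDocumentElement().getNodeName());
            } finally {
                in.close();
            }
        }
    }
    This way a dead or unreachable web server surfaces as a SocketTimeoutException you can catch, instead of an indefinite hang inside parse(URL).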

  • Has anybody used DocumentBuilder.parse(URL)

    Hi friends,
    Has anybody used DocumentBuilder.parse(URL) to build a Document object?
    I have the following piece of code
    <code>
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = factory.newDocumentBuilder();
    System.out.println("| Start | Fetch xml"); // ------------ 1
    Document document = builder.parse("http://some url that gives back XML");
    System.out.println("| Stop | Fetch xml"); // ------------- 2
    </code>
    Now the problem is: once in a while the code will hang at point 1 and never reach point 2. Exception handling has been done, but no exceptions are being logged. The code simply hangs.
    Please let me know if you have also faced the same or a similar problem.
    Thanking in anticipation
    Partha.

    Does it behave the same with a file URL instead of an http URL?
    Use
    Document document = builder.parse("file:///c:/xmlFile");
    instead of
    Document document = builder.parse("http://some url that gives back XML");

  • DocumentBuilder timeout - parse(url)

    Is there any way to configure a timeout when parsing from a URL with DocumentBuilder?
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    final DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
    final DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
    Document document = docBuilder.parse(url);
    I can't seem to find a reference anywhere.

    tony_murphy wrote:
    Is there any way to configure a timeout when parsing from a URL with DocumentBuilder? I can't seem to find a reference anywhere.
    That is not really DocumentBuilder's job; DocumentBuilder can parse a number of different sources when building a Document.
    Try it another way, by parsing an InputStream. Check it out below:
    import java.io.*;
    import java.net.*;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    Document doc = null;
    try {
         URL url = new URL(strURL);     // URL to invoke
         HttpURLConnection urlCon = (HttpURLConnection) url.openConnection();     // Open a connection to the URL
         urlCon.setConnectTimeout(1000);     // Bound the connect as well; a read timeout alone won't catch a hang during connect
         urlCon.setReadTimeout(1000);     // Read timeout in milliseconds; here 1 second
         urlCon.connect();     // Connect
         InputStream iStream = urlCon.getInputStream();     // Open an InputStream to read from
         DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();     // Build the DocumentBuilderFactory
         DocumentBuilder builder = factory.newDocumentBuilder();     // Build the DocumentBuilder
         doc = builder.parse(iStream);     // Parse the InputStream
    } catch (Exception ex) {
         // TODO: Exception Handling
    }
    Note: Perform proper exception handling.
    Thanks,
    Tejas

  • DataSocket Server loses connections after a few hours: "Parsing URL"

    Hi there,
    I need some help because the DataSocket server is doing some bad stuff...!
    The server runs on a WinNT 4.0 platform as a datalogger. It works fine, but after a few hours the server seems to lose its connections somehow; the DataSocket status of the front-panel elements says: "Connecting: Parsing URL".
    The server has about 5 DINT variables and about 3 clusters containing INT variables (size about 30).
    I have really run out of ideas about what happens on this "old" machine. Maybe someone has had problems like mine and solved them...
    Thanks for any hints and tips!

    Hello,
    which service packs for Windows NT have you installed?
    The DataSocket server crashes after a short period of time when the diagnostics window is open (Tools->Diagnostics). Disable the "Auto-Refresh" option in the Diagnostics dialog from the Options menu.
    regards
    P.Kaltenstadler
    National Instruments

  • ABAP Web Dynpro URL issue

    Hello All,
    I have created a Web Dynpro for calling different tables, with F4 help, dynamically through an ALV Web Dynpro. I can update, delete, and insert rows in the ALV for multiple tables with one Web Dynpro program.
    Now the problem is that the user wants a URL for each table, kept in the portal, all using this one Web Dynpro program; that means I need to create more than 20 URLs and keep them in the portal with the current Web Dynpro program.
    Could you please suggest a solution for this issue?
    Thanks in advance.
    Regards
    Sri

    You could try passing the table name as a URL parameter,
    i.e.
    http://<server>:<port>/sap/bc/webdynpro/sap/<app_name>?table=MARA
    This can be read in your WD application and used to determine the table to display. There is a heap of content out there on the internet that explains how to do this.
    https://help.sap.com/saphelp_nw04s/helpdata/en/2f/e7574174c58547e10000000a1550b0/content.htm
    Regards,
    Katan

  • SOAP.request URL issue

    I'm using the web service script below in my PDF, and it is working fine.
    But if the URL is wrong, the code below does not execute the remaining statements. If the URL is wrong or the server is down, I have to show a message. How should I catch that error and show the right message?
    If the request is fine, I get the resultResp value.
    Please help me fix this issue.
    var response = SOAP.request({
        cURL: cURL,
        oRequest: { "http://www.ibm.com/industries/financialservices/": { my: MYMessage } },
        cAction: "http://services.com/AppService/AppInvocation",
        oRespHeader: oResultHeader
    });
    var responseString = "http://services.com/message:MYMessage";
    var resultResp = response["MYMessage"]["response"];

    Hi,
    Any ideas about this, please?
    Thanks,

  • RNIF URL Issue

    Hi All,
    I have an issue in one of my RNIF scenarios.
    When I try to ping my customer's URL, I get the following error:
    ALSO TO BE NOTED: WE DON'T USE ANY DIGITAL CERTIFICATES.
    There is a problem with this website's security certificate.
    The security certificate presented by this website was issued for a different website's address.
    Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.
      We recommend that you close this webpage and do not continue to this website.
      Click here to close this webpage.
      Continue to this website (not recommended).
         More information
    If you arrived at this page by clicking a link, check the website address in the address bar to be sure that it is the address you were expecting.
    When going to a website with an address such as https://example.com, try adding the 'www' to the address, https://www.example.com.
    If you choose to ignore this error and continue, do not enter private information into the website.
    For more information, see "Certificate Errors" in Internet Explorer Help.
    Could anyone please help me out here?

    It is very clear that the certificate being used by that server is not the genuine one. It might have been issued to some other server, and they must have altered a few fields to make it usable on that server.
    Ask the client to install a genuine certificate on the server.
    -Vijendra

  • GO URL issue - no value passed through in filter

    hi,
    I have an issue with a GO URL.
    I insert the following formula, but the filter is not applied (the value does not come through).
    In the target report I made a condition on "- Invoice date HIR"."INV Month" set to "is prompted"; still nothing.
    It does navigate and opens the target report with Options=md, but the filter is not applied.
    I think I went through most of the posts on the forum but am still unable to correct the code.
    I'd appreciate any suggestions.
    '<a href=saw.dll?GO&path=/shared/DRILL_DOWN
    &Options=md
    &Action=Navigate
    &P0=1
    &P1=eq
    &P2="- Invoice date HIR"."INV Month"
    &P3='||Periods."Month"||'
    style="text-decoration:none;">'
    ||'LINK'
    ||
    '</a>'

    hi
    changed;
    now the code looks as follows, but OBI will not accept it at all as a formula
    ''||'LINK'||''
    i get an error
    [nQSError: 10058] A general error has occurred. [nQSError: 27002] Near <=>: Syntax error [nQSError: 26012] . (HY000) ...
    however, I kicked out this 'style...' part entirely to make things easier
    ''||'LINK'||''
    it does drill, but with the previous error (now without 'decoration'...)
    Error getting drill information: SELECT "- Invoice date HIR"."INV Month" saw_0, FROM "TEST" WHERE "- Invoice date HIR"."INV Month" = ''||Periods."Month"||''

  • RSS 2.0 parser loop (issue with load unload)

    Hello,
    I'm using a GNU class from
    http://sunouchi.31tools.com/ASRssParser/#usage
    to parse RSS 2.0 files.
    In the example on the site it parses one file, but I need to parse several files and put the results in an array.
    I'm not an experienced AS writer, but I just put a loop around rssObj.load.
    The onLoad function should be called when the fetching is done.
    The result: now I only get the results from the last RSS...
    Any ideas for solving this?
    Cheers
    code:
    // import package
    import com.cybozuLab.rssParser.*;
    // create FetchingRss class instances
    // set the target RSS URLs
    var xmlsource = new Array();
    xmlsource[0] = "http://www.nieuwsblad.be/Regio/WestVlaanderen/feed.xml";
    xmlsource[1] = "http://www.sportwereld.be/Tennis/feed.xml";
    xmlsource[2] = "http://rss.vrtnieuws.net/nieuwsnet_master/versie2/systeem/rss/nnII_nieuws_hoofdpunten/index.xml";
    for (i = 0; i < 3; i++) {
        trace(xmlsource[i]);
        var rssObj = new FetchingRss(xmlsource[i]); // change existing URL
        var thisObj = this;
        // define the function called when loading is completed
        rssObj.onLoad = function(successFL, errMsg) {
            if (successFL) {
                // call function for listing the summary
                thisObj.listSummary();
            } else {
                trace(errMsg);
            }
        };
        // start loading
        rssObj.load();
    }
    function listSummary() {
        var rssData:Object = rssObj.getRssObject();
        for (var i = 0; i < rssData.channel.item.length; i++) {
            var post:Object = rssData.channel.item[i];
            trace(post.title.value);
            trace(post.description.value.substr(0, 60) + "\n");
        }
    }

    Here's a link for boot-loop recovery.
    Since it's a new install, you probably don't have anything to recover, though:
    https://supportforums.cisco.com/docs/DOC-26689#comment-14559
    Reading the system requirements for the Express version, one would think any version of ESXi is supported.
    Looking at the system requirements for the Standard and Pro versions, though, it appears 5.1 is the latest supported version.
    VMware version:
    Express - ESXi 4.1 or later
    Standard - ESXi 5 or ESXi 5.1
    Pro - ESXi 5 or ESXi 5.1
    Try with ESXi 5.1 and I'll bet it works!

  • Parsing URL query parameters

    Hi all, I keep running into situations where I need to parse the parameters from the query string of a URL. I'm not using Servlets or anything like that.
    After much frustration with not being able to find a decent parser, I wrote one myself, and thought I would offer it to others who might be fighting the same thing. If you use these, please keep the source code comments in place!
    Here it is:
    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;
    import java.util.*;
    /*
     * Written by: Kevin Day, Trumpet, Inc. (c) 2003
     * You are free to use this code as long as these comments
     * remain intact.
     *
     * Parse parameters from the query segment of a URL.
     * Pass in the query segment, and it returns a Map
     * containing an entry for each name.
     * Each map entry is a List of values (it is legal to
     * have multiple name-value pairs in a URL query
     * that have the same name!)
     */
    static public Map getParamsFromQuery(String q) throws InvalidParameterException {
         /*
          * The query, q, can be of the form:
          * <blank>
          * name
          * name=
          * name="value"
          * name="value"&
          * name="value"&name2
          * name="value"&name2="value2"
          * name="value"&name="value"
          * name="value & more"&name2="value"
          */
         Map params = new HashMap();
         StringBuffer name = new StringBuffer();
         StringBuffer val = new StringBuffer();
         StringBuffer out = null;
         boolean inString = false;
         int i = -1;
         int qlen = q.length();
         out = name;   // start by reading the name
         while (i < qlen) {
              char c = ++i < qlen ? q.charAt(i) : '&';   // treat end-of-string as a final '&'
              if (inString) {
                   if (c != '\"')
                        out.append(c);
                   else
                        inString = false;
              } else if (c == '&') {
                   String nameStr = cleanEscapes(name.toString());
                   String valStr = cleanEscapes(val.toString());
                   List valList = (List) params.get(nameStr);
                   if (valList == null) {
                        valList = new LinkedList();
                        params.put(nameStr, valList);
                   }
                   valList.add(valStr);
                   name.setLength(0);
                   val.setLength(0);
                   out = name;
              } else if (c == '=') {
                   out = val;
              } else if (c == '\"') {
                   inString = true;
              } else {
                   out.append(c);
              }
         }
         if (inString) throw new InvalidParameterException("Unexpected end of query string " + q + " - Expected '\"' at position " + i);
         return params;
    }
    static private String cleanEscapes(String s) {
         try {
              return URLDecoder.decode(s, "UTF-8");
         } catch (UnsupportedEncodingException e) {
              e.printStackTrace();
         }
         return s;
    }
    You'll also need to create a new Exception class called InvalidParameterException.
    Cheers,
    - Kevin

    Because javax.servlet.* is not included in the standard Java distribution. I am writing my own mini-web server interface for applications and don't want to add that dependency nastiness just to get a parser...
    Sounds like an interesting project. Have you thought about implementing the Servlet API (or a subset)? It is well known to developers, and I don't think it would require that much extra work, but that would depend on how mini "mini" is, of course =).
    OK, I was just curious.
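    For reference, a minimal sketch of a call site, assuming this main method is added to the same class that holds getParamsFromQuery (the query string and the printing loop are illustrative, not from the original post):
    public static void main(String[] args) throws InvalidParameterException {
        // Two values for "name", plus one quoted value containing '&' and '='
        Map params = getParamsFromQuery("name=a&name=b&q=\"x=1 & y=2\"");
        // Prints each name with its List of values:
        //   q -> [x=1 & y=2]
        //   name -> [a, b]   (iteration order of HashMap is unspecified)
        for (Object o : params.entrySet()) {
            Map.Entry e = (Map.Entry) o;
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }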

  • Solution to parse URL query parameters

    I would like to parse query parameters in a URL GET request. I want to know if there is an efficient solution.
    e.g. http://www.google.com?search=red&query=blue
    I want to get the strings "search", "red", "query", "blue". I am not sure whether a StringTokenizer is the most efficient solution.
    Thanks
    Jawahar

          StringTokenizer st = new StringTokenizer("http://www.google.com?search=red&query=blue", "?&=", true);
          Properties params = new Properties();
          String previous = null;
          while (st.hasMoreTokens()) {
             String current = st.nextToken();
             if ("?".equals(current) || "&".equals(current)) {
                // ignore
             } else if ("=".equals(current)) {
                params.setProperty(URLDecoder.decode(previous), URLDecoder.decode(st.nextToken()));
             } else {
                previous = current;
             }
          }
          params.store(System.out, "PARAMETERS");
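    A possibly simpler alternative to the tokenizer, a minimal sketch using String.split and the two-argument URLDecoder (the URL is the one from the question; note this keeps only the last value when a name repeats):
    import java.net.URLDecoder;
    import java.util.LinkedHashMap;
    import java.util.Map;
    public class QuerySplit {
        public static void main(String[] args) throws Exception {
            String url = "http://www.google.com?search=red&query=blue";
            Map<String, String> params = new LinkedHashMap<String, String>();
            int q = url.indexOf('?');
            if (q >= 0) {
                // Split the query segment on '&', then each pair on the first '='
                for (String pair : url.substring(q + 1).split("&")) {
                    int eq = pair.indexOf('=');
                    String name = eq >= 0 ? pair.substring(0, eq) : pair;
                    String value = eq >= 0 ? pair.substring(eq + 1) : "";
                    params.put(URLDecoder.decode(name, "UTF-8"),
                               URLDecoder.decode(value, "UTF-8"));
                }
            }
            System.out.println(params); // {search=red, query=blue}
        }
    }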

  • Parse URL

    Hi all,
    I am looking for some example code on how to select certain data out of a URL and save it to a text file. I can easily save the entire page to a text file, but I only need some of the data - for example, the information between tag "A" and tag "B".
    Much thanks in advance!
    Alex

    So what you want is not to parse a URL, but to parse HTML content and save it...
    Anyway, use some kind of HTML parsing library, like htmlparser - http://htmlparser.sourceforge.net/
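    If a full HTML parser feels like overkill, a minimal sketch of the substring approach (the URL and the "A"/"B" markers are placeholders for whatever actually delimits the data in the page):
    import java.io.BufferedReader;
    import java.io.FileWriter;
    import java.io.InputStreamReader;
    import java.net.URL;
    public class ExtractBetween {
        public static void main(String[] args) throws Exception {
            // Read the whole page into memory (placeholder URL)
            URL url = new URL("http://example.com/page.html");
            StringBuilder page = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                page.append(line).append('\n');
            }
            in.close();
            // Keep only the text between the two markers (placeholders)
            String start = "<A>", end = "<B>";
            int from = page.indexOf(start);
            int to = page.indexOf(end, from + start.length());
            if (from >= 0 && to > from) {
                FileWriter out = new FileWriter("extract.txt");
                out.write(page.substring(from + start.length(), to));
                out.close();
            }
        }
    }
    This breaks as soon as the markup varies, which is why the HTML-parser suggestion above is the more robust route.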

  • Parse RSS issue

    I've been trying to parse this RSS feed:
    http://www.economicnews.ca/cepnews/wire/rss/custom?u=camagazine&p=39d7g7d9
    Here is the code I'm using:
    <!--- Retrieve the RSS document --->
    <cfhttp url="http://www.economicnews.ca/cepnews/wire/rss/custom?u=camagazine&p=39d7g7d9"
    method="get">
    <cfhttpparam type="Header" name="Accept-Encoding" value="deflate;q=0">
    <cfhttpparam type="Header" name="TE" value="deflate;q=0">
    </cfhttp>
    <!--- Validation flag --->
    <cfset XMLVALIDATION = true>
    <cftry>
    <!--- Create the XML object --->
    <cfset objRSS = xmlParse(cfhttp.filecontent)>
    <cfcatch type="any">
    <!--- If the document retrieved in the CFHTTP
    is not valid set the validation flag to false. --->
    <cfset XMLVALIDATION = false>
    </cfcatch>
    </cftry>
    <cfif XMLVALIDATION>
    <!--- If the validation flag is true continue parsing
    --->
    <!--- Set the XML Root --->
    <cfset XMLRoot = objRSS.XmlRoot>
    <!--- Retrieve the document META data --->
    <cfset doc_title = XMLRoot.channel.title.xmltext>
    <cfset doc_link = XMLRoot.channel.link.xmltext>
    <cfset doc_description =
    XMLRoot.channel.description.xmltext>
    <cfset doc_content = XMLRoot.channel.content.xmltext>
    <!--- Output the meta data in the browser --->
    <!-- <cfoutput>
    <b>Title</b>: #doc_title#<br/>
    <b>Link</b>: #doc_link#<br/>
    <b>Description</b>:
    #doc_description#<br/><br/>
    </cfoutput> -->
    <!--- Retrieve the number of items in the channel --->
    <cfset Item_Length = arraylen(XMLRoot.channel.item)>
    <!--- Loop through all the items --->
    <cfloop index="itms" from="1" to="2">
    <!--- Retrieve the current Item in the loop --->
    <cfset tmp_Item = XMLRoot.channel.item[itms]>
    <!--- Retrieve the item data --->
    <cfset item_title = tmp_item.title.xmltext>
    <cfset item_link = tmp_item.link.xmltext>
    <cfset item_description = tmp_item.description.xmltext>
    <cfset item_content = tmp_item.content.xmltext>
    <!--- Output the items in the browser --->
    <cfoutput>
    <a href="#item_link#"
    target="_blank"><strong>#item_title#</strong></a><br/>
    #item_description#<br/><br/><br />
    #item_content#
    </cfoutput>
    </cfloop>
    <cfelse>
    <!--- If the validation flag is false display error
    --->
    Invalid XML/RSS object!
    </cfif>
    But it gives me the following error:
    Element CHANNEL.CONTENT.XMLTEXT is undefined in XMLROOT.
    The error occurred in
    E:\inetpub\wwwroot\cica\shane_upload_folder\rss_parse.cfm: line 31
    29 : <cfset doc_link = XMLRoot.channel.link.xmltext>
    30 : <cfset doc_description =
    XMLRoot.channel.description.xmltext>
    31 : <cfset doc_content =
    XMLRoot.channel.content.xmltext>
    I've checked and re-checked, I've done dumps and the content
    is there, so I'm not sure what the heck it doesn't like. Thoughts
    anyone?

    SirPainkiller wrote:
    > Ohhh I see. Ok, so if we look under ITEM there is title, link, description, content:encoded, etc. So now the question is how do I reference the content:encoded that is under item?
    >
    > Shane
    When accessing that type of RSS feed, you need to use array notation rather than dot notation to get at the elements.
    I.E.
    <cfset doc_content = XMLRoot['channel']['item']['content:encoded']['xmltext']>

  • Calling Webservice from Adobe form - Webservice URL issue

    Dear Friends,
      I have developed a web service and am calling it from an Adobe form. I downloaded the WSDL file from transaction SOAMANAGER. When I create a data connection from the Adobe form I use this WSDL file; the form elements get created automatically, and I drag and drop them into the form. The issue is that when I click the Submit (Execute) button, the web service URL always points to the client from which the WSDL file was downloaded, and it is hard-coded (for example, if I download the WSDL file from client 300, the URL would be http://<location>/sap/bc/srt/rfc/sap/z_web_getmat/300/z_web_getmat/z_web_getmat). So if I execute the form from client 200, it does not work. How do I make this URL dynamic so that the web service gets executed in the client from which the form is called? Please advise.
    Regards
    Sapient

    Hi,
    You have to handle it in your form on submit button calling the web service:
    Write below java script code at submit event to change the URL at run time:
    var tempsoapAddress = xfa.connectionSet.DataConnection.getElement("soapAddress").value;
    var tempwsdladdress = xfa.connectionSet.DataConnection.getElement("wsdlAddress").value;
    var ServerPath = body.systemConfig.system.rawValue;
    var client = <Get Client from a data attribute>
    var Soap_PreServerPort =  "http://";
    var Soap_PostServerPort =  "/sap/bc/soap/rfc?sap-client=";
    var SoapAddress = Soap_PreServerPort + ServerPath + Soap_PostServerPort + client ;
    var Wsdl_PreServerPort =  "http://";
    var Wsdl_PostServerPort1 =  "/sap/bc/soap/wsdl11?services=ZBAPI_PO_CREATE2&amp;sap-client=";
    var wsdlAddress =  Wsdl_PreServerPort + ServerPath + Wsdl_PostServerPort1 + client ;
    xfa.connectionSet.DataConnection.getElement("soapAddress").value = SoapAddress;
    xfa.connectionSet.DataConnection.getElement("wsdlAddress").value = wsdlAddress;
    xfa.connectionSet.DataConnection.execute(0);
    xfa.connectionSet.DataConnection.getElement("soapAddress").value = tempsoapAddress;
    xfa.connectionSet.DataConnection.getElement("wsdlAddress").value = tempwsdladdress;
    xfa.connectionSet.DataConnection = null;
    Change the variable wsdlAddress as per your requirement. The above code is just a sample.
    To get the client, pass it in a data-source variable attribute at the time of downloading the form. At submit time, get the value of that variable and use it to build the URL.
    Regards,
    Vaibhav
