Getting a String from a web page

Hi there, I want to read a String from a web page and store it as a String in my Java program so I can manipulate it. Here is the web page:
ftp://weather.noaa.gov/data/observations/metar/stations/EGBB.TXT
Does anybody have an idea where I should start with such an operation? Is it even possible?
Thank you!
oookiezooo

1) If you have a server, you can put a server-side script up that returns the time quite easily. From Director, you would use getNetText() to pull it down.
2) If you click on the stage where there are no sprites, there will be a Display Template tab in the Property Inspector. There are your Titlebar options. Uncheck the ones you don't want to see (for example, "Close Box").

Similar Messages

  • Capture a string from a web page

    Is it possible, after opening a web browser, to capture a string and bring it into LabVIEW? I want to open our company's intranet web page, go to our employee locator facility, capture an email address, and import it into a LabVIEW app. Will this work? I've opened the browser; now I just need to know how to capture the string. If I just try copy/paste, it obviously won't work. Any suggestions?

    Take a look at this example. With this you just need to parse the HTML.

  • Extracting info from a web page

    Hi,
         I'm not sure if I'm asking this question in the right forum.
    Can anyone tell me if there is a way to extract data from a web page?
    For example, a web site like Yahoo displays stock quotes
    or NASDAQ values updated almost in real time.
    If I want to get that information from the web page into one
    of my applications, say, something that uses that data, is there
    a way to do it?
    Just curious

    Yes, it's possible. You can use the java.net.URL object to connect to websites and download the HTML. The coding is not that easy, and you should also be mindful of not redistributing data you've gotten from another site without permission.
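
    A minimal sketch of that java.net.URL approach; the URL and the regular expression below are only placeholders, since the real page address and markup depend on the site you are actually reading:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class QuoteScraper {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; point this at the page that actually carries the value you want
            URL url = new URL("http://example.com/quotes?symbol=XYZ");

            // Download the HTML into a String
            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append('\n');
                }
            }

            // Hypothetical markup: a <span class="price"> element holding the value
            Matcher m = Pattern.compile("<span class=\"price\">([^<]+)</span>").matcher(html);
            if (m.find()) {
                System.out.println("Quote: " + m.group(1));
            }
        }
    }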

  • Help: getting data from a web page

    I have a JSP page which generates some strings. I pass these strings into a login page on some server. The web page displays my login status. Is it possible to read or get data from that web page?
    I have captured the header of a web page and am modifying the header based on my generated strings.
    Or, to put it simply: is it possible to read what's in a web page into a JSP page?
    Thanks in advance for any help, assistance, or redirection to a source where I can find help.

    Hi,
    Sorry for a poorly framed question. This is what I'm trying to do:
    I call Google with a generated header.
    Now I want to read the content of the Google search result page back onto my JSP page.
    Possible?
    first.jsp calls Google; I'm using a redirect (URL).
    The URL is modified based on user input.
    Now I want the links on the Google page to be put up in my page itself, so I want to read the links there...
    Message was edited by:
    on_track
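
    One way to get the links into your own page instead of redirecting is to fetch the result HTML server-side and pull the href values out yourself. The sketch below is only an illustration: the search URL is a placeholder, the regex is crude, and you should check the target site's terms before scraping it.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LinkFetcher {
        public static List<String> fetchLinks(String query) throws Exception {
            // Placeholder search URL; substitute the address you currently redirect to
            URL url = new URL("http://example.com/search?q=" + URLEncoder.encode(query, "UTF-8"));

            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append('\n');
                }
            }

            // Crude href extraction; enough to hand the links to a JSP for display
            List<String> links = new ArrayList<String>();
            Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(html);
            while (m.find()) {
                links.add(m.group(1));
            }
            return links;
        }
    }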

  • How to get the return values from a web page

    Hi all,
       How do I get the return values from a web page? I mean, how do I pass values between the webflow and the web page?
    Thank you very much
    Edited by: jingying Sony on Apr 15, 2010 6:15 AM
    Edited by: jingying Sony on Apr 15, 2010 6:18 AM

    Hi,
    What kind of web page do you have? Do you have the possibility to, for example, make RFCs? Then you could trigger events (with parameters that could "return" the values) and the workflow could react to those events. For example, your task can have terminating events.
    Regards,
    Karri

  • New Mac, fresh install of Mountain Lion: when I click to open a .pdf from a web page while in Safari, I get a black window; nothing opens in Preview or Acrobat; no option to download

    New Mac, fresh install of Mountain Lion.
    When I click to open a .pdf from a web page while in Safari, I get a black window.
    Nothing opens in Preview or in Acrobat.
    No option to download.

    Open the Finder. From the Finder menu bar click Go > Go to Folder
    Type or copy/paste the following:
    /Library/Internet Plug-Ins
    Click Go. If you see this file:  AdobePDFViewer.plugin
    Drag it to the Trash, empty the Trash.
    Quit and relaunch Safari.

  • When I click on a link from a web page opened in Safari I get a blank black screen. Why can't I see the contents of the page?

    When I click on a link from a web page opened in Safari I get a blank black screen. Why can't I see the contents of the page?

    Is this any link in any page, or one particular link in a particular page?

  • Why can't I get automatic translations of foreign web pages like I do in Chrome?

    Why can't I get automatic translations of foreign web pages in Firefox like I do in Chrome?

    You can look at extensions like these:
    *FoxLingo: https://addons.mozilla.org/firefox/addon/foxlingo-translator-dictionary/
    *ImTranslator: https://addons.mozilla.org/firefox/addon/imtranslator/
    *Translate This!: https://addons.mozilla.org/firefox/addon/translate-this/

  • I just downloaded Acrobat Pro 10 from Acrobat's web page; it asks me for a serial number, which I don't have. How can I get it?

    I just downloaded Acrobat Pro 10 from Acrobat's web page; it asks me for a serial number, which I don't have. How can I get it?

    Hi Samantha,
    Please refer to the following link to get your serial number.
    https://helpx.adobe.com/x-productkb/global/find-serial-number.html
    Regards
    Sukrit Dhingra

  • How can I use Automator or AppleScript to get text from a web page and paste it into Excel?

    I don't know how to make scripts or complex Automator workflows... that's why I'm asking.
    I'm trying to make a simple app or script that asks me what text to extract from a web page, like the name, address and phone number on a web page, and pastes each of these into the right cell in Excel.
    I was thinking of prompting a request from Automator or an AppleScript to ask me which text to extract from the page, or to look through the HTML of the page for specific HTML tags, extract the text from them, and then import or paste it into the specified Excel cell: name in the name cell, address in the address cell, and so on.
    Can somebody help me make this script?
    If you know an alternative, like a piece of software that already does this or another language to use, please tell me.

    Try holding down the alt key as you mark the text to be copied. You can then copy columns to table text.

  • Posting an XML Variable to a BLS Transaction from a web page

    I am working in xMII 11.5 with all the latest service packs. I know this is probably a really basic question but I am stumped.
    I am trying to pass a multi-row, multi-column XML data set from a web page into a BLS transaction (actually, two of them) in order to populate a parameter. I want to use the Web Service interface to the transaction. I have also tried using parameters on an Xacute Query in an applet, with an equal lack of success.
    I cannot persuade the transaction to see the incoming variable as an XML data type. I have tried encoding and decoding and string-to-XML conversions, and nothing seems to allow the data set to be seen within the BLS as anything but a string. The String to XML action will not handle the number of columns in the data set, though it seems to work if the data set has only one column. The data set is formatted in the proper "Rowsets/Rowset/Row" format.
    I have considered writing the data to an XML file on the server (I know I can deal with that), but that is not acceptable in this application.
    Can someone share the secret with me?
    ...Sparks

    Parameter value:
    r1d1,r1d2,r1d3;r2d1,r2d2,r2d3;r3d1,r3d2,r3d2
    Pass thru String List to Xml Parser with delim ";"
    <Row>
    <Item>r1d1,r1d2,r1d3</Item>
    </Row>
    <Row>
    <Item>r2d1,r2d2,r2d3</Item>
    </Row>
    <Row>
    <Item>r3d1,r3d2,r3d2</Item>
    </Row>
    Repeat on each row/item and pass thru String List to Xml Parser with delim ","
    <Row>
    <Item>r1d1</Item>
    </Row>
    <Row>
    <Item>r1d2</Item>
    </Row>
    <Row>
    <Item>r1d3</Item>
    </Row>
    Of course, your columns aren't flat, but they are easy to reference. To get "column 2", for example:
    StringListToXml_1.Output{/Rowsets/Rowset/Row[2]/Item}
    So now you have rows and columns. Assign your data to your BAPI, structured as needed.
    We have passed complex XML via the SOAP interface in 11.5, but it involved some "hacks". Basically we passed the serialized XML via a String type parameter, and then deserialized it inside the BLT.
    I have been told on this board that there is a solution for passing XML data via the SOAP interface using ref docs, but I have never personally seen a working example.
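
    To illustrate the split-twice structure that delimited parameter carries (this is plain Java just for illustration, not the xMII String List to Xml Parser action itself):

    public class DelimitedRowset {
        public static void main(String[] args) {
            // The delimited parameter value from the example above
            String param = "r1d1,r1d2,r1d3;r2d1,r2d2,r2d3;r3d1,r3d2,r3d2";

            String[] rows = param.split(";");          // first pass: one entry per row
            for (int r = 0; r < rows.length; r++) {
                String[] cols = rows[r].split(",");    // second pass: one entry per column
                for (int c = 0; c < cols.length; c++) {
                    System.out.println("row " + (r + 1) + ", column " + (c + 1) + " = " + cols[c]);
                }
            }
        }
    }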

  • How to integrate IAS with Snacktory to get the main text from an HTML page

    Hi All,
    I am new to Endeca and IAS. I have a requirement: I need to get the main text from the whole HTML page before IAS saves the text to the Endeca_Document_Text property.
    Since IAS saves all the text on the page to Endeca_Document_Text, it is not readable when shown in a web page, so I use a third-party API to filter out the main text from the original page;
    now I want to save that text to the Endeca_Document_Text property.
    Another question:
    I get zero pages when doing the main-text filtering on the original HTML in my ParseFilter (HTMLMetatagFilter implements ParseFilter) using Snacktory.
    If it only does a little work, it runs fine; if it does more, the crawler fails to crawl the page. Does anyone know how to fix this?
    Log for the crawler:
    Successfully set recordstore configuration.
    INFO    2013-09-03 00:56:42,743    0    com.endeca.eidi.web.Main    [main]    Reading seed URLs from: /home/oracle/oracle/endeca/IAS/3.0.0/sample/myfirstcrawl/conf/endeca.lst
    INFO    2013-09-03 00:56:42,744    1    com.endeca.eidi.web.Main    [main]    Seed URLs: [http://www.liferay.com/community/forums/-/message_boards/category/]
    INFO    2013-09-03 00:56:43,497    754    com.endeca.eidi.web.db.CrawlDbFactory    [main]    Initialized crawldb: com.endeca.eidi.web.db.BufferedDerbyCrawlDb
    INFO    2013-09-03 00:56:43,498    755    com.endeca.eidi.web.Crawler    [main]    Using executor settings: numThreads = 100, maxThreadsPerHost=1
    INFO    2013-09-03 00:56:44,163    1420    com.endeca.eidi.web.Crawler    [main]    Fetching seed URLs.
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:52,890    10147    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:59,184    16441    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,058    24315    com.endeca.eidi.web.Crawler    [main]    Seeds complete.
    INFO    2013-09-03 00:57:07,090    24347    com.endeca.eidi.web.Crawler    [main]    Starting crawler shut down
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Waiting for running threads to complete
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Progress: Level: Cumulative crawl summary (level)
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    host-summary: www.liferay.com to depth 1
    host    depth    completed    total    blocks
    www.liferay.com    0    0    1    1
    www.liferay.com    1    0    0    0
    www.liferay.com    all    0    1    1
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    host-summary: total crawled: 0 completed. 1 total.
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    Shutting down CrawlDb
    INFO    2013-09-03 00:57:07,160    24417    com.endeca.eidi.web.Crawler    [main]    Progress: Host: Cumulative crawl summary (host)
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]   Host: www.liferay.com:  0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Progress: Perf: All (cumulative) 23.6s. 0.0 Pages/s. 0.0 kB/s. 0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Crawl complete.
    ~/oracle/endeca
    ======================================
    Source code for the ParseFilter:
    package com.endeca.eidi.web.parse;

    import java.util.Map;
    import java.util.Properties;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.log4j.Logger;
    import org.apache.nutch.metadata.Metadata;
    import org.apache.nutch.parse.HTMLMetaTags;
    import org.apache.nutch.parse.Parse;
    import org.apache.nutch.parse.ParseData;
    import org.apache.nutch.parse.ParseFilter;
    import org.apache.nutch.protocol.Content;
    import de.jetwick.snacktory.ArticleTextExtractor;
    import de.jetwick.snacktory.JResult;

    public class HTMLMetatagFilter implements ParseFilter {

        public static String METATAG_PROPERTY_NAME_PREFIX = "Endeca.Document.HTML.MetaTag.";
        public static String CONTENT_TYPE = "text/html";

        private static final Logger logger = Logger.getLogger(HTMLMetatagFilter.class);

        public Parse filter(Content content, Parse parse) throws Exception {
            logger.info("come into EndecaHtmlParser getParse");
            logger.info("come into HTMLMetatagFilter");

            // Update the content with the main text in the HTML page
            //content.setContent(HtmlExtractor.extractMainContent(content));
            parse.getData().getParseMeta().add("FILTER-HTMLMETATAG", "ACTIVE");

            ParseData parseData = parse.getData();
            if (parseData == null) return parse;

            extractText(content, parse);
            logger.info("update the content with the main text content");
            return parse;
        }

        private void extractText(Content content, Parse parse) {
            try {
                ParseData parseData = parse.getData();
                if (parseData == null) return;
                Metadata md = parseData.getParseMeta();

                // Let Snacktory pull the main article text out of the raw HTML
                ArticleTextExtractor extractor = new ArticleTextExtractor();
                String sourceHtml = new String(content.getContent());
                JResult res = extractor.extractContent(sourceHtml);
                String text = res.getText();
                md.set("Endeca_Document_Text", text);
            } catch (Exception e) {
                // Log instead of swallowing the exception silently
                logger.error("Failed to extract main text with Snacktory", e);
            }
        }

        public static void log(String msg) {
            System.out.println(msg);
        }

        public Configuration getConf() {
            return null;
        }

        public void setConf(Configuration conf) {
        }
    }

    But it only extracts URLs from <A> (anchor) tags. I want to be able to extract URLs from <MAP> tags as well.
    Gee, do you think you could modify the code to check for "MAP" attributes as well?
    Can someone maybe point me to a page containing info on the HTML toolkit?
    It's called the API. Since you are using the HTMLEditorKit, an ElementIterator and an AttributeSet, I would start there.
    There is no such API that says "get me all the links", so you have to do a little work on your own.
    Maybe you could use a ParserCallback, and every time you get a new tag, check for the "href" attribute.
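
    A minimal sketch of that ParserCallback idea; handleSimpleTag also catches the <AREA> hrefs that live inside a <MAP> block. The class name and the local-file input are only for illustration:

    import java.io.FileReader;
    import java.io.Reader;
    import javax.swing.text.MutableAttributeSet;
    import javax.swing.text.html.HTML;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;

    public class LinkCollector extends HTMLEditorKit.ParserCallback {

        // Container tags such as <A> and <MAP> arrive here
        @Override
        public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
            printHref(t, a);
        }

        // Empty tags such as <AREA> (the elements inside a <MAP> that carry hrefs) arrive here
        @Override
        public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
            printHref(t, a);
        }

        private void printHref(HTML.Tag t, MutableAttributeSet a) {
            Object href = a.getAttribute(HTML.Attribute.HREF);
            if (href != null) {
                System.out.println(t + " -> " + href);
            }
        }

        public static void main(String[] args) throws Exception {
            // Illustrative input: an HTML file passed on the command line
            try (Reader in = new FileReader(args[0])) {
                new ParserDelegator().parse(in, new LinkCollector(), true);
            }
        }
    }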

  • Is there any way to read XML directly from a web page?

    I have a URL which, when a request is sent, shows XML in the browser.
    Now I need to read this XML from the browser and then manipulate it according to my needs and display it on another page.
    The process is:
    1) I first have to retrieve an XML document from another site (the XML is only shown in the browser).
    2) Then I have to read the XML and show it according to my requirements.
    Is there any way to read XML directly from a web page?
    Is there logic to accomplish this?
    E.g. in a Servlet, can I do something like this:
    String wholeXml = Somemethod(url);
    Please advise

    The average Java XML parser will accept an InputStream, so just open a URLConnection to the web page, get the InputStream from it, and feed that InputStream to the XML parser. If the URL has valid XML data, it will get parsed without problems.
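
    A minimal sketch of that, using the standard DOM parser; the URL below is only a placeholder:

    import java.io.InputStream;
    import java.net.URL;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class XmlFromUrl {
        public static void main(String[] args) throws Exception {
            // Placeholder address; use the URL that actually returns the XML
            URL url = new URL("http://example.com/data.xml");

            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            try (InputStream in = url.openStream()) {
                // Feed the stream straight to the XML parser, as described above
                Document doc = builder.parse(in);
                System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
            }
        }
    }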

  • How to read text from a web page

    I want to read text from a web page. Can anybody tell me how to do it?

    OK, I'll give you the details. Visit the site "http://seriouswheels.com/"; you will see an index from A to Z, which is basically a car-name index. I want to read each page, get the car name and its model, and store them in a database. If you can provide the code I will be very thankful.

  • How to Open an Oracle Apps Screen from a web page

    Hi,
    We have a requirement for opening an Oracle Applications screen (say, the sales order form) directly from a web page.
    I could get the URL of the required screen, but the URL contains an ICX_TICKET number, which is generated dynamically by Oracle Apps, so I can't use a static URL for this.
    Do you know how I can use or generate an ICX_TICKET at runtime? My user will have an active Oracle Applications screen open along with the web page. He wants to navigate to the Oracle Apps screen from the web page. Hope this makes the requirement clearer.
    Thanks for your time,
    Aneesh

    Hi Helios,
    I have identified a function to generate an ICX ticket. By appending this ticket, I am able to open the Oracle Apps screen. Now, is there any implication on the security side if I go ahead this way?
    Function
    fnd_gfm.one_time_use_store(icx_sec.GetSessionCookie(CZ_CF_API.ICX_SESSION_TICKET),300,'FORMS_APPLET')
    Anyway, I am raising an SR as you suggested.
    Thank you,
    Aneesh
