HTML Page Question(s)

I can't for the life of me remember where to go to change the <TITLE> tag on Oracle Financials Apps 11.5.10.2 html pages. Someone please refresh my memory.
thanks!
:)

You can do it through System -> profiles
Sam
http://appstech-sam.blogspot.com

Similar Messages

  • Question on passing value to an HTML page and dynamic links

    Hi,
    I just started using Oracle APEX and have the following questions:
    1) I am trying to pass an order number from a report page to a blank HTML page that is supposed to be a shipping label. I cannot get the HTML page to resolve the placeholder within the HTML code to the actual order number. I have tried %P2_ORDER_ID%, #P2_ORDER_ID# and &P2_ORDER_ID. Any suggestions?
    2) I have a column with a link in a report page, and I want the link to go to different pages based on the value of that column. How should I do that?
    Thanks!
    Mike.

    Hi All,
    I'm still trying to think about this the right way.
    If I add a servlet to the C2 project, use an HTTP POST from the HTML page, and receive the 2 variables in the servlet, could I then set the variables on the session bean in order to make them available to my searchResults.jsp bean methods?
    Regards
    Jim
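
    Here is a rough sketch of that approach (the servlet name, parameter names, and session attribute names below are placeholders, not from the original post); the idea is simply to read the two posted values in doPost, put them where searchResults.jsp can see them, and forward:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class SearchServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Read the two values posted by the HTML form.
            String value1 = req.getParameter("param1");
            String value2 = req.getParameter("param2");

            // Store them in the session so searchResults.jsp (or a session bean
            // exposed with <jsp:useBean scope="session">) can pick them up.
            HttpSession session = req.getSession();
            session.setAttribute("param1", value1);
            session.setAttribute("param2", value2);

            // Hand off to the JSP that renders the results.
            req.getRequestDispatcher("/searchResults.jsp").forward(req, resp);
        }
    }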

  • Question on saving html page into excel

    hi all,
    I have one problem. Is there any way that I can save an HTML page into Excel? That is, to save the data displayed on the HTML page into an Excel spreadsheet.
    thanks for your advice.

    Even if you click a button to send the page to a parser, you still need to transmit the page back to the server for processing, unless you want the client to do it, in which case he/she will need a program installed.
    Going by the views floating around in this forum, the idea is that doing a re-query is about as costly as, if not less costly than, re-submitting the HTML page. By relegating the data processing to the back end, using the database, you can ensure the data is up to date and you can control the programs executing. That is, you are not worried about a user submitting a garbage HTML page for you to process.
    If your data size is not large, there is always session memory to use, but that is not really wise, as your server's memory is much costlier than network bandwidth. It's a matter of the whole server becoming slow versus just your DB connection being slow.
    If you really need to re-submit your HTML page back to the server for processing, you can always use hidden form fields in the HTML to post your data back. However, I can't remember whether there's a limit to the amount of data you can post. I think it's 1024K or something.
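
    As a rough illustration of the hidden-form-field round trip mentioned above (the field names, values, and servlet mapping are invented for the example), one servlet can both emit the hidden field on GET and read it back on POST:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HiddenFieldServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Render the page and stash the value to round-trip in a hidden field.
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<form method='post' action='export'>");
            out.println("  <input type='hidden' name='orderId' value='12345'/>");
            out.println("  <input type='submit' value='Export'/>");
            out.println("</form>");
        }

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // The hidden value comes back as an ordinary request parameter,
            // so the server can re-query the database for fresh data here.
            String orderId = req.getParameter("orderId");
            resp.getWriter().println("Exporting data for order " + orderId);
        }
    }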

  • FIREFOX 26 changed the 'post crash' page with the list of windows and pages. It was a real HTML page with links for each page. Now it isn't (and blows)

    * You changed the page that comes up after a crash - the one which shows the windows and pages that were up before the crash.
    This used to be a real HTML page and it isn't any more. This choice was pure idiocy considering how people used that page every day (try talking to your users for a change).
    The pages listed on that 'post crash page' used to be actual LINKS (you could right-click them and manually open them in another tab - and most people DID that every day). You could also (and I did this a lot) drag a second copy of the page into a new tab (to keep track of all the pages I had not wanted to open).
    Now the pages are no longer links. You cannot right-click them.
    The thing in the browser is no longer a page that I can drag into a new tab.
    Roll the version back and throw this one in the bin... and have a good long talk with your developers about the definition of 'STUPIDITY'.
    When I type anything in the address bar it is supposed to respond with the history of old things I have typed in the past, or search for what I type (and in some cases I think it tries to convert it into a URL). However, it no longer does any of these things. When I type in the ADDRESS BAR ONLY, I do not get all of my letters to appear. I have to type into a notepad or into the search box and then copy and paste to go to a URL. Nothing that is typed into the address bar responds normally at all anymore, and I am fairly certain there are no new add-ons on this machine at all. It does not matter which things I disable. It still does this. W of course points the finger back at a change to FIREFOX. I wish I could just stick with one stable release forever, but the MOZILLA folk think it is best to force people away from a working browser release to a horrible one (due to it being out of date).
    FIREFOX 26.0 has 'issues' (i.e. new *features*/bugs)
    1) History is no longer accurate. My proof? This machine is the primary one for the entire family (the only PC working). It is logged in with the same user every time and never has its history cleared. It is now January 7, so I ought to have a list of all of last month's browsing available to me.
    However, according to FIREFOX history, in all of December 2013 the entire family only went to 51 primary URLs. None of the official TV sites I use to get episodes are listed. None of the official movie sites I use are listed. The primary URL for eBay is not listed. Only 1 of the 4 weather sites that I use (at least once a week) is listed. Only 1 of our 3 FINANCE sites is listed. There ought to be several hundred root URLs listed.
    Please fix history, as this change seriously "blows greasy chunks".
    PROBLEM (a stupid change in the new FIREFOX version)
    For a long time, after a crash you got a useful page (a real HTML page) which gave you a list of all of your last session's windows and tabs that were open when the browser crashed. There were certain ways of using this page that can no longer be done.
    The old method was wonderful as it had this behavior:
    *** The old method for displaying your 'Recovered Tabs' allowed you to:
    a) right-click an individual item and open it in a tab without getting rid of that lovely window of your previous session of 'recovered windows and tabs'.
    b) drag the URL for the entire window of 'recovered windows and tabs' to a new tab (to make a second copy) so that you could select just a few of them to open as a group, and still have the old list handy.
    You can no longer do either of these things. The 'recovered windows and tabs' page no longer has links in it and can no longer be dragged, so you can't select a few of them to use and keep the rest around for later. NOW - once you choose which pages to open, the window is gone forever (you can't get a second copy).
    People used to make a copy of the page for later use (with a drag).
    We also used to open pages with a right-click (which no longer functions).
    This new method seriously blows big greasy chunks. A large loss in function has occurred.
    Put things back as they were. 26 is full of terrible changes that NOBODY likes. It also has a lot of bugs (history is not reliable at all).

    (1) Firefox's built-in post-crash page has not been a real HTML page for a long time (for example, from the time of Firefox 22, see: [https://support.mozilla.org/en-US/questions/968212 Want to save LOTS of versions of "Restore Session.xht" from the "oops ..." page for later use]). If you had this working differently with Firefox 25, that might have been created by an extension.
    You can check to see whether extensions are disabled or need an update on the Add-ons page. Either:
    * Ctrl+Shift+a
    * orange Firefox button (or Tools menu) > Add-ons
    In the left column, click Extensions. The disabled extensions cluster toward the bottom of the list. To poll for updates, use the "gear" button above the list and choose Check for Updates.
    If you used the Reset feature (or Firefox automatically did a reset due to some problem during upgrading), you will need to reinstall missing extensions. The reset feature creates a folder on the desktop named Old Firefox Data. Do you have that folder? There may be data you can recover from it.
    (2) There are many ways for history to get cleared, both internal to Firefox and external. Could you double-check your Privacy settings?
    orange Firefox button (or Tools menu) > Options > Privacy
    * The "Firefox will" drop-down says Remember History: Firefox shouldn't be clearing history, but an add-on or external software could do it
    * The "Firefox will" drop-down says Use custom settings for history: inspect the "Clear history when Firefox closes" setting to make Firefox isn't set to clear history. Also check your add-ons and consider external software.
    Firefox normally accumulates months of history. However, some of Firefox's database sizes are based on disk space available. If your hard drive is very full, Firefox might reduce the amount of history stored.

  • Can I use data from Servlet in my static html page?

    First of all, I can NOT use JSP because of a web server restriction.
    I have a servlet which gives me some image links in an HTML file via the doGet and doPost methods. I also need the sizes of the images, and to compress the images if they are too large.
    My question is how I can pass the image sizes to the HTML page and how I can use them in the HTML files.
    Please advise me of some solutions to this problem.

    Yeah, you have 2 choices:
    1) Change your web server to one that allows JSP.
    2) Re-build the JSP system from scratch so that the one you make will work in your server. This would involve changing your so-called static HTML to have markers (like <% %> tags) where you need to insert the values. You would then have a servlet that reads the 'static' HTML, parses out the insertion tags, and inserts the values. It would then stream the result back to the user.
    Of course, your HTML is not really static; it is dynamic, because the values you are inserting are capable of changing.
    If you don't want to upgrade the server to one that supports JSPs (if yours really doesn't), then have fun making your own system.
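
    A minimal sketch of that home-grown templating idea (the template file name and the ${width}/${height} placeholder convention are made up for the example, and the values are hard-coded where a real servlet would compute the image sizes):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class TemplateServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Read the 'static' HTML template from the web application.
            String path = getServletContext().getRealPath("/gallery.html");
            String html = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);

            // Replace the insertion markers with server-side values
            // (a real version would measure the images and maybe compress them).
            html = html.replace("${width}", "640")
                       .replace("${height}", "480");

            // Stream the finished page back to the browser.
            resp.setContentType("text/html;charset=UTF-8");
            resp.getWriter().write(html);
        }
    }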

  • Where html pages located, which created by using APEX?

    Hi,
    I am using APEX 2.1 (Oracle XE) to develop an app, and would like to know where the HTML pages are stored.
    In other words, how can I put my existing HTML pages into the APEX web server? I don't want to run two web servers on the same computer.
    This question may also relate to upgrading APEX from 2.1 to 2.2 - how do I do the upgrade? My APEX is the one included in Oracle XE.
    Thanks.

    Hello,
    > I followed the method to view :8080/i, it's interesting, all stuff is XE's, but I did not find my own pages built by APEX.
    I guess I'm confused now. You say "built by APEX" - APEX doesn't build static pages, so there is no way to access them statically.
    I thought what you wanted is that you have static pages you want to serve off an XE instance without using another webserver. By uploading your static html files into the /i/ directory you can serve them from there using the embedded webserver.
    If you want to serve your pages from within the framework itself, that at least means uploading them into the Shared Components and then linking to them in an iframe or frame to get valid pages, or at the most getting the content sections into the database and pulling them into a region on a page.
    Your question is a little vague; can you be more detailed on what exactly you want and what you expect?
    Carl

    Sorry for my unclear questions, but your answers are very right for me. Yes, I want to put my existing static html files into the /i/ directory, so I can serve them from there using the embedded webserver. I will upload them into Shared Components to try; I would appreciate it if you could tell me how to make the link between these html pages and an iframe. I am a newbie to this.
    My second question is that I built some test pages using APEX, but I cannot find them in the :8080/i virtual folder.

  • How ias integrate with Snacktory for getting main text from an html page

    Hi All,
    I am new to Endeca and IAS. I have a requirement to get the main text from the whole HTML page before IAS saves the text to the Endeca_Document_Text property.
    Since IAS saves all the text on the page to the Endeca_Document_Text property, it is not readable when shown in a web page, so I use a third-party API (Snacktory) to filter out the main text from the original page.
    Now I want to save that text to the Endeca_Document_Text property.
    Another question:
    I get zero pages when doing the logic of filtering the main text from the original HTML in a ParseFilter (HTMLMetatagFilter implements ParseFilter) using Snacktory.
    If it only does a little work, it works fine; if it does more, the crawler fails to crawl the page. Does anyone know how to fix this?
    Log for the crawler:
    Successfully set recordstore configuration.
    INFO    2013-09-03 00:56:42,743    0    com.endeca.eidi.web.Main    [main]    Reading seed URLs from: /home/oracle/oracle/endeca/IAS/3.0.0/sample/myfirstcrawl/conf/endeca.lst
    INFO    2013-09-03 00:56:42,744    1    com.endeca.eidi.web.Main    [main]    Seed URLs: [http://www.liferay.com/community/forums/-/message_boards/category/]
    INFO    2013-09-03 00:56:43,497    754    com.endeca.eidi.web.db.CrawlDbFactory    [main]    Initialized crawldb: com.endeca.eidi.web.db.BufferedDerbyCrawlDb
    INFO    2013-09-03 00:56:43,498    755    com.endeca.eidi.web.Crawler    [main]    Using executor settings: numThreads = 100, maxThreadsPerHost=1
    INFO    2013-09-03 00:56:44,163    1420    com.endeca.eidi.web.Crawler    [main]    Fetching seed URLs.
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:52,890    10147    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:59,184    16441    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,058    24315    com.endeca.eidi.web.Crawler    [main]    Seeds complete.
    INFO    2013-09-03 00:57:07,090    24347    com.endeca.eidi.web.Crawler    [main]    Starting crawler shut down
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Waiting for running threads to complete
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Progress: Level: Cumulative crawl summary (level)
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    host-summary: www.liferay.com to depth 1
    host    depth    completed    total    blocks
    www.liferay.com    0    0    1    1
    www.liferay.com    1    0    0    0
    www.liferay.com    all    0    1    1
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    host-summary: total crawled: 0 completed. 1 total.
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    Shutting down CrawlDb
    INFO    2013-09-03 00:57:07,160    24417    com.endeca.eidi.web.Crawler    [main]    Progress: Host: Cumulative crawl summary (host)
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]   Host: www.liferay.com:  0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Progress: Perf: All (cumulative) 23.6s. 0.0 Pages/s. 0.0 kB/s. 0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Crawl complete.
    ~/oracle/endeca
    -======================================
    Source code for the ParseFilter:
    package com.endeca.eidi.web.parse;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.log4j.Logger;
    import org.apache.nutch.metadata.Metadata;
    import org.apache.nutch.parse.Parse;
    import org.apache.nutch.parse.ParseData;
    import org.apache.nutch.parse.ParseFilter;
    import org.apache.nutch.protocol.Content;
    import de.jetwick.snacktory.ArticleTextExtractor;
    import de.jetwick.snacktory.JResult;

    public class HTMLMetatagFilter implements ParseFilter {
        public static String METATAG_PROPERTY_NAME_PREFIX = "Endeca.Document.HTML.MetaTag.";
        public static String CONTENT_TYPE = "text/html";
        private static final Logger logger = Logger.getLogger(HTMLMetatagFilter.class);

        public Parse filter(Content content, Parse parse) throws Exception {
            logger.info("come into EndecaHtmlParser getParse");
            logger.info("come into HTMLMetatagFilter");
            //update the content with the main text in html page
            //content.setContent(HtmlExtractor.extractMainContent(content));
            parse.getData().getParseMeta().add("FILTER-HTMLMETATAG", "ACTIVE");
            ParseData parseData = parse.getData();
            if (parseData == null) return parse;

            // Replace the full-page text with the main article text extracted by Snacktory.
            extractText(content, parse);
            logger.info("update the content with the main text content");
            return parse;
        }

        // Run Snacktory over the raw HTML and store the extracted main text
        // in the Endeca_Document_Text property.
        private void extractText(Content content, Parse parse) {
            try {
                ParseData parseData = parse.getData();
                if (parseData == null) return;
                Metadata md = parseData.getParseMeta();
                ArticleTextExtractor extractor = new ArticleTextExtractor();
                String sourceHtml = new String(content.getContent());
                JResult res = extractor.extractContent(sourceHtml);
                String text = res.getText();
                md.set("Endeca_Document_Text", text);
            } catch (Exception e) {
                // TODO: handle the exception properly; at least log it for now
                logger.error("failed to extract main text with Snacktory", e);
            }
        }

        public static void log(String msg) {
            System.out.println(msg);
        }

        public Configuration getConf() {
            return null;
        }

        public void setConf(Configuration conf) {
        }
    }
    ...but it only extracts URLs from <A> (anchor) tags. I want to be able to extract URLs from <MAP> tags as well.
    Gee, do you think you could modify the code to check for "Map" attributes as well?
    Can someone maybe point me to a page containing info on the HTML toolkit?
    It's called the API. Since you are using the HTMLEditorKit, an ElementIterator and an AttributeSet, I would start there.
    There is no such API that says "get me all the links", so you have to do a little work on your own.
    Maybe you could use a ParserCallback, and every time you get a new tag, check for the "href" attribute.
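
    A small sketch of that ParserCallback idea (the class name and input file are placeholders); it prints the HREF of both <a> tags and <area> tags, which is where the links inside a <map> actually live:

    import java.io.FileReader;
    import java.io.Reader;
    import javax.swing.text.MutableAttributeSet;
    import javax.swing.text.html.HTML;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;

    public class LinkCollector extends HTMLEditorKit.ParserCallback {

        @Override
        public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
            // <a href="..."> and <area href="..."> (the children of <map>) both carry HREF.
            if (t == HTML.Tag.A || t == HTML.Tag.AREA) {
                Object href = a.getAttribute(HTML.Attribute.HREF);
                if (href != null) {
                    System.out.println(href);
                }
            }
        }

        @Override
        public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
            // Empty tags such as <area> are reported here rather than in handleStartTag.
            handleStartTag(t, a, pos);
        }

        public static void main(String[] args) throws Exception {
            try (Reader reader = new FileReader("page.html")) {
                // 'true' ignores the charset declaration so parsing does not abort on it.
                new ParserDelegator().parse(reader, new LinkCollector(), true);
            }
        }
    }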

  • Need help with creating template. Changes are not going through to index.html page

    Hi all,
    I have an issue with my template that I am creating and also a question about creating template Regions (Repeating and Editable).
    Somehow my changes to my index.dwt are not changing my index.html page.
    Also my other question is: For my top navigation bar and left navigation bar links, do I need to select and define each individual button or link as Repeating/Editable Region? or can I just select the whole navigation bar (the one on the top) etc...
    Below are my steps for creating my template... I am fairly new to using DW and this is my first attempt at making a template, following the DW tutorial CD that came with DW CS3.
    I appreciate any help with this...regards, Dano
    -Open my index.html file
    -File/save as template
    -Save
    -update links - yes
    -Select Repeating and Editable Regions (I selected the whole top navigation bar and selected Repeating Region and Editable Region, same with the left side navigation links)
    -File close all
    -Open the index.dwt
    -Save as and selected the index.html and chose to override it..
    When I make changes to my index.dwt it is not changing the index.html
    I feel that I am missing some important steps here.....
    Website address
    www.defenseproshop.com

    Figured it out.

  • How can I force Firefox to properly format all html pages to a pdf without losing top and right hand margins?

    Setting up page printout format for pdf on a Mac OS/X Snow Leopard Mac Pro. Printer is HP 8500A OfficeJet Pro Plus bought after July 14, 2011 serial number- CN141CK7QM
    When I print to file as a pdf I am getting the first page correctly but subsequent pages are getting clipped along the top and the right margins.
    I don't have this problem with my iBook G4.
    The issue is formatting an html page to a pdf when the print request is asking for a pdf or when the print request is to the printer.
    The url to my HP Office jet Pro 8500A Plus is:
    mdns://Officejet%20Pro%208500%20A910%20%5B530E4F%5D._pdl-datastream._tcp.local./?bidi
    I have an iBook G4 and I do not have any of these problems on it.
    I have specified a default paper size of "USLetter".
    This printing issue is a new one that occurred since I started the Firefox updates after Firefox 6.0 series.
    I am checking with HP on this issue. Prior to the HP8500A (installed in July 2011) I did have a HP6180 Officejet for several years without encountering such a conflict.
    So my second question is; Are there Firefox printing conflicts with the HP 8500A printers?
    EHW 091211

    Joanna,
    It seems that Illy (still) moves in (ever new) mysterious ways.
    I was surprised to see that you got away with your French word, but the hat part must have hidden the first part from the nanny filter. The first part by itself would have become asterisks.

  • How to Pass the data from the class to the BSP application(ie. .html page)

    hi
    I created one .html page. This page gets opened after clicking one of the buttons on the toolbar (say Bank Data).
    Now the problem is to show, on the .html page, the bank data which the user entered in the PCUI application.
    Please help me solve this problem.

    Thanks for your answer. I tried solution 2: I created a "Submit" button and set the mapping scope to "All data rows", but it only works when I select at least one row; otherwise the data is not passed.
    Another question: I have several imported table parameters, and for each table I have one "submit" event. I want these tables to be submitted at the same time, but if I click the submit button in one table's toolbar, only the data of the table whose submit button was clicked is submitted; for the other tables, the data is not passed. How can I achieve this?
    Thanks.

  • Noob Needs Help - Importing Html Pages into "My Site"

    Greetings
    I am sure this is a noob question that may have been asked, but I couldn't find it by searching so here goes.
    I jumped right into Dreamweaver, following an online tutorial... and have a decent webpage, but I must have skipped some steps or missed something, and I would hate to backtrack and do it all over again...
    When I began, I just started working on an HTML page... and now that it's done, I want to make it a template etc...
    But I never created a "site" as far as SITE > NEW SITE etc.
    So now that I have, I need to tell DW that these pages are included... these images are included, and all that neat Spry stuff I have done needs to be "in the site".
    I am looking for a button that says, "Hey dummy, you need to select the pages that are included in your site" but can't find it...
    Any help?
    Thanks, in Advance
    Greg
    www.revfan.com

    Creating  your first web site in DW CS5 -
    http://www.adobe.com/devnet/dreamweaver/articles/first_website_pt1.html
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb

  • How to Unload and Reload a Composition into the same HTML page?

    TLDR:
    When loading multiple Compositions into an HTML page via AdobeEdge JavaScript, how do I remove/delete/unload a loaded composition so that it can be added/loaded to the same page again later? I am open to options not mentioned here, but this is done externally, not within Edge's limited code abilities (working with multiple comps on one page).
    *Note: I am making comps in Edge then loading multiple within my own JavaScript/HTML external pages. This programming is NOT done within Edge's code.
    Overview:
    I could have more than 50 compositions to load into my page, but only one is displayed at a time. In order to make this easier on the user, I load 5 into the page in divs with display: none. I set the id of the first one to an id that has my css to show the content properly. When clicking next, that div is set back to an ID that has hidden css values. Going forward and back works the same. This functions properly. This puts all the obvious preloading in the beginning so the user doesn't have to wait again, as comps are preloaded in the background while they are viewing other comps.
    As the user moves farther, I drop the first div and add one on the end with the next comp. This works fine. I can go forward all day, no problem by removing the first div element and adding one to the end. This should make memory not horrible, since theoretically, the comps would be removed as needed...
    Problem:
    The problem is if the user wants to go back. If I drop the last div and add one to the front, the comp (which was loaded and dismissed earlier) cannot be re-loaded. It also cannot be re-added, even if the div is cloned and re-added later.
    This is the code I am using to load the comps:
    $.getScript('comp_' + i + '_edgePreload.js', function() {});
    Options:
    The options I have thought of (and not found a way to make them work) in order of preference:
    Fully unload the comp from memory, that way when I make the call to load the comp, it will act as if it has never been loaded before. (Is there a way to unload a comp completely? I haven't found one, but the API is sorely incomplete) **I want this one to work very much.
    Ok, maybe you can't actually unload a loaded comp. Can you re-add it to a new div? I have printed out every option I could thru AdobeEdge's interface, and cannot find a way to re-make a div the way it was. Heck, I'd be happy to load the content into a non-child element in JS, then append to html when I actually want to display it. I can't find how to re-setup the div, which has the correct Class and is set up the way it was when the comp was loaded the first time. (this option is bad because it seems all of the comps are still in memory, but at least it would work)
    Worst case, I have tried this and ALMOST got it to work. When I removed the old divs from the page, I cloned them and added them to an array of objects in JS. Then, when going back, I pushed them back out, so it was an EXACT copy of the original div, all content was the same, all id, class, style, etc was the same. Unfortunately, AdobeEdge seemed to lose the ability to talk to it. (AdobeEdge.getComposition('EDGE-MyUniqueIdentifier').getStage(); // returns undefined). How do I tell it to re-associate the div with the object that still exists here? And obviously, this is an awful solution, since it appears that all of the comps stay in memory, but I was desperate to simply make it work somehow.
    If you need code examples of any of my attempts at the previous options (I have tried them all extensively, and unsuccessfully), I can provide it. I hope someone has found a way to deal with this or has links to actually useful information. Most posts I have found simply tell you to load all your comps and hide/unhide them as needed. This works for small-scale, but when you have a lot of comps to move between, it becomes less of a viable solution.

    Exactly the same problem here. Could anyone from the EDGE TEAM answer this question?
    How can we UNLOAD and LOAD the same or another animation using the SAME or DIFFERENT dynamically created DIVs?
    After loading the second animation, AdobeEdge.getComposition('EDGE-MyUniqueIdentifier').getStage() returns undefined.
    Thanks for the support.

  • Create accessible pdf from html pages dynamically

    Hello,
    I am trying to create a 508 compliant Pdf from a simple HTML page using the HTML to Pdf feature in Livecycle ES4 server. I was able to configure the service to generate tabbed Pdf, but the created Pdf has multiple accessibility issues.
    Some of the issues encountered are:  incorrect tab order,  some links are not tabbable while others are,  link text is being read “Blank”,  some text is skipped while tabbing,  missing alt text for images,  page being rendered in the responsive(mobile) view,  etc.
    I have tried both the available approaches, of 1) providing a URL to create the PDF, and 2) to send the html document as a zipped file, with same results.
    Attached is a sample PDF generated from a simple HTML page I created for demo.
    My questions are:
    Is it possible to generate 508 compliant (accessible) Pdf documents using Html to Pdf service from Pdf Generator? If yes, which settings might I be missing?
    Is there any other service provided by Adobe Livecycle Server that can generate 508 compliant Pdf documents from a 508 compliant HTML page?
    Your help is much appreciated.
    Thanks,
    Anup

    I created a tool that does just that (only you will need to enter the page numbers as text, it does not work by selecting them):
    Acrobat -- Extract Non-Sequential Pages: http://try67.blogspot.com/2011/04/acrobat-extract-non-sequential-pages.html

  • Create accessible pdf from html page


    I don't think it's possible to do this using a standard JavaScript script in Acrobat, since the newDoc function doesn't work with URLs.
    The only option I can think of is to use an external automator that will call the Create PDF From Web Page dialog, paste the address from a file, and after the PDF file is created will continue to the next line.
    I might be able to create such a tool for you. If you're interested, contact me by email (click my username for the address) or PM.

  • How to send html page in outlook wihtout gibberish

    I have an HTML page that I tried to send in Outlook with "send web page by email".
    The problem is that it adds the following characters before the HTML:
    ן»¿
    The questions are:
    Where do they come from? And how do I fix it so they do not show?

    Thank you for the answer, but saving it as ANSI or Unicode makes things worse, and in that encoding it is not possible to see the page.
    The page is mainly photos, some text and links.
    Is there any other possibility that could cause this, or is it only the encoding of the page?
