Changing pre-loaded templates

Good afternoon,
I'm new to this, so please bear with me.
If I use one of the pre-loaded templates in Dreamweaver CS5, how do I amend the text colour etc. in the "header" section? Whatever I type is always underlined, and I can't seem to change this, even if I amend the "header" rule.
Also, in the same template there is a box to insert a logo. When I insert a logo, the text that I type always ends up underneath the logo; how do I change it so that the text appears in the top right-hand corner?
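For what it's worth, underlining in a header usually comes from a link (a) rule rather than the "header" rule itself, and the logo/text stacking is typically a float issue. A minimal CSS sketch (all selector names here are guesses — match them to the class names your template actually uses):

```css
/* Hypothetical selectors -- check your template's actual class names */
.header {
  color: #333;                /* text colour for the header area */
}
.header a {
  text-decoration: none;      /* the underline usually comes from a link rule */
}
.header img.logo {
  float: left;                /* logo sits on the left... */
}
.header .header-text {
  float: right;               /* ...typed text moves to the top right-hand corner */
}
```

In Dreamweaver's CSS Styles panel, switching to "Current" mode shows which rule actually wins for the selected text; editing that winning rule (often a more specific link rule) is what changes the rendering.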
Thanks
Luke

Thanks very much, that website looks like it will be very useful.
I'm currently studying Dreamweaver at night college, and some will say I'm trying to run before I can walk, but to be honest I'm of the opinion that the more I try, the more I will learn.
Thanks again for the help.

Similar Messages

  • Formatting Pre-Loaded Templates

    Is there any way to format a pre-loaded template? I am creating a document using the "Garden Project Poster Small" template, but I would like the background to be lighter than it automatically is. Is there a "Master" for each template that I can mess with?
    thanks in advance!
    ~mary

    Welcome to Apple Discussions
    You could "mess with the 'master'" but I don't recommend it. They are well buried in the application package. If you do want to play with them, do it on a copy. You can find them by Control- or right-clicking on the Pages application > Show package contents > Contents > Resources > Templates. I don't find anything that can be manipulated but you might.
    Why not just resave the template, with your desired changes, as a new template? It will show up under My Templates in the template chooser.

  • Changing Pre Loaded Service Provided Software

    I have changed from Virgin to "3". Does anyone have any experience of loading "3" X-series software onto an unlocked N73 that has Virgin pre-loaded settings and software? Or of changing pre-loaded software in general?
    The service centre says that the phone may stop working and the warranty will be invalidated. I'm not worried about the warranty, but I don't want to do it if it will stop the phone from working.

  • Encore CS 6 missing pre loaded templates

    I see where I'd have to download them and manage to put them in the right place. It just doesn't make sense to have spent all that dough and then have to do this "heavy lifting."

    Adobe has not explained the rationale. I think that as they moved toward download delivery (which many users preferred anyway), they recognized that almost 2 GB of extra content was a lot. Around the same time, the content was also available in "Resource Central" (which is no longer available).
    For many users a few full templates were okay, but there were no buttons, extra backgrounds, etc.
    Unfortunately, with all the changes and the subsequent lack of support for Encore, it became a big mess with inadequate guidance.

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index, it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory any more. I'm not sure whether this is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a SecureFile. Then I can change the storage of that column to put it in the KEEP buffer cache, and write a procedure that reads the LOB so that it gets loaded into the KEEP cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT too, but putting them in the KEEP cache is not as important as the TOKEN_INFO column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem with the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
      (ID NUMBER(9,0) NOT NULL ENABLE,
       XML_DATA XMLTYPE)
      XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    4. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative difference is not so big, but it still went from 14 GB to 19 GB.
    5. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the DR$ I table is stored in a SecureFile and the size stays relatively small. Then you can load this column into the KEEP cache:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it gets loaded into the KEEP buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);        -- dbms_lob.read on a BLOB needs a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               -- read a few bytes from each 4K chunk so every block is pulled into the cache
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then  -- raised once we read past the end of the LOB
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently, and I can give some more info.
    First, I totally agree with you: I don't like using the KEEP pool and would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter, and all those processes are probably filling the buffer cache. We also have a customer-facing application that uses the Text index to search the database, and for that, performance is critical.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors, this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is executed continuously.
    I think the algorithm used by Oracle to keep blocks in the cache is too complex. I just realized that 12.1.0.2 (released last week) finally has a "killer" feature: the In-Memory parameters, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the Text index; I hope R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size. I raised it, but it was closed as "not a bug", which seems crazy to me, and I can't do anything about it. It is a bug in my opinion, because the CREATE INDEX command and the ALTER INDEX REBUILD command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    For that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick was no longer working.
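The In-Memory parameters mentioned above would, in principle, look something like this (just a sketch — the In-Memory option is separately licensed, requires inmemory_size > 0, and I have not verified it against the DR$ tables):

```sql
-- Assumes the Database In-Memory option is available; table name is from the example above
ALTER TABLE DR$I_TEST$I INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
```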
    What worked:
    First, set the KEEP pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
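As a sketch of that setup (the sizes, file path, and tablespace/preference names below are examples, not our real values):

```sql
-- Give back the KEEP pool memory and create a 16K buffer cache instead
ALTER SYSTEM SET db_keep_cache_size = 0  SCOPE=BOTH;
ALTER SYSTEM SET db_16k_cache_size  = 4G SCOPE=BOTH;

-- A tablespace with the non-standard 16K block size (file name/size are placeholders)
CREATE TABLESPACE ctx_16k
  DATAFILE '/u01/oradata/ctx_16k_01.dbf' SIZE 10G
  BLOCKSIZE 16K;

-- Point the Text storage preference at it so the DR$...$I table lands there
exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace CTX_16K');
```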
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 is set to avoid the direct-path-read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. Much relieved, I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in MetaLink note 1645634.1, but there it is for a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying bug; maybe there is a workaround, but I did not find one on MetaLink.
    Other points of attention with Text index creation (things that surprised me at first!):
    - If you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - In combination with this, if you are on a RAC you may see no activity at all on your box, which can be very frightening: Oracle can choose to start the workers on the other node.
    I now understand much better how Text indexing works, and I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP, while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely addressed if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...).
    Regards, Pierre

  • Pre-loading the cache

    I'm attempting to pre-load the cache with data and have implemented controllable caches as per this document (http://wiki.tangosol.com/display/COH35UG/Sample+CacheStores). My cache stores are configured as write-behind with a 2s delay:
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>PARTY_CACHE</cache-name>
              <scheme-name>party_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <distributed-scheme>
                <scheme-name>party_cache</scheme-name>
                <service-name>partyCacheService</service-name>
                <thread-count>5</thread-count>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <write-delay>2s</write-delay>
                        <internal-cache-scheme>
                            <local-scheme/>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>spring-bean:partyCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
         </caching-schemes>
    </cache-config>
    public static void enable(String storeName) {
            CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.TRUE);
    }
    public static void disable(String storeName) {
            CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.FALSE);
    }
    public static boolean isEnabled(String storeName) {
            return ((Boolean) CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).get(storeName)).booleanValue();
    }
    public void store(Object key, Object value) {
            if (isEnabled(getStoreName())) {
                throw new UnsupportedOperationException("Store method not currently supported");
            }
    }
    The problem I have is that what seems to be happening is:
    1) bulk loading process calls disable() on the cache store
    2) cache is loaded with data
    3) bulk loading process calls enable() on the cache store ready for normal operation
    4) the service thread then attempts to store the queued data, because the check to see whether the store is enabled returns true (we set it to true in step 3)
    So is there a way of temporarily disabling the write-delay, or of changing it programmatically, so that step 4 doesn't happen?

    Adding
    Thread.sleep(10000);after loading the data seems to solve the problem but this seems dirty, any better solutions?

  • Oracle Pre-Load Actions error

    Hi All,
    I am trying to install SAP R/3 4.7 IDES with Oracle 9 (patch 9.2.0.3.0) on 2003 Server SP2. I have encountered the following error during the Database Instance installation step of the Oracle Pre-load Actions:
    CJS-00084 SQL Statement or Script failed. Error Message: Executable C:\oracle\ora92/bin/sqlplus.exe returns 3.
    ERROR 2007-06-13 10:25:46
    FJS-00012 Error when executing script.
    I have already tried the following solutions:
    1) The file C:\oracle\ora92\bin\sqlplus.exe is there, and I am able to connect as sysdba.
    2) I changed the "C:\SAPinst_Oracle_KERNEL" name to one with no spaces.
    3) Installed Oracle without installing the database.
    I think I am making a mistake in the following two steps:
    1) I did install the MS Loopback adapter, but what exact entry do I need to put in the hosts file?
    2) When I installed Oracle and ran the "Sapserver.cmd" file, a Windows message popped up asking which program I wanted to use to open the "Sapserver.rsp" file, and I chose Notepad.
    After Oracle installed, I ran some SQL*Plus commands and was able to connect as sysdba, but the listener status command showed one listener running and the other with status unknown. After the installation stopped, listener status gives a TNS adapter error.
    Please help.

    >> have already tried following solutions.
    >>1) directory C:\oracle\ora92/bin/sqlplus.exe is there I able to connect as sysdba
    >>2) I changed “C:\SAPinst_Oracle_KERNEL" name with no spaces
    >>3) Installed Oracle without installing Database.
    Never change the name of the SAPinst working directory. In this version it is designed to use a directory with spaces, and it will work with it.
    What you should not do is copy additional CDs (like Export or Kernel) into directories having blanks in the path.
    >>1) I did installed Ms Loop Back adapter but what exact entry I need to put in Host file?
    >>2) When I installed Oracle and run “Sapserver.cmd” file, windows message popup and ask which program I want to use to open “Sapserver.rsp” file and I choose notepad.
    That should not happen at all.
    The response file is opened by Oracle's setup.exe which is invoked by sapserver.cmd. Is the Oracle CD located on a directory containing blanks?
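As for the hosts-file entry: typically you add a single line mapping the loopback adapter's static IP address to the SAP host name and its fully qualified name. The address and names below are only examples — use the IP you assigned to the MS Loopback adapter and your actual host name:

```
10.10.0.1    sapides.mydomain.local    sapides
```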
    Peter

  • Module Pre-Load runs slow after reboot

    Rebooted the testing controller (first time in quite a while), and now the Module Pre-Load runs significantly slower than it used to. It now takes several minutes when it only took a few seconds before. This affects all sequences run on this station, not just one sequence. There do not appear to be any other applications running that would hog memory. Tried rebooting again with no change. We have not made any changes to Station Options (that we are aware of). Any ideas? Running TestStand 2013 and LabVIEW 2013.
    Thanks.
    GSinMN 

    There could be several reasons. I understand that the affected code modules are written in LabVIEW.
    Options:
    a) The modules are, or contain, VIs which are not in the correct version (LV 2013) and have to be recompiled. For this to apply, the LV module adapter in TestStand has to be configured to run VIs in the "Development Environment"; in the "Runtime Engine", the VIs wouldn't work at all.
    b) Your hard drive is close to full and heavily fragmented. If you have an HDD (so no SSD!), you should use the MS Defrag tool to defragment it.
    c) You have changed settings in your anti-virus software or firewall. These changes need not have been made by you manually; there have been incidents where settings were modified by, e.g., hotfixes of other software. This can lead to some kind of access restriction which resolves only after hitting a timeout.
    There are possibly more reasons, but these three are the most prominent I can think of.
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Vote on (option to pre-load JVM)

    Greetings.
    I wish to draw your attention to an RFE (request for enhancement) that could use your vote.
    The RFE asks for the JRE to have the option of pre-loading the JVM at startup. Once loaded, the new "tray icon" service would be responsible for keeping the rt.jar library loaded (which basically means waking up every once in a while and executing something inside rt.jar).
    This would be an option that is turned OFF by DEFAULT. The user would have to go into the Java control panel to turn it on.
    Why do this, you ask? Well, I did some extensive testing on cold start times between 1.4, 1.5 and 1.6 (see the link at the bottom). The upshot was that a huge amount of time is spent loading the JVM and rt.jar, and all of the lazy-loading technologies really didn't help. So, for those willing to have YATI (yet another tray icon), please give us the option of pre-loading the JVM. You see it with other products, like QuickTime...
    Please vote and leave your feeback at:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6346341
    The original thread that started it all (lots of benchmark data):
    http://forum.java.sun.com/thread.jspa?messageID=3962327

    You have good points. Perhaps it is my ignorance of Windows internals, but let me take a stab at this and perhaps clear a couple of things up (for me at least).
    1) I think you are correct that the JRE footprint size is an issue, especially when you compare it to something like QuickTime. In defense of the RFE, it is precisely for that reason that I recommend the option be off by default.
    2 and 3) The tray icon isn't the single JVM of the system. It is an instance of the JVM that is providing a service. Let me take a stab at how I would implement it.
    The Java tray icon becomes a loading service that pre-caches the jar files in memory and cycles through them (to keep them in memory). When a local JVM instance is invoked, be it a browser or a local application, that instance of the JVM would query for the existence of the Java tray service and, if present, use it to load rt.jar etc. from its memory cache.
    Hope that makes sense.
    -Dennis
    Personally I think the idea of pre-loading the JVM like a tray icon would be a bad thing overall. I see your point, however:
    1. First and foremost, Java takes up too much memory. Just running simple hello-world apps now takes tens of megabytes simply because of the (bloated) size of the JRE. It's not like QuickTime, which can load a stub in < 1 MB; this is a bear.
    2. What happens when someone calls System.exit()? I'm guessing your answer is "well, the JVM restarts", but what about all the other apps running on that JVM? It would kill them too. Beyond that, one Java app(let) could start interfering with another Java app, taking memory, making environment changes, etc.
    3. Possibly my biggest reason is that almost all real commercial Java applications need more memory than the default allocated by just calling 'java ...'. How would we know how much to allocate when starting the JVM? Or we could allocate at runtime and fragment memory all to ****. Anyway, you get the idea.

  • Pre-Loading Timecode on a Tape

    I'm sure this has been answered here before; however, I did a search and could not find the answer. (I may not be thorough, but at least I'm honest.)
    What is the best process for "pre-loading" a tape with timecode? I want to eliminate the situation where I get a break in timecode on the tape.
    Is it as simple as putting a tape in the camera and hitting record until the tape runs out? Or is there a better way?

    Kevin, JohnAlan's suggestion is a common practice, particularly with analog tape, for 'insert' editing purposes.
    I've read, though, that this may not be optimal when shooting DV tape, as it can cause strange behavior were there to be any camera-induced timecode breaks.
    Best practice for this comes down to the shooter:
    - Get in the habit of always rolling 5 seconds of tape before action starts and 5 seconds of tape after action ends.
    - Try not to turn the camera off.
    - Try not to do 'searches' on tapes that are being recorded on.
    - If the camera is turned off, or you need to search back into the tape, make sure you do NOT roll past the end of the last recorded frame; preferably stop a couple of seconds prior to the last recorded frame and begin recording there.
    - If your camera is capable of generating bars, try to record 10 seconds of bars between scene or location changes.
    Hope that helps... I understood your question to deal with timecode breaks that affect captures. If you're concerned about 'blacking' tapes for editing, that's a different deal.
    Kevan
    EDIT - I knew I missed something... if you have to pop the tape out of the camera at any point, follow the above practice of cueing up to a couple of seconds prior to the last recorded frame, and start recording there to pick up continuous timecode.
    Really, really try not to (DON'T) use 'free-run' timecode. It will make editing an absolute nightmare.

  • Change the Page Template

    Can I change the page template, for example from "One Level Tabs" to "Popup", at runtime?

    (Sorry to barge in!)
    The question still remains: why would you want to do this? Or maybe, why not make the page either tab-based OR pop-up? My worry would be that your user opens the popup page, then navigates back to wherever the 'tabbed' version is and attempts to load that - what would happen then?
    If there is a solid design reason to do this, you could probably achieve it by making two copies of the same page - one being the 'tab' version and one the 'popup' (although further changes to the page would then require double the work).

  • Pre-designed templates?

    I am new to Dreamweaver.  I have designed sites in iWeb, but it won't give me access to the HTML in order to change colors on a pre-designed template.  I downloaded a trial version of Dreamweaver, but I can't figure out if it has any pre-designed templates for me to start with or not.  If it does, would someone please tell me how to access them?  Thank you!  (I called support, but they said that they do not support the trial version.)

    I'm looking for a program that will give me more versatility but won't require me to write all of the code from scratch.
    DW writes the code for you.
    DW Template #6
    http://www.adobe.com/devnet-archive/dreamweaver/articles/dreamweaver_custom_templates_pt2/template6/Publish/theme_06_design_from_template.html
    DW Template #7
    http://www.adobe.com/devnet-archive/dreamweaver/articles/dreamweaver_custom_templates_pt2/template7/Publish/theme_07_design_h.html
    Working with Custom Templates - (watch the video)
    http://www.adobe.com/devnet/dreamweaver/articles/dreamweaver_custom_templates_pt2.html
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb

  • Partial Pre-Load in Flash?

    I have several .flvs that load into one of 7 .swfs. I have a pre-loader for all 7 of my .swf files up front; however, NONE of my .swfs will load until ALL the .flvs are fully loaded and ready to play, which takes a very long time AND the user may not necessarily need to download all or any of the .flvs. Does anyone have a way to load all the rest and partially load the one containing the .flvs? Or load the .flvs on demand?
    Please see:
    Hyde Park Test
    for what I mean. The loader "hangs" on 7.swf until all the .flvs (in 5.swf) are ready to play. I'd like the entire thing to load when the .flvs are only partially loaded - they can continue to load while the user surfs the page...

    calendar.swf is compiled to run in the local-with-networking sandbox, while get-calendar.swf runs in local-with-filesystem. Loading one into the other causes a security error. Simplest solution is to change get-calendar.fla to local-with-networking:
    Publish Settings (Flash tab). Select Access network only in Local playback security.

  • SWF stalls, and pre loader

    Hi all,
    I am creating a Flash movie that has 6 different movie clips placed on the main timeline, in addition to a pre-loader placed on that same timeline.
    Everything runs fine until the 6th MC, where at a specific spot, right after a short animation, the movie stalls. I am assuming it is loading other things, but isn't that why there is the preloader?
    I put that MC on the first frame of the timeline and covered it with white (old school, I know) so it loads first; yet it still stalls. Any ideas?
    This is on MX 2004 on a Mac (latest OS).
    Thanks
    A

    Hello StanWelks,
    > Using CS3, I am creating a website in Flash by creating a motion tween that will become my intro from frames 1-30; frame 30 has a stop action and will display the home page graphics with buttons, frame 35 will be my about page, frame 40 my contact page, etc.
    >
    > 1. Is this the usual way of doing this? Any issues with doing it this way?
    >
    > 2. If I want to add a pre-loader at the beginning, would I just move everything to the right in the Timeline and make my pre-loader span enough frames to accomplish the animation, with the intro appearing right after?
    >
    > Thanks!!!
    It would be much easier if you plan your Flash movie without placing all content on the main timeline. Split all your content into separate movie clips and then place them on the main (or another) timeline. Using this approach you'll gain the ability to change the order of the movies at any time without much pain. Also, it'll be easier to add more content anywhere in your movie.

  • iPad: Any advice on getting a pre-loaded SIM for use overseas (Europe)?

    iPad: Any advice on getting a pre-loaded SIM for use overseas (Europe)?

    Thanks, I thought this was the case; I was checking to see if anything had changed. Spain is the country in question. We previously were not able to obtain a SIM there, but that may have been because of a carrier or individual employee problem. We will certainly try again next time (we did successfully get a carrier SIM in Italy once). Cheers
