Encore CS6 missing pre-loaded templates

I see that I'd have to download them and figure out where to put them.
It just doesn't make sense to have spent all that dough and then have to do this "heavy lifting."

Adobe has never explained its rationale. My guess is that as they moved toward download delivery (which many users preferred anyway), they recognized that almost 2 GB of extra content was a lot. Around the same time, the content was also made available through "Resource Central" (which is no longer available).
And for many users a few full templates were okay, but there were no buttons, extra backgrounds, etc.
Unfortunately, with all the changes and then the lack of support for Encore, it became a big mess with little adequate guidance.

Similar Messages

  • Changing pre-loaded templates

    Good afternoon,
    I'm new to this, so please bear with me.
    If I use one of the pre-loaded templates in Dreamweaver CS5, how do I amend the text colour etc. in the "header" section? Whatever I type is always underlined, and I can't seem to change this, even if I amend the "header" rule.
    Also, in the same template there is a box to insert a logo. When I insert a logo, the text I type always appears underneath the logo; how do I change it so that the text appears in the top right-hand corner?
    Thanks
    Luke

    Thanks very much, that website looks like it will be very useful.
    I'm currently studying Dreamweaver at night college, and some will say I'm trying to run before I can walk, but to be honest I'm of the opinion that the more I try, the more I will learn.
    Thanks again for the help.
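
    For reference, here is a minimal CSS sketch of the kind of overrides being asked about. The selector names (.header, .logo, .tagline) are assumptions, since the template's actual class names aren't shown; check the template's stylesheet for the real ones. Note that a more specific rule such as ".header a" may be what is forcing the underline.

    /* Remove the underline and set a custom colour for header text. */
    .header {
        text-decoration: none;
        color: #333366;
    }
    /* Float the logo left, and push the text to the top right of the header. */
    .header img.logo {
        float: left;
    }
    .header .tagline {
        float: right;
        text-align: right;
    }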

  • Formatting Pre-Loaded Templates

    Is there any way to format a pre-loaded template? I am creating a document using the "Garden Project Poster Small" template, but I would like the background to be lighter than it automatically is. Is there a "Master" for each template that I can mess with?
    thanks in advance!
    ~mary

    Welcome to Apple Discussions
    You could "mess with the 'master'" but I don't recommend it. They are well buried in the application package. If you do want to play with them, do it on a copy. You can find them by Control- or right-clicking on the Pages application > Show package contents > Contents > Resources > Templates. I don't find anything that can be manipulated but you might.
    Why not just resave the template with your desired changes as a template? It will show up under My Templates in the template chooser.

  • Missing .pdf file when loading template

    Hey everybody,
    I just switched from a PC to a Mac and find Pages pretty cool. I'm running the trial and made a few templates based on the business letter template that was included with the pre-installed Pages 2 on my MacBook. When opening my template I get a window with an error message saying the file "businesscircle_logosgray.pdf" is missing. My template then opens correctly, but the error message shows up every time I open it. When I copy its contents into an empty document and save it as another template, it works two or three times, but then I get the same error. The business letter template itself opens perfectly without any errors.
    Is there any way to solve that problem?
    Thank you in advance
    Dominik
    MacBook 13" (black)   Mac OS X (10.4.8)  

    That sounds strange. businesscircle_logosgray.pdf is the spiral image above the word "Company" in the template. I assume you deleted it when you made your own templates. It is supposed to be a placeholder to be replaced with another image.
    I would probably try recreating your new template, keeping the image just in case. You could also try creating a new blank template and pasting everything from the business template except that image. Hope you manage to sort it out.

  • Dual boot Win7 pre-loaded and XP

    I have a T510 that came pre-loaded with Win7 Professional. I'd like to dual boot with XP, which I have loaded on a different hard drive on the same T510.
    I do have the recovery disks. I can't do a fresh install of XP because it has over 300 programs installed. Is there some way to run the recovery disks so that Win7 Pro doesn't format the XP partition? Can I make the XP partition read-only and then run the recovery disks?
    Maybe someone knows of a way to install from the recovery disks, then restore XP to the hard drive (I know how to move Win7 and make a partition ahead of Win7), and then repair Win7 so it has the correct drive letters.
    Thanks,
    Docfxit

    Docfxit wrote:
    That's a good idea. I might end up doing this. The problem I have is that I can't get all the Lenovo setup programs that come with the restore disk to load. I know I can get some of them loaded with System Update.
    What setup programs will you be missing? I run a vanilla Win 7 Pro 64 on my T400 and used TVSU to pull down everything that I need.
    It may be that some multimedia stuff will be missing, but grab the installers for them from your Lenovo Win 7 setup before wiping it or removing that drive. They are probably in C:\SWTOOLS (at least they were there in the old days; I don't have a Lenovo Win 7 load to examine).
    I'd probably do something like this, since I can't resist messing with a multiboot machine: clone the XP install to a large HD, or use the one you have if it's big enough. Shrink the XP C: partition with your favorite tool. Fire up an MS Win 7 install DVD and install into the free space. I'm told you can use the Win 7 activation code that came with your machine, but you may have to call MS.
    Install TVSU for Win 7 and let it pull down the rest. Come to think of it, that's what I did with my XP T61.
    Z.

  • Pre-Loading Timecode on a Tape

    I'm sure this has been answered here. However, I did a search and could not find the answer. (I may not be thorough, but at least I'm honest)
    What is the best process for "pre-loading" a tape with timecode? I want to eliminate the situation where I get a break in timecode on the tape.
    Is it as simple as just putting a tape in the camera and hitting record until the tape runs out? Or is there a better way?

    Kevin, JohnAlan's suggestion is a common practice, particularly with analog tape for 'insert' editing purposes.
    I've read, though, that this may not be optimal for shooting DV tape, as it can cause strange behavior if there are any camera-induced timecode breaks.
    Best practice for this comes down to the shooter:
    - Get in the habit of always rolling 5 seconds of tape before action starts and 5 seconds of tape after action ends.
    - Try not to turn the camera off.
    - Try not to do searches on tapes that are being recorded on.
    - If the camera is turned off, or you need to search back into the tape, make sure you do NOT roll past the end of the last recorded frame; preferably stop a couple of seconds prior to the last recorded frame and begin recording there.
    - If your camera is capable of generating bars, try to record 10 seconds of bars between scene or location changes.
    Hope that helps. I understood your question to be about timecode breaks that affect captures; if you're concerned about 'blacking' tapes for editing, that's a different deal.
    Kevan
    EDIT - I knew I missed something: if you have to pop the tape out of the camera at any point, follow the above practice of cueing up to a couple of seconds prior to the last recorded frame, and start recording there to pick up continuous timecode.
    Really, really try not to (DON'T) use 'free-run' timecode. It will make editing an absolute nightmare.

  • How to pre-load Coherence Caches used within an OEP Application

    Hi OEP/Coherence guys,
    I'm currently developing an OEP application that consumes database inputs in CQL queries.
    I've replaced direct database access with Coherence cache access. My Coherence local caches use a cache loader to fetch rows (by key) when there is a cache miss. This works well, and the caches fill up during the execution of my OEP application.
    The problem is that if CQL queries are made on some attributes (not the key) of not-yet-cached data, the load method of my cache loader is not invoked and my CQL query returns no results.
    I'm wondering how to pre-load my data into the Coherence caches from the database when the OEP application starts, to avoid this kind of problem...
    Thx for any advice.
    Renato

    Hi.
    Could you please describe the way to "set up a cache loader to load data into your cache when the OEP application starts"?
    I have a cache loader configured with my cache. My cache loader implements the com.tangosol.net.cache.CacheLoader interface.
    This interface only defines 2 methods:
    load(java.lang.Object oKey) ==> returns the value associated with the specified key, or null if the key does not have an associated value in the underlying store.
    loadAll(java.util.Collection colKeys) ==> returns the values associated with each of the specified keys in the passed collection.
    Neither of these methods lets me pre-load my data (and BTW it looks like "loadAll" is never called by OEP).
    Thx
    RP
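
    One way to drive the pre-load through the existing CacheLoader is to fetch the full key set from the database at startup and pass it to NamedCache.getAll(), which delegates the misses to the configured loader. A minimal sketch, assuming a hypothetical cache name, JDBC URL, and key query (none of these appear in the thread):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class CachePreloader {
        // Call this once from your application's startup logic.
        public static void preload() throws Exception {
            NamedCache cache = CacheFactory.getCache("myCache"); // hypothetical cache name

            // Fetch every key from the backing table (hypothetical query and connection details).
            List<Object> keys = new ArrayList<>();
            try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/svc", "user", "pwd");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id FROM my_table")) {
                while (rs.next()) {
                    keys.add(rs.getLong("id"));
                }
            }

            // getAll() routes all not-yet-cached keys through the configured
            // CacheLoader, so afterwards CQL queries on non-key attributes
            // can see every row.
            cache.getAll(keys);
        }
    }

    After this runs, the whole table is resident in the cache, so CQL filters on non-key attributes no longer miss rows that were never loaded.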

  • Simple pre-loader not working

    I have made a simple pre-loader to hold on the first frame and go to the second frame once bytesTotal has been reached. In the loader I have a simple rectangle (instance named 'bar_mc') that I want to grow as bytesTotal is approached. Currently the screen is blank while the file is loading, and then you see a flash of the full rectangle just before it plays frame 2.
    Here is the code I am using, which is all on frame 1. Any help is appreciated:
    stop();

    root.loaderInfo.addEventListener(ProgressEvent.PROGRESS, progressHandler);

    function progressHandler(myevent:ProgressEvent):void {
        var myprogress:Number = myevent.bytesLoaded / myevent.bytesTotal;
        bar_mc.scaleX = myprogress;
    }

    root.loaderInfo.addEventListener(Event.COMPLETE, finished);

    function finished(myevent:Event):void {
        play();
    }

    There is code missing from that, or it is defined somewhere else. I suggest visiting gotoandlearn.com and going through the preloader tutorial there; it will define all of the elements you need.

  • Leopard OS 10.5 - pre-loaded vs. Erase and Install

    Just received a new iMac I ordered from Apple Online yesterday. I was told it would come pre-loaded with the new Leopard OS, but it didn't; the installed OS was 10.4.01. Apple has now agreed either to send me the Leopard OS or to take back the iMac and later send me another one with Leopard pre-loaded. I'm wondering if this is really necessary, as the Leopard Erase and Install option and/or a Secure Erase would appear to give me essentially the same result. Keep in mind my background is with Windows, where I would always do a low-level format of the hard drive and then a clean install of a newer version of Windows. I would appreciate any suggestions about what I should do.

    Malcom:
    No, not initially. I saw the SCSI warning, but I must have missed that. I did the firmware revision yesterday, after the fact, but I pulled the Jive drive box first and haven't had time to reassemble. I'll reinstall it tomorrow. Then I'll work on the SCSI card, an Adaptec 39160. (My Medea external RAID is a no-show, period.)
    Thanks for being there!
    Jim

  • Key figure definition in BEx gives "Error loading template 0QUERY_TEMPLATE"

    Hi,
    On a BI 7.0 system I run a query in the BEx Analyzer and then have the result in Excel. I want to see the key figure definition by right-clicking in a data cell.
    Then I get this error message after the browser has started up.
    Error loading template 0QUERY_TEMPLATE
    Notification Number BRAIN 276  
    Any ideas why this happens?
    Kind regards, Bjarne

    hi,
    Please check the path below in BI and set the template "0ANALYSIS_PATTERN" there:
    SPRO -> SAP NetWeaver -> BI -> Settings for Reporting and Analysis -> BEx Web -> Set Standard Web Templates -> Ad-hoc Analysis -> set the above template and save.
    Check whether it works afterwards or not.
    This is the standard web template called every time we execute our queries.
    Thanks
    Dipika

  • Web UI Client - Error Loading template 0TPLB_MKTG_C01_Q0001_V01

    Hi
    Please can anyone help with the error below?
    Error Loading template 0TPLB_MKTG_C01_Q0001_V01
    I have activated the template in BI Content and am able to check it in WAD and also in RS Template Maintain in the BI system.
    But as soon as I go to the CRM Web UI portal and click on that link, it says Error Loading template 0TPLB_MKTG_C01_Q0001_V01.
    I have also made the settings in CRM SPRO --> Display SAP BI Netweaver Reports --> given the Web Template ID, but I'm still not able to get the template to work.
    Could any one of you suggest a solution?
    Thanks in advance

    Hi Amit,
    Can you please share the solution if you have fixed this issue?
    Thanks,
    Ravi

  • Can I install Snow Leopard on the latest Macbook Pros? (the ones pre-loaded with Lion)

    The problem and solution is pretty simple, I just want to know if anyone has tried this before. I have a brand new Macbook Pro that I bought more or less for the sole purpose of having a more powerful machine to run AVID Media Composer on. AVID is only compatible up to OS version 10.6.7 at the moment, and the machine I got was pre-loaded with Lion, 10.7....So I look at the support documentation Apple provides, and notice that in the nifty little chart they have, the latest line of Macbook Pros out there (early 2011) originally had version 10.6.6, so I'm assuming that's the previous version I can't downgrade past.
    To revert back to Snow Leopard, however, I need to install it from the DVD which has version 10.6.3 on it, and then upgrade to any version between Snow Leopard 10.6.6 and Lion 10.7. In theory this could work, the only problem being that for a brief time between installing Snow Leopard and updating to the version of it that I need, the computer will have version 10.6.3 on it.
    Now I'm pretty sure if the only thing I do on the computer is immediately update to a safe-to-use version, there will be no problems. However, if the machine's hardware is so terribly non-backwards compatible with the Snow Leopard OS, I may do all this backing up and reverting and not even be able to start the computer once I get the old install on it. Before I just go ahead and try this for myself, I was wondering if anyone else has, and more importantly, have you had any success?

    Hi r,
    EDIT: disregard my post. Waiting for that disc is a far better option.
    I hope w won't mind if I add a thought here:
    rmo348 wrote:
    Now I don't mind if some drivers are messed up and resolution is all funky when I install 10.6.3 on it, I just need to know if it'll be functional to the point where I can run the 10.6.6 update dmg. Once I update to 10.6.6 everything should work fine.
    Or could I install 10.6.3, have the 10.6.6 update burned to a disc, and boot straight from that? This is my first Mac so I'm not sure what little tricks work or not.
    Your first idea may work; the only way to know for sure is to try it. If you do, make sure you download and run the Combo update for 10.6.7 or 10.6.8. There can be different versions of a point update (those available for download and those that ship on Macs), so you want to install one beyond the one that shipped with some of the new MBPs.
    If the MBP won't boot to 10.6.3, something else to try is installing it to an external HD, then installing the 10.6.7 update on it, cloning it to the MBP's internal HD, and running the 10.6.7 or 10.6.8 Combo update on it.

  • Can I set up an apple ID for my child that can only buy apps and music with pre-loaded funds

    I want to set up an Apple ID for my son, but only allow him to buy things if there are funds pre-loaded in his account. Is this possible?

    Sure. You can either set up an account using an iTunes gift card or gift certificate, or set up a monthly allowance. See:
    http://support.apple.com/kb/HT2736
    Regards.

  • "Easy" way to add a text string Flex default pre-loader?

    Is there an easy way to add text (a company name, for example) to the top of the default Flex pre-loader?
    Thanks.

    Sure,
    http://www.flexer.info/2008/02/07/very-first-flex-preloader-customization/
    Johnny
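
    For reference, a minimal sketch of the usual approach (and, as I understand it, roughly what that link describes): subclass mx.preloaders.DownloadProgressBar, add a TextField, and point the application's preloader property at the subclass. The class name, label text, and positioning below are hypothetical:

    package {
        import flash.text.TextField;
        import flash.text.TextFormat;
        import mx.preloaders.DownloadProgressBar;

        public class BrandedPreloader extends DownloadProgressBar {
            public function BrandedPreloader() {
                super();
                // Hypothetical company label, placed above the progress bar.
                var label:TextField = new TextField();
                label.defaultTextFormat = new TextFormat("Verdana", 14);
                label.width = 250;
                label.y = -25;
                label.text = "Example Company, Inc.";
                addChild(label);
            }
        }
    }

    Then reference it from the application root: <mx:Application ... preloader="BrandedPreloader">.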

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, I cannot get decent performance (and especially not predictable performance; it looks as if performance can fall sharply when the blocks from the TOKEN_INFO column are not in memory).
    But after migrating to Oracle 12c I have a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table in memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can no longer pin the index in memory. Not sure if it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it gets loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the token_info column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example on how to reproduce the problem with the size increasing when doing ctx_optimize
    1. create the table
    drop table test;
    CREATE TABLE test (
      ID NUMBER(9,0) NOT NULL ENABLE,
      XML_DATA XMLTYPE
    )
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML
    (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the difference is not so big, but still from 14 GB to 19 GB.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using the statements and procedure below:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2   c_type;
      s    varchar2(2000);
      b    blob;
      buff varchar2(100);
      siz  number;
      off  number;
      cntr number;
    begin
      s := 'select token_info from DR$i_test$I';
      open c2 for s;
      loop
        fetch c2 into b;
        exit when c2%notfound;
        -- Read a few bytes out of every 4K chunk of the LOB; touching
        -- each chunk is enough to pull its blocks into the keep pool.
        siz  := 10;
        off  := 1;
        cntr := 0;
        if dbms_lob.getlength(b) > 0 then
          begin
            loop
              dbms_lob.read(b, siz, off, buff);
              cntr := cntr + 1;
              off  := off + 4096;
            end loop;
          exception when no_data_found then
            -- dbms_lob.read raises NO_DATA_FOUND past the end of the
            -- LOB; that is the normal exit from the inner loop.
            if cntr > 0 then
              dbms_output.put_line('4K chunks fetched: ' || cntr);
            end if;
          end;
        end if;
      end loop;
      close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently, so I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for it.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And indeed, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table with SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which runs continuously.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that 12.1.0.2 (released last week) finally has a "killer" feature, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the team that developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    So the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer works.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
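
    A sketch of what that setup might look like; the cache size, tablespace name, and datafile path are hypothetical (the poster doesn't give theirs), and the storage preference reuses the TEST_STO name from the reproduction script above:

    -- Give the KEEP pool memory back and carve out a 16K cache instead.
    ALTER SYSTEM SET db_keep_cache_size = 0 SCOPE=BOTH;
    ALTER SYSTEM SET db_16k_cache_size = 300M SCOPE=BOTH;

    -- A dedicated tablespace with the non-standard 16K block size.
    CREATE TABLESPACE ts_ctx_16k DATAFILE '/u01/oradata/ts_ctx_16k01.dbf' SIZE 1G BLOCKSIZE 16K;

    -- Route the $I table and its index there via the storage preference.
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace ts_ctx_16k');
    exec ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace ts_ctx_16k compress 2');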
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA!!! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; the event 10949 is what avoids the direct path reads issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in Metalink note 1645634.1, but in the case of a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find it on Metalink.
    Other points of attention with text index creation (things that surprised me at first!):
    - If you use the dbms_pclxutil package, the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - In combination with the fact that on a RAC you may see no activity on the box at all, this can be very frightening: Oracle can choose to start the workers on the other node.
    I now understand much better how text indexing works, and I think it is a great technology that can scale via partitioning. But as always, the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely addressed if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre
