Advice on best practices on book proof stages

I'm wondering how others handle book projects when moving from one proof to the next, in the context of using InDesign's book panel to manage the project.
I create a first proof in InDesign, send PDFs to contributors, and they approve or give me changes.
This brings me to the second proof. Those PDFs are sent to an external proofreader.
When the proofreader's comments come back, I incorporate them into the final proof, which goes to indexers.
Because of the way things can get mucked up, I need to keep the first, second and final proofs separate, so that I can go back and figure out where things went wrong, if they did go wrong. So there is a separate directory for each stage; furthermore, so that it is easy to tell what I'm looking at, the file name of each chapter reflects the stage: "1_StupidBook_proof1.indd" and so on.
So what do people usually do in InDesign, in terms of the .indb file? Do you just rename all the files to "xxx_proof2" and edit the .indb file, removing the old documents and adding the new? Seems cumbersome. Rename the files and create a NEW .indb file?
Just curious about how others do it. Appreciate any input.

I have variables in the master page footers: one is the file name and one is the date (last changed, I think?). And one serious drawback with this method, obviously, is that I have to remove it from each chapter in the final proof.
Ah -- that wasn't obvious to me. You can use any of three dates as a text variable: the creation date, the modification date, or the output date.
One approach would be to use a custom text variable that you could synchronize across your book. That way you could specify "PROOF1" and then remove or change the variable across all chapters in a single operation.
So your suggestion of the page information option under crop marks is a much better idea.
I am not sure it is the way to go, though. If you print on oversize pages (e.g., if your book size is smaller than letter and you print on letter, or smaller than tabloid and you print on tabloid), it is easy enough to print at 100% and the Page Info will appear on the paper outside the area of your book.
If you don't do that (most of us don't have that luxury), then you can consider moving the page marks inside the live region of the page. Doing this means exploiting an undocumented feature of InDesign, custom crop marks, which requires hand-editing a custom file... See http://forums.adobe.com/message/3637984#3637984 for directions. Some might consider this too much of a pain...

Similar Messages

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake was explaining something about not using a CLOB as the storage for the XML, and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20
    TABLESPACE "####_####_DATA"
    XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend on registering the schema against an xsd.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is Binary XML, a post-parsed binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB (see the sketch at the end of this post).
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post in the dedicated forum next time: {forum:id=34}
    Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55
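    To make the Binary XML suggestion above concrete, here is a minimal sketch. The schema URL is taken from the question; the repository path, table and column names are placeholders, and the options should be checked against the 11.2 documentation:
      -- Schema-less Binary XML: only the storage clause changes.
      CREATE TABLE invoice_doc_bx (
        invoice_id NUMBER  NOT NULL,
        doc        XMLTYPE NOT NULL
      )
      XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML;
      -- Schema-based Binary XML: first register the XSD for binary encoding...
      BEGIN
        DBMS_XMLSCHEMA.registerSchema(
          schemaurl => 'http://mycompanynamehere.com/xdb/Invoice.xsd',
          schemadoc => XDBURIType('/public/Invoice.xsd').getClob(),  -- assumes the XSD was loaded into the XDB repository
          gentypes  => FALSE,
          gentables => FALSE,
          options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML);
      END;
      /
      -- ...then reference it in the storage clause so instance documents are encoded against it.
      CREATE TABLE invoice_doc_sb (
        invoice_id NUMBER  NOT NULL,
        doc        XMLTYPE NOT NULL
      )
      XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML
        XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice.xsd" ELEMENT "Invoice";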

  • Advice on Best practice for inter-countries Active Directory

    We want to merge three Active Directories, with one as the parent in Dubai and children in Dubai, Bahrain and Kuwait. The time zones are different and the sites are connected using VPN/leased lines. From my studies I have explored two options. One way is to have the parent domain/forest in Dubai and a child domain in each respective country/office; the second way is to have the parent and all child domains in the Dubai data center, as it is bigger, while the respective countries have DCs connected to their respective child domains in Dubai. (Personally I find the second option safer.)
    Kindly advise which approach comes under best practice.
    Thanks in advance.

    Hi Richard Mueller,
    You got my point perfectly. We have three different forests/domains in three different countries. I asked this question because I am worried about replication problems.
    And yes, there are political reasons why we want to have multiple domains under one single forest. On your points:
    1. "With multiple domains you introduce complications with trusts" - Yes, we will face complications; that is why I will have a VM hosting the three child domains for the 3 countries in HQ, sitting right next to my main AD server which holds the forest/domain - which I hope will help in fixing replication problems.
    2. "and accessing resources in remote domains." - To address this issue I will implement two additional DCs in the respective countries to make the resources available; these RODCs will be pointed toward their respective main domains in HQ.
    As an example:- 
    HQ data center=============
    Company.com (forest/domain)
    3 child domain to company.com
    example uae.company.com
    =======================
    UAE regional office=====================
    2 RODCs pointed towards uae.company.com in HQ
    ==================================
    Please tell me if I make sense here.

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware or RAC before.
    Two space problems:
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4Gb used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually) and on the recommended, safest long-term practice (ADRCI maybe? I'm not sure how it works with SCAN listeners).
    I'd also welcome advice on commands that could be used to safely clean these up and on putting a robust mechanism in place for logfile management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n-days. Once you remove an active file, it should recreate the file and continue without interruption.

  • BFILE: need advice for best practice

    Hi,
    I'm planning to implement a document management system. These are my requirements:
    (0) Oracle 11gR2 on Windows 2008 server box
    (1) Document can be of type Word, Excel, PDF or plain text file
    (2) Document will get stored in DB as BFILE in a table
    (3) Documents will get stored in a directory structure: action/year/month, i.e. there will be many DB directory objects
    (4) User has read only access to files on DB server that result from BFILE
    (5) User must check out/check in document for updating content
    So my first problem is how to "upload" a user's file into the DB. My idea is:
    - there is a "transfer" directory where the user has read/write access
    - the client program copies the user's file into the transfer directory
    - the client program calls a PL/SQL-procedure to create a new entry in the BFILE table
    - this procedure will run with augmented rights
    - procedure may need to create a new DB directory (depending on action, year and/or month)
    - procedure must copy the file from transfer directory into correct directory (UTL_FILE?)
    - procedure must create new row in BFILE table
    Is this a practicable way? Is there anything that I could do better?
    Thanks in advance for any hints,
    Stefan
    Edited by: Stefan Misch on 06.05.2012 18:42

    Stefan Misch wrote:
    yes, from a DBA point of view...
    Not really just from a DBA point of view. If you're a developer and you choose BFILE, and you don't have those BFILEs on the file system being backed up and they subsequently go "missing", I would say you (the developer) are at fault for not understanding the infrastructure you are working within.
    Stefan Misch wrote:
    But what about the possibility for the users to browse their files? This would mean I had to duplicate the files: one copy that goes into the DB, is stored as a BLOB and can be used to search, and another copy stored on the file system just to enable the user to browse their files (i.e. what files were created for action "offers" in February 2012; the filenames contain customer id and name as well as user id). In most cases there will be fewer than 100 files in any of those directories. This is why I thought a BFILE might be the best alternative, as I get both: fast index search and browsing capability for users who are used to Windows Explorer...
    Sounds like it would be simple enough to add some metadata about the files in a table - a bunch of columns providing things like "action", "date", "customer id", etc., along with the document stored in a BLOB column (a sketch follows at the end of this post).
    As for the users browsing the files, you'd need to build an application to interface with the database... but I don't see how you're going to get away from building such an application for this in any event.
    I personally wouldn't be a fan of providing users any sort of access to a production servers file system, but that could just be me.
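    To make the metadata-plus-BLOB suggestion concrete, here is a minimal sketch; all table, column and sequence names are hypothetical, and datatypes/indexes should be adjusted to the real requirements:
      -- Hypothetical document table: searchable metadata columns plus the content itself as a BLOB.
      CREATE TABLE documents (
        doc_id         NUMBER         PRIMARY KEY,
        action         VARCHAR2(30)   NOT NULL,      -- e.g. 'OFFER'
        doc_date       DATE           NOT NULL,
        customer_id    NUMBER         NOT NULL,
        user_id        VARCHAR2(30)   NOT NULL,
        file_name      VARCHAR2(255)  NOT NULL,
        mime_type      VARCHAR2(100),
        checked_out_by VARCHAR2(30),                 -- simple check-out/check-in marker
        content        BLOB           NOT NULL
      )
      LOB (content) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);
      CREATE SEQUENCE documents_seq;
      -- Index the columns users browse by, e.g. "offers created in February 2012 for customer X".
      CREATE INDEX documents_browse_ix ON documents (action, doc_date, customer_id);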

  • Best practice for Book recording vocals...

    My wife has written a children's book that we want to record and offer as a downloadable MP3 set. She will do the recording and we have the mic/studio GarageBand all working just fine. My question is what is the best way to make sure that all the tracks for each chapter are equalized the same?
    I know that we are trying to record it all the same, but I just wanted to see if there was any way with GB or some other app to take all the separate vocal tracks and make them all the same in volume, other than looking at the settings in GB, which we don't change as far as input goes...
    Hope that is clear and makes sense... I just would like to know what I should do to make it as good as it can be with what we have, then stop worrying about it.
    Cheers,
    Cory

    Hmmm, thanks AppleGuy, that is interesting. I knew that I could create my own presets, but didn't think about this in this way...
    So, if I took the first chapter that we did, adjusted it to my liking (i.e. reverb to just fill it out a bit), then saved it as a preset, I could then apply this preset to all my other recordings so that they would all have the same properties? I know this wouldn't actually fix the possible increase or decrease of volume from different recording sessions, but it would allow me to make sure they all have the same effects settings, yes?
    How do you do this to tracks that are already recorded?
    Thanks,

  • Best Practical JSP Book for PHP developer

    Hi Friends,
    I am new to JSP and want to learn the basics so that I can do simple web page work.
    I am a PHP developer with one and a half years of experience.
    Can anybody tell me what the best JSP book for beginners is, or point me to a web link with lots of simple JSP examples?
    --Sujoy

    Before you learn JSP, you need to learn Java. I heard 'Head First Java' is a good book. Install the Eclipse IDE (it's free) and work through some of the book's examples in it (as you read the book from cover to cover). Then install Tomcat (it's free) and the Tomcat plugin for Eclipse (it's free). Then read a book on JSP (cover to cover). I like the book 'JSP in action'. It covers both JSP and servlets. Then create a hello world JSP in Eclipse, then a hello world servlet in Eclipse.

  • Best practice for optimising animation / stage bitmaps

    Hi everyone and a happy new year.
    I am currently working on a prototype app for the iPad which involves some well known (UK) characters from childrens TV.
    My usual job is to build interactive online magazines which are essentially very simple Flash games, and I mean really simple. The artwork is provided to me as hi-res JPEG images and I then convert them to 72dpi and add them to the stage as per the design given to me. There are generally between 3 and 10 user interactions that trigger an animation. I use Tweener for the animations, and they are either a popup that slides on and/or off the stage, or a character animated around the stage a bit.
    This obviously works fine in a browser but on the iPad it really struggles with the animation, the framerate drops by around 75% and sometimes freezes for several seconds. I knew not to expect high performance animation sequences and high end game play using the packager for iPhone, but I did expect simple animations not to be a problem.
    Does anyone have any suggestions on optimisation? Is it perhaps the use of Tweener that is the problem? I am really hoping to get some good results here.
    Thanks for any help.

    The secret to success when using stream audio to sync timeline animation is to have the items load ahead of when they are needed. For example, if several new items are to appear at frame 100, and then immediately animate in some way, the audio will cause the frames where they animate to be skipped, because it takes so long for the items to load and get onto the GPU.
    So, have those items appear on the stage at perhaps frame 80, and concealed in some way so that you don't notice them. They have to overlap the stage by at least a pixel, otherwise they won't really load onto the GPU. Then at frame 100 move them into their correct positions, and the animation should go as intended.
    When doing this, make sure the items are remaining in the same layer, have the same instance name, and there are no gaps in the timeline.

  • Web console - delegating permissions correctly - Advice on best practice

    Hi,
    I'm in the process of rolling out the Orchestrator Console for wider use within our department. After reading some posts on console delegation I have been able to set up a group which, when added to the Orchestrator root and subfolders, allows the folder views to be controlled, but only to a basic degree.
    What I mean - or what I'm finding - is that I have what is turning out to be quite a 'deep' tree/subtree folder structure for my runbooks (currently about 4 levels), e.g. from the root folder: Runbooks->ProductionRunbooks->ServiceDeskRunbooks->ExchangeRunbooks, the last containing 2 Exchange runbooks.
    So to delegate this structure to Service Desk staff I have created a security group (service desk_console) and given it the basic Read permission at the Runbooks and ProductionRunbooks folders, and then Full control (inc. child objects) at the ServiceDeskRunbooks folder to allow execution of any runbooks below this level.
    My query is: is this the way it should work? I initially thought I could set the Read permission at the top level and then just the Full control permission on the specific low-level folder, but this didn't work - I had to apply the Read permission at each of the folders between the root and the target folder.
    So as the number of runbooks/folders grows, along with the possible mix of user groups who will require access to run a particular runbook, I can see the delegation of permissions becoming very messy using the method I currently have working - i.e. with potentially several user groups I will basically have to set the permissions explicitly for each user group at all levels on all folders?
    A possible solution I'm thinking of is to create a 'general console users' group and add the specific user groups to it (e.g. Service Desk, Exchange Team, VDI Team), then set the Read permissions for it on the root Runbooks and ProductionRunbooks folders, and then set Full control for the specific user groups on the folders containing the runbooks pertaining to each group - any runbooks required by multiple groups could be placed in a 'general' folder with all groups having Full control on it.
    Those are my thoughts - it seems a bit messy to me, but I'm just interested to hear and confirm whether that is simply the limitation and the way console delegation is supposed to work, or whether there is a neater way - if so I'd like to know!
    Cheers - PS I know this descended into a bit of a ramble/discussion in my own head, so apologies ;-)

    Hi Stefan, thanks for your reply and suggestion. What I probably didn't explain, and what I was hoping to achieve with the delegation model, was to make only the relevant folders/runbooks visible to the relevant operators/user groups.
    The issue probably stems from me having a pretty messy folder structure (generally) and wanting to hide that mess and confusion from operators who will be new to the console. Basically I have a high-level folder called Production, underneath which I create neat and tidy folders/runbooks following a good naming convention - only production-ready stuff goes in here, and this is the focus of what I want to make visible and control access to. However I also have high-level folders for PreProduction and Testing, and within those are a very large number of folders/runbooks which don't follow good naming, and it's easy to lose track when multiple folders are fully expanded.
    So my issue with granting the List permission and letting it be inherited down the tree is that, I assume, I would be giving the console user the full (list) view of that structure even if they can't execute any runbooks.
    So is the only way to enforce views and run permissions to specify explicit permissions at each level in the tree, i.e. you can't skip setting folder permissions on some of the in-between 'organizational' folders? E.g. from my example above, the ProductionRunbooks->ServiceDeskRunbooks folders exist just to logically organize the folders containing runbooks, such as ExchangeRunbooks. Ideally I would like to set the permissions in such a way that allows the Service Desk group to view the runbooks at the ExchangeRunbooks subfolder level.
    Hope that makes sense - I get the feeling the answer is no, and the only way to enforce it is to use multiple groups/explicit permissions at each level in the folder structure. Happy to be told otherwise!

  • Books, Best Practice, CM Guides

    Hi,
    Can anybody point me towards a best practice, SCM Book or any other documentation that gives practical advice on how to set up and use SCM from a Config. Managers point of view - not a developers. The supplied documentation seems to be a bit thin on the ground in this respect.
    I have experience of using other CM tools so I don't need an intro into general CM concepts - but do need something to help me relate this tool into the development lifecycle and release procedures.
    I am also having problems finding documentation on the repository rules.
    Any help appreciated.
    Keith

    Keith,
    I have searched high and low for books on this topic without success. As far as I know, none exists, so I took the Oracle University class Using Oracle Repository for Software Configuration Management. This is a great class taught by Laura Garza who is quite knowledgeable on the subject.
    Although the class does not cover 9i SCM (it uses 6i instead), I found the concepts readily applicable to the SCM. This is a three day class.
    Russ

  • Data Warehouse using MSSQL - SSIS : Installation best practices

    Hi All,
    I am working on building an MSSQL 2008 R2 based data warehouse. The requirement is to read source data from files, put it in a staging database, perform data cleansing etc., and then move the data to the data warehouse database. The question is about the required number of physical servers and on which server each component (MSSQL, SSIS) should be installed, based on any best practices:
    Source files --> Stage database --> Data warehouse db
    The data volume will be high (20-30k transactions per day). Please suggest.
    Thank you
    MSSQL.Arc

    Microsoft documentation: "Use a Reference Architecture to Build an Optimal Warehouse
    Microsoft SQL Server 2012 Fast Track is a reference architecture data warehouse solution giving you a step-by-step guide to build a balanced hardware configuration and the exact software setup.
    Step-by-step instructions on what hardware to buy and how to put the server together.
    Setup instructions on installing the software and all the specific settings to configure.
    Pre-certified by Microsoft and industry hardware partners for the most optimal hardware and software configuration."
    LINK:
    https://www.microsoft.com/en-us/sqlserver/solutions-technologies/data-warehousing/reference-architecture.aspx
    Kalman Toth Database & OLAP Architect
    IPAD SELECT Query Video Tutorial 3.5 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best practice for GSS design

    Please advise what records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the Domain List of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P or GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the internet-facing firewall and before the ACE module, or use it in one-arm mode. Trying to figure out the best fit for the design.
    FWSM1 >>> GSS >>> ACE
    or
    just put the GSS in one-arm mode off the path between FWSM1 and the ACE:
    FWSM1 >>> ACE
            |
           GSS
    Thanks in advance,
    Nav

  • Best Practice for CTS_Project use in a Non-ChARM ECC6.0 System

    We are on ECC6.0 and do not leverage Solution Manager to any extent.  Over the years we have performed multiple technical upgrades but in many ways we are running our ECC6.0 solution using the same tools and approaches as we did back in R/3 3.1. 
    The future vision for us is to utilize CHARM to manage our ITIL-centric change process but we have to walk before we can run and are not yet ready to make that leap.  Currently we are just beginning to leverage CTS_Projects in ECC as a grouping tool for transports but are still heavily tied to Excel-based "implementation plans".  We would appreciate references or advice on best practices to follow with respect to the creation and use of the CTS_Projects in ECC.
    Some specific questions: 
    #1 Is there merit in creating new CTS Projects for support activities each year?  For example, we classify our support system changes as "Normal", "Emergency", and "Standard".  These correspond to changes deployed on a periodic schedule, priority one changes deployed as soon as they are ready, and changes that are deemed to be "pre-approved" as they are low risk. Is there a benefit to create a new CTS_Project each year e.g. "2012 Emergencies", "2013 Emergencies" etc. or should we just create a CTS_Project "Emergencies" which stays open forever and then use the export time stamp as a selection criteria when we want to see what was moved in which year?
    #2 We experienced significant system performance issues on export when we left the project intersections check on.  There are many OSS notes about the performance of this tool but in the end we opted to turn off this check.  Does anyone use this functionality?  Any recommendations?
    Any other advice would be greatly appreciated.

    Hi,
    I created a project (JDeveloper) with local xsd-files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the bpel server. The process is working fine, but in the structure pane there is no information about any of the xsds anymore and the payload in the variables there is an exception (problem building schema).
    How does bpel know where to look for the xsd-files and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best practices for adding CLICK listeners to complicated menus?

    OK, I’m gonna wear out my welcome but here’s my last question of the day:
    I’ve got a project that is essentially a large collection of menus, some buttons common across multiple screens, others unique. The following link is the work in progress, most of the complexity is in the “Star Action Items and Forms” area (btw: the audio in the launch presentation is just a placeholder track, I know we can't use it):
    http://www.appliedcd.com/Be-A-star/Be-A-star.html
    To deal with the large number of buttons, my timeline simply has the following for every menu frame:
    stop();
    initFrame();
    The initFrame() function then has a list of frames and activates the buttons appearing on each screen; a very simplified example follows. In this example commonButtons span all 3 menus, semiCommonButtons span menus 2 and 3, and button1A, button2A, etc… are unique per menu:
    function initFrame():void {
         var myFrame:String = this.currentLabel;
         commonButton1.addEventListener(MouseEvent.CLICK,onInternalLink);
         commonButton2.addEventListener(MouseEvent.CLICK,onInternalLink);
         commonButton3.addEventListener(MouseEvent.CLICK,onInternalLink);
         switch(myFrame) {
              case "menu1":
                   button1A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button1B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button1C.addEventListener(MouseEvent.CLICK,onInternalLink);
              break;
              case "menu2":
                   semiCommonButton1.addEventListener(MouseEvent.CLICK,onInternalLink);
                   semiCommonButton2.addEventListener(MouseEvent.CLICK,onInternalLink);
                   semiCommonButton3.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2C.addEventListener(MouseEvent.CLICK,onInternalLink);
              break;
              case "menu3":
                   button3A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button3B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button3C.addEventListener(MouseEvent.CLICK,onInternalLink);
               break;
          }
     }
    The way the project was designed, I "thought" menu3 would only be accessible through menu2, thus guaranteeing that the semiCommonButtons would get initialized, but I forgot the functionality of my back button could jump the user directly from menu1 to menu3. The solution is simple: initialize every button on every navigation target. However, is this really the best way to initialize a bunch of buttons? Another possible approach would be to have an array of button instance names and a function that says: if instance XYZ exists, add a listener, then simply loop through the array on every nav target. Does anyone with more experience have advice on best practices in this situation?

    Hmmm, I just ran a test on this whereby I added the above snippet to my master page. I then published a major version. I can see that every (Welcome) custom page layout has this data widget working, provided I add the div to the page.
    I wonder if the reason I can't add the snippet directly to an individual custom layout page is a bug, or am I doing something incorrectly?
    Daniel

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
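    For step 3, a minimal sketch of the nightly copy over a database link; the table name and the link name prod_link are hypothetical:
      -- Assumes the destination table already exists (steps 1-2) and prod_link points at the production DB.
      INSERT /*+ APPEND */ INTO patient
      SELECT * FROM patient@prod_link;
      COMMIT;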
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) If anyone has advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin
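    A minimal sketch of this approach follows; the table, link and group names are hypothetical, and fast refresh assumes the source table has a primary key:
      -- On the SOURCE (production) database: a materialized view log enables incremental (fast) refresh.
      CREATE MATERIALIZED VIEW LOG ON patient WITH PRIMARY KEY;
      -- On the DESTINATION (warehouse) database, over a database link to production:
      CREATE MATERIALIZED VIEW patient_mv
        REFRESH FAST ON DEMAND
        AS SELECT * FROM patient@prod_link;
      -- Schedule the nightly refresh with a refresh group (more materialized views can be added later).
      BEGIN
        DBMS_REFRESH.make(
          name      => 'NIGHTLY_DW',
          list      => 'PATIENT_MV',
          next_date => TRUNC(SYSDATE) + 1 + 2/24,    -- 02:00 tomorrow
          interval  => 'TRUNC(SYSDATE) + 1 + 2/24'); -- then every night at 02:00
      END;
      /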
