Best practice to update an online/published folio?

Hi there,
I think everything is in my question: I have an online application with a published folio, and I need to update that folio with a new version. What is the best practice for organizing my work?
Should I keep working in InDesign on the same folio (same ID) and then update/re-publish it in Folio Producer? (This option scares me: what if my draft goes live?)
Or should I recreate another folio and, after testing it, publish it with the same folio name and description? (I'm not sure that would update the same file, since it is not the same folio.)
What is the best practice for organizing myself, my work, and my files?
Thank you.

Similar Messages

  • Best practice for exporting a DPS folio so a third party can work on it?

    Hi All,
    Keen for some thoughts on the best practice for exporting a DPS folio, .indd files, links and all, so a third party can carry on the work. Is there a better alternative to packaging each individual .indd file and sharing the folio through Adobe?
    I know there have been similar questions here in the past, but the last one I've found was updated in mid-2011, and I was wondering whether there have been any improvements to this seemingly backwards workflow since then.
    Thanks in advance!

    Nothing better than packaging them and using Dropbox to share. I caution you on one thing as far as packaging:
    http://indesignsecrets.com/file-packaging-feature-can-cause-problems-in-dps-workflows.php

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating child UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am trying to loop over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips, you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan, 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics can't continue to be used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less-than-optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So, in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics on such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them, and produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, producing a script that executes the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal, IMO, than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
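    To make this concrete, here is a rough sketch of the kind of generator I have in mind, written against plain JDBC (the connection string, row-count tiers, and sample percentages are placeholders you would tune for your own workload, not recommendations):

    import java.sql.*;

    public class StatsScriptGenerator {
        // Pick a sample percentage from table cardinality; these tiers are illustrative only.
        static int samplePercent(long rows) {
            if (rows < 1_000_000L) return 100;   // small tables: a full scan is cheap
            if (rows < 100_000_000L) return 25;  // mid-size tables: moderate sample
            return 5;                            // very large tables: small sample
        }

        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://myserver;databaseName=MyVLDB;integratedSecurity=true";
            String q = "SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count "
                     + "FROM sys.tables t "
                     + "JOIN sys.schemas s ON s.schema_id = t.schema_id "
                     + "JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1) "
                     + "GROUP BY s.name, t.name";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(q)) {
                while (rs.next()) {
                    // Emit one UPDATE STATISTICS command per table, sized by cardinality.
                    System.out.printf("UPDATE STATISTICS [%s].[%s] WITH SAMPLE %d PERCENT;%n",
                            rs.getString("schema_name"), rs.getString("table_name"),
                            samplePercent(rs.getLong("row_count")));
                }
            }
        }
    }

    The output is just a script you would review before running, and you could extend the query with stats-usage metadata to skip or drop statistics objects that are never touched.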
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Is there a best practice for deleting a published report?

    Post Author: matthewh
    CA Forum: General
    Is there a best practice for deleting a published report from C:\Program Files\Business Objects\BusinessObjects Enterprise 11.5\FileStore on Crystal Reports Server or can I just delete the subfolders?  Does it reference them elsewhere?  I have a load of old reports I need to shed, but can't see how to do it.

    Hi,
    You can refer to the SRND guide. As per the document (page 292), you can configure a maximum of 50 agents per team.
    http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ippcenterprise9_0_1/design/guide/UCCE_BK_S06086EC_00_srnd-9-0-1.pdf
    You can also check the Bill of Materials document for UCCE, under the section "Operating Conditions, Unified ICM, Unified CC", for the numbers you should configure in UCCE.

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    Ok,
    My workaround was to put an action in my task flow to commit. After that I put two more actions (execute) and then went back to my page. This works, but I would like to know if there is a more efficient way to do this just when I am inserting.
    Regards

  • Updating already published folio

    Hello.
    I am not very sure how to update already-published folios. Yes, I know there is an Update button in DPS Folio Producer.
    But what if I upload a folio to my user account and then share it with the application account for distribution? In the application account I copy my already-shared folio and publish it.
    After updating the content in my account, the changes are not transferred to the published folio in the application account.
    Must I copy the folio again and publish it again? And must my clients download that folio again?
    I tried it, and somehow my clients began to see two copies of the magazine in the distribution app.
    Ants

    iGeo's suggestion is the way to go in your case.
    1. Have your freelancer share the corrected folio with your application account
    2. Open the shared folio
    3. "Copy to" the corrected article into your live folio
    4. Delete the outdated article from the live folio
    5. Re-arrange the articles to order
    6. Hit the "Update" button on your folio producer dashboard
    You should now see the changes in your live folio.

  • Best Practice for BEX Query "PUBLISH to ROLE"?

    Hello.
    We are trying to determine the best practice for publishing BEX queries/views/workbooks to ROLEs. 
    To be clear about the process I am referring to: from the BEX Query Designer, there is an option QUERY > PUBLISH > TO ROLE. This function updates the user menu of the selected security role with, essentially, a shortcut to the BEX query. It is also possible to save VIEWS/WORKBOOKS to a role from the BEX Analyzer menu. We have found ROLE menus to be a good way to organize BEX queries/views/workbooks for our users.
    Our dilemma is whether to publish to the role in our DEV system and transport to PROD,... or if it is ok to publish to the role directly in the PROD system.
    Publishing in DEV is not always possible, as we have objects in PROD that do not exist in DEV. For example, we allow power users to create queries directly in PROD. We also allow VIEWS and WORKBOOKS to be created directly in PROD. It would not be possible to publish these types of objects from DEV.
    Publishing in PROD eliminates the issues above, but causes concerns for our SECURITY team.  We would be able to maintain these special roles directly in PROD.
    Would appreciate any ideas, suggestions, examples of how others are handling this BEX publish-to-role process.
    Thank you.
    -Joel

    Hi Joel,
    Again, as per best practice, nothing should be created in PRD; even if we create objects in PRD for power users, they are considered temporary and can be deleted at any time.
    So if there are already deviations, you can deviate in this case as well, but it won't be best practice. In a few cases we had workbooks created in PRD because they could not be created in DEV for various reasons; in those cases we did not follow best practice, and we raised an OSS message on this as well.
    In our project, we did everything in DEV and transported it to PRD. When there were any very minor changes at the query level, we made them in PRD and immediately replicated them in DEV so that the two systems stayed in sync.
    rgds
    SVU

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get it updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function module or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
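    In case it helps, here is a minimal sketch of what that BAPI call can look like when driven from Java via SAP JCo; the destination name, material number, class, and characteristic below are made-up placeholders, and from an ABAP report you would CALL FUNCTION the same BAPI directly:

    import com.sap.conn.jco.*;

    public class UpdateCharacteristicValue {
        public static void main(String[] args) throws JCoException {
            // "PRD_SYSTEM" is a placeholder JCo destination configured elsewhere.
            JCoDestination dest = JCoDestinationManager.getDestination("PRD_SYSTEM");
            JCoFunction fn = dest.getRepository().getFunction("BAPI_OBJCL_CHANGE");

            fn.getImportParameterList().setValue("OBJECTKEY", "000000000000012345"); // padded material number
            fn.getImportParameterList().setValue("OBJECTTABLE", "MARA");
            fn.getImportParameterList().setValue("CLASSNUM", "ZMY_CLASS");  // placeholder class
            fn.getImportParameterList().setValue("CLASSTYPE", "001");

            // New characteristic value; this is what ends up in AUSP.
            JCoTable chars = fn.getTableParameterList().getTable("ALLOCVALUESCHARNEW");
            chars.appendRow();
            chars.setValue("CHARACT", "ZMY_CHARACTERISTIC"); // placeholder characteristic
            chars.setValue("VALUE_CHAR", "NEW_VALUE");

            fn.execute(dest);

            // The BAPI does not commit on its own.
            dest.getRepository().getFunction("BAPI_TRANSACTION_COMMIT").execute(dest);
        }
    }

    Check the BAPI's RETURN table for errors before committing; updating AUSP directly would bypass that validation, which is one reason the BAPI route is preferred.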
    BR
    Caetano

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices for getting data from an Oracle database into the cache layer when a data-change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World examples but fails to deliver in the real world.
    To me, DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's suitable mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type as the event payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and enqueue events either by a call from the event-source application itself or on a scheduled basis; it's entirely up to you. The technique of creating object views and using INSTEAD OF triggers on those views also works pretty well.
    And finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts, and assembles a Java object ready to be placed into the cache, based on the event ID and the event's set of primary keys. Once the Java object is assembled, you can place it into the cache.
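    For illustration, here is a stripped-down version of such a listener, using AQ's JMS interface and Coherence's NamedCache API (the queue, cache name, and payload format are hypothetical, and the fail-safe workmanager machinery described below is omitted):

    import java.util.Properties;
    import javax.jms.*;
    import oracle.jms.AQjmsFactory;
    import oracle.jms.AQjmsSession;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class AqToCoherenceListener {
        public static void main(String[] args) throws Exception {
            // AQ queues are exposed over JMS; URL and credentials are placeholders.
            QueueConnectionFactory qcf = AQjmsFactory.getQueueConnectionFactory(
                    "jdbc:oracle:thin:@dbhost:1521/ORCL", new Properties());
            QueueConnection qc = qcf.createQueueConnection("app_user", "secret");
            QueueSession session = qc.createQueueSession(true, Session.SESSION_TRANSACTED);
            Queue queue = ((AQjmsSession) session).getQueue("APP_USER", "CACHE_EVENTS"); // hypothetical queue
            QueueReceiver receiver = session.createReceiver(queue);
            qc.start();

            NamedCache cache = CacheFactory.getCache("orders"); // hypothetical cache name

            while (true) {
                // Block until a trigger or PL/SQL procedure enqueues an event.
                TextMessage msg = (TextMessage) receiver.receive();
                String orderId = msg.getText(); // tiny payload: just the key of the changed row

                // Extract by primary key, assemble the object, and place it in the cache.
                cache.put(orderId, loadOrderFromDatabase(orderId));

                session.commit(); // dequeue is transactional: commit only after the put succeeds
            }
        }

        // Placeholder for a straight JDBC extract keyed by the event's primary key.
        static Object loadOrderFromDatabase(String orderId) {
            return orderId;
        }
    }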
    Don't use Hibernate, TopLink, or any other relational-to-object framework; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features: they're much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database features in this area.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a workmanager plus slave processes spawned on the other nodes. The workmanager has to be auto-fail-safe and auto-scalable, so that if the node holding the workmanager instance fails (due to cache cluster member departure, a reset, or something else), another workmanager is automatically spawned on the first available node.
    Also, the workmanager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out of the box, Coherence has an implementation of a workmanager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. But this will be the first time for me doing an OS upgrade on a Mac (all 4 Macs in my house came with Leopard on them so we've only done the "software update" variety). I am a reformed PC guy so humor me!
    What is the best practice to upgrade from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive, and install Windows fresh... then all my apps... then the data. I hate that, and it takes hours.
    What is the best practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads, select your language and click on the Continue button. When the menu bar appears, select Disk Utility from the Installer menu (Utilities menu for Tiger). After DU loads, select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or has failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified", then select your OS X volume from the list on the left (the sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, re-run Repair Disk until no errors are reported. If no errors are reported, quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example, I want:
    Position 1 (S) = '50007200'
    Position 2 (S) = '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY?
    If so, please supply an example.
    Thanks...
    ....Mike

  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I install just the last/latest one? I'd appreciate being directed to any best-practices discussions.
    FYI, I will want to install Rosetta for older applications, CS3 & CS4, that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles and System Update, on the other hand, are not so forgiving, especially if you have agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) We halt all software rollouts/patching as best we can.
    b) Our software deploys (bundles) are on event: user login. Typically the system update is on device refresh, or at a scheduled time, and device-associated.
    If possible, I'd suggest that you use WOL, system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Best practices for archiving a published app?

    So say you've finally published your (single-folio) app with DPS.
    What's the best way to deal with that folio now, in the Folio Builder panel? (I made what was probably the mistake of building the final version using my primary DPS account, not one set up only for the publication.) I've locked it in Folio Producer, which I assume will prevent me from accidentally doing anything to it, though I haven't tested that.
    With a print publication, I'd be archiving everything and setting aside my final files. What's a good thing to do with these? (This points out some more big gaps in the DPS workflow, such as tools for organizing the Folio Builder panel and a "package" or "archive" function.)
    Any stories of what you've done to safely archive would be appreciated.
    Thanks

    If you organized your folio properly and created a sidecar.xml file, you can delete the folio and archive the source files. If you ever need to re-create the folio, it should be easy enough to import the folio. You wouldn't want to do this with folios in a multi-issue app.
    As you suggest, it would be nice to be able to hide published folios from view in the Folio Builder panel and Folio Producer Organizer.
