Best strategy to update to LR4.1

With the final release of LR4.1 now out, I have some cleaning up to do, since I still have LR3.6 installed.
What would be the best strategy if you had both LR3.6 and LR4.1 RC2 installed?
1) Uninstall LR3.6
2) Install LR4.1 and let it replace LR4.1RC2 automatically.
Really, I'd appreciate it if Adobe would disclose more openly what the installer will do and what your different options for an update are.
Also, after upgrading from LR3.6 to LR4.0, I made the mistake of allowing the catalogue to keep both a Process 2010 copy and a Process 2012 copy. So almost everything got duplicated, which in the end was more hassle than anything else. I have a very big catalogue and don't want to chase everything down one by one manually. Suppose I make sure to write the metadata to all files and then create a new catalogue from scratch: would the new catalogue still include two or more versions of these files? Or would I get rid of the duplicated Process 2010 versions? Or is there an easy way to select all those Process 2010 versions and remove them from the catalogue?
Thanks in advance for sharing your suggestions or advice.

You can leave everything where it is now. Here is what the installer says on the 2nd screen:
Please note that if Lightroom 4 is already installed on your system, this installer will update it in-place. If the application has been renamed/moved after installation, its new name will be retained. For more detailed information, please see http://go.adobe.com/kb/ts_cpsid_92810_en-us
That means that if you had LR4.1 RC2 installed, it will be replaced by the LR4.1 final build but the name stays the same. That's a bit confusing, but I suspect it was done to avoid broken application shortcuts etc. If it bothers you, you can rename the application afterwards to Adobe Photoshop Lightroom 4.

Similar Messages

  • Best strategy for data definition handling while using OCCI

Hi, the subject says it all.
Assuming OCCI is used for data manipulation (e.g. object navigation), what's the best strategy to handle data definitions in the same context?
I mean primarily the dynamic creation of tables and columns.
The apparent choices are:
- SQL from OCCI.
- use OCI in parallel.
Did I miss anything? Thanks for any suggestion.

    Agreeing with Kappy that your "secondary" backups should be made with a different app. You can use Disk Utility as he details, or a "cloning" app such as CarbonCopyCloner or SuperDuper that can update the clone periodically, rather than erasing and re-copying from scratch.
[CarbonCopyCloner|http://www.bombich.com] is donationware; [SuperDuper|http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html] has a free version, but you need the paid one (about $30) to do updates instead of full replacements, or scheduling.
If you do decide to make duplicate TM backups, you can do that. Just tell TM when you want to change disks (thus it's a good idea to give them different names). But there's no reason to erase and do full backups; after the first full backup to each drive, Time Machine will back up only the changes made since the last backup *to that disk*. Each is completely independent.

  • Best way to update 8 out of10 million records

    Hi friends,
I want to update 8 million records of a table which has 10 million records. What could be the best strategy if the table has a BLOB column with 600GB worth of data? The BLOB itself is 550GB. I am not updating the BLOB column.
Usually with non-BLOB data I have tried the "CREATE TABLE new_table AS SELECT <do the update here> FROM old_table;" method.
How should I approach this one?

    @Mark D Powell
To give you some background, my client faced this problem a week ago. This is part of a daily cleanup activity.
Right now I don't have access to it due to a security issue. I could only take a few AWR reports and stats when the access window was open. So basically, the next time I get access I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 Million rows? 
In a previous run, a single update with a full table scan in the plan and no parallel degree started reading from UNDO (current_obj=-1 on the "db file sequential read" wait event) and errored out after 24 hours with a tablespace-full error on the tablespace which contains the BLOB data (a separate tablespace).
To add to the problem, the redo log files were sized too small, only about 50MB.
The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL ID show:
- log file switch (checkpoint incomplete) and log file switch completion as the events comprising 62% of the wait events
- CPU 29%
- db file sequential read 6%
- direct path read 2%, and others contributing a little.
In 30% of the "db file sequential read" samples, current_obj#=-1 and p1 showed the undo file id.
    Is there any concurrent DML against this table? If not, the parallel DML would be an option though it may not really be needed. 
I think there was in the previous run, and I have asked them to avoid it in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227
    How many indexes are effected by the update if any?
The last column of the primary key is the only column to be updated (I mean, used in the "SET" clause of the update).
    Do you expect the update will cause any row migration?
Yes, I think so, because the only column which is going to be updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table you probably want to use pl/sql so you can loop through the data issuing a commit every N rows so as to not lock other concurrent sessions out of the table for too long a period of time.  This may well depend on if you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated.  If not you might want to use a driving table to control the processing.
Right now, to avoid the UNDO issue, I have suggested the PL/SQL approach and have asked for the REDO size to be increased to at least 10 times larger.
My big question after seeing the wait event profile for the session is:
Which was the main issue here, the redo log size or the reading from UNDO that hit the update statement? The buffer gets had shot up to 600 million, yet there are only 220k blocks in the table.
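The loop-with-commit-every-N-rows approach Mark describes, with a restartable driving condition that skips rows already processed, can be sketched abstractly. This is a hypothetical in-memory illustration, not JDBC or PL/SQL against the real table: the `Row` class, the batch size, and the `updated` flag stand in for the driving cursor and the real UPDATE/COMMIT.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedUpdateSketch {
    // Hypothetical stand-in for a table row: a key plus a flag that marks
    // whether the update has already been applied (the "driving" state).
    static class Row {
        final int id;
        boolean updated;
        Row(int id) { this.id = id; }
    }

    // Process rows in batches of batchSize, "committing" after each batch.
    // Rows already marked as updated are skipped, so a run interrupted
    // midway can simply be restarted and will pick up where it left off.
    static int run(List<Row> rows, int batchSize) {
        int commits = 0, inBatch = 0;
        for (Row r : rows) {
            if (r.updated) continue;      // restartability: skip finished rows
            r.updated = true;             // the actual UPDATE would go here
            if (++inBatch == batchSize) { // COMMIT every N rows to limit
                commits++;                // undo/redo pressure and lock time
                inBatch = 0;
            }
        }
        if (inBatch > 0) commits++;       // final partial-batch commit
        return commits;
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(new Row(i));

        // Simulate a first run that was interrupted after 4 rows.
        for (int i = 0; i < 4; i++) rows.get(i).updated = true;

        // The restarted run only touches the remaining 6 rows: 2 commits of 3.
        System.out.println("commits=" + run(rows, 3));
    }
}
```

In the real database, the flag would be the updated column itself (or a control table), so the driving query `WHERE col <> new_value` naturally skips finished rows on a restart.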

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can
b) Our software deploys (bundles) are on event: user login. Typically the system update is on Device Refresh or a scheduled time, and is device-associated.
If possible, I'd suggest that you use WOL, system update, and voila.
Or, if no WOL is available, then tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • What's the best strategy to implement ads?

    Hello Everyone,
    I’ve inherited a site as the content manager and the
    owners would like to start selling ad space on certain pages.
    Mostly the ad banners will be on the either side of the web pages.
    To see the site with out signing-up this page is public:
    http://www.kidstylesource.com/industry/index.php?option=com_content&task=blogcategory&id=2 7&Itemid=91
    The site is built with Joomla and Dreamweaver. The site is
    fairly removed from the Joomla structure that Dreamweaver will be
    playing a big part here setting up ad banner areas. Joomla has
    it’s own way of running ad banners but as mentioned
    it’s very removed from the Joomla way.
    I’m just wondering what is the best strategy to
    implement the ads with placement on the page, tables and/or div
    with out doing a whole rewrite of each page? The body of the pages
    are a mix of tables and div.
    Also I’d like to think about the future using an ad
    server as I’ve never used one before and don’t know
    what code/structure the ad server is expecting on the website. At
    this time the website is just getting off the ground so I feel an
    ad server is not necessary until traffic picks-up.
    Many Thanks,
    John V.

    Hi Helen,
    Are Form1, Form2 etc five different pages? Are they based on different tables?
Typically, a tree would be a hierarchical structure (child, parent, grandparent etc) - your structure is more like a simple list.
    Also typically, a report is used as the front-end to a form. A link on the report would move the user to a form that allows them to insert/update/delete data. If the five "forms" are based on different data, I would have five tabs in your app - one for each - and have the front-end report as the main page for each tab.
    Or, perhaps, I'm reading your requirement wrong?
    Andy

  • What is the best strategy to use both Z10 and Q10?

Assume I have both a Z10 and a Q10 and I'd like to use them on alternate days; what's the best strategy to do so?
BBM should be fine with the same BlackBerry ID, and it can just keep switching between the 2 devices.
If I am using local contacts and calendars, is there an easy switch to keep them in sync on both the Z10 and Q10?
There is also other information to sync in Password Keeper, Remember... etc.
    Is there a solution?

    Hello,
    For calendar and contacts, there is this:
    http://supportforums.blackberry.com/t5/BlackBerry-Q10/How-To-OTA-Sync-BB10-and-non-BES-Outlook-Overv...
    With that, I actually keep all of the following in sync:
    Two instances of Desktop Outlook (2007 and 2010)
    Z10
    PlayBook
    Outlook.com
    And, before I decommissioned it, also an Android device. Any device that can synchronize with Outlook.com can use this solution to keep in sync for calendar and contacts. You, of course, need to not use solely local contacts and calendar but instead keep them synchronizing with Outlook.com.
    For the other things you mention, I know of no solutions other than backup/restore...but I do not recall if LINK offers the selective method for those.
    Good luck!

I have Lightroom 5 from an online update to LR4. My computer is ill and needs a system wipe and re-installation of programs. Can I reinstall LR5 through Adobe after the wipe?

As stated, I operate LR5 as an update from LR4. LR5 was installed as a downloaded update from Adobe (via an Amazon purchase).
My laptop is ill, having nearly filled my HDD. I need to wipe my HDD and begin again to regain some storage. Is there any way I can re-install LR5 from Adobe as part of the original purchase agreement?
    Many thanks
    Brian Mordew

You can download through the following page and use your serial number to activate it:
    Lightroom - all versions
    Windows
    http://www.adobe.com/support/downloads/product.jsp?product=113&platform=Windows
    Mac
    http://www.adobe.com/support/downloads/product.jsp?product=113&platform=Macintosh

  • Best strategy for variable aggregate custom component in dataTable

    Hey group, I've got a question.
    I'd like to write a custom component to display a series of editable Things in a datatable, but the structure of each Thing will vary depending on what type of Thing it is. So, some Things will display radio button groups (with each radio button selecting a small set of additional input elements, so we have a vertical array radio buttons and beside each radio button, a small number of additional input elements), some will display text-entry fields, and so on.
    I'm wondering what the best strategy for tackling this sort of thing is. I'm sort of thinking I'll need to do something like dynamically add to the component tree in my custom component's encodeBegin(), and purge the extra (sub-) components in encodeEnd().
    Decoding will be a bit of a challenge, maybe.
    Or do I simply instantiate (via constructor calls, not createComponent()) the components I want and explicitly call their encode*() and decode() methods, without adding them to the view tree?
    To add to the fun of all this, I'm only just learning Faces (having gone through the Dudney book, Mastering JSF, and writing some simpler custom components) and I don't have experience with anything other than plain vanilla JSP. (No EJB, no Struts, no Tapestry, no spiffy VisualDevStudioWysiwyg++ [bah humbug, I'm an emacs user]). I'm using JSP 2.0, JSF 1.1_01, JBoss 4.0.1 and JDK 1.4.2. No, I won't upgrade to 1.5 (yet).
    Any hints, pointers to good sample code? I've looked at some of the sample code that came with the RI and I've tried to navigate the JSF Blueprints stuff, but I haven't really found anything on aggregating components into a custom component. Did I miss something obvious?
    If this isn't a good question, please let me know how I can sharpen it up a bit.
    Thanks.
    John.

    Hi,
We're doing something very similar. I had a look at the Tomahawk Date component, and it seems to dynamically create InputText components in encodeEnd(). However, it doesn't decode these directly (it only expects a single textual value). I expect you may have to check the request yourself in decode().
    Other ideas would be appreciated, though - I'm still new to JSF.

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed?  For instance, when a Canvas is resized, I would like to update all the children UIComponents height and width so the content scales properly.
    Right now I am trying to loop over the children calling InvalidateProperties, InvalidateSize, and InvalidateDisplayList on each.  I know some of the Containers such as VBox and HBox have layout managers, is there a way to leverage something like that?
    Thanks.

You would only do that if it makes your job easier. Generally speaking, it would not.
When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • Best Strategy?: Lion Upgrade, keeping SnowLeopard as well

    Intent:
    I want to upgrade an older iMac (Intel) to Lion
    I  am running SnowLeopard.
    I want to keep Snow Leopard for some apps. (Too many very expensive apps that may have issues on Lion).
    Question: What's my best strategy to upgrade to Lion while keeping SnowLeopard on one iMac?
    I've done this in the past, but am out of touch with best current practice.
I doubt it's changed, but it never hurts to ask!
    I'd wait for Mountain Lion, but I believe that won't be available until after MobileMe goes away. Don't know how it will run on a 2GHz Core 2 Duo.
    I have limited room available on my iMac, in the past I'd run on a tower, but that's not the case today.
    This is really about adding iCloud to MobileMe.
    The iPad 3 has another acct - I don't want to lose the email addresses, etc which are part of MobileMe
    & I don't think I can justify purch of a new machine before the changeover.
    Sorry, if that's a bit of a complicated scenario. This could actually fit with MobileMe forum...

    Thanks much. Sounds like the same situation as always - partition.
I have 2 partitions now, one an NTFS partition to run Windows when I must, as a hard boot.
Creating that one, I think, required reformatting the entire drive, but I should be able to slice another one out of the current Snow Leopard partition since it's HFS+, I think.
Or else I'll have to clone it to an external drive before repartitioning... (not my favorite option).
    My problem is that space issue.
    I'm on an iMac as my main machine these days.

  • What is the best procedure to update from Leopard (or Snow Leopard) to Mavericks?

The patient:
    Macbook Pro A1278
    Running 10.5.8 -- but have an Update DVD to 10.6 (Snow Leopard)
Has gotten slower at start-up
Very slow to power off (shut-down takes a lot of time!)
    What is the BEST procedure to update it from Leopard to Mavericks?
    A) update & clean
    Backup to Time Machine (in Leopard)
    Update from 10.5.8 to 10.6 (Snow Leopard)
    Update 10.6 to 10.6.8
Download and update to Mavericks
Do some kind of cleanup to increase speed at start-up / shut-down - any suggestions?
    or
    B) fresh install & update & recover Time Machine
    Backup to Time Machine
    Reinstall a fresh 10.5.8 (Leopard) - should increase speed in start-up / shut-down ??
    Update from 10.5.8 to 10.6 (Snow Leopard)
    Update 10.6 to 10.6.8
Download and update to Mavericks
    Recover users data & config from Time Machine
I am thrilled to hear your advice!!
PS. Additional suggestions to make a faster boot / power-down, before or after updating, are very welcome ;!)

    Go to  Menu > About this Mac > and tell us Version, Processor & Memory specs on your Mac. Also available hard disk space, by choosing your Macintosh HD and "Get Info" (cmd-i)
If you're having issues now with slow startup and shutdown, it's probably a third-party item. If it is, then it may be limited to your user account. If that's the case and you do a clean install and then migrate, you will wind up migrating it right back.
    The first thing to check would be to boot into your Guest account and test, or try starting in Safe Mode and see if the problem still occurs?
    Restart holding the "shift" key.
    (Expect it to take longer to start this way because it runs a directory check first.)
    If this works look in System Preferences > Users & Groups > Login items and delete any third party login items.
    Also look in /Library/Startup Items. Nothing is put in that folder by default, so anything in there is yours. Then log out and back in or restart and test.
If the problem is sorted, be sure to make a new backup before proceeding. Since the problem is sorted and you have a backup without the offending items, there's no reason not to use the simple upgrade method you outlined in A, with the exception of "cleaning". The only cleaning your Mac should need is with a soft cloth. Stay away from so-called cleaning/optimizing utilities.

  • What is the best strategy to save a file in client computer

I want to save some information in a file on the client computer. What is the best strategy to do so? There are some ways I can think of, but none of them is good enough for me.
1. I could grant all-permissions. Then I can actually write what I want. But in order to make the program run on all platforms/all client computers, I can't make any assumptions about the file system of the client computer. So this is not good.
2. I can write a file into the .javaws directory. But how can I get the file path for this directory? The JNLP API does not give this path to us. I can't think of a way to get this path for every client computer (Windows, Mac, Unix).
3. Write it as a muffin. Seems fine. But I often change server and path. So once I change servers, the client will lose the saved file, since a muffin is associated with a server and path.
4. I can just open one file with no path. I think J2SE will treat this file platform-dependently. For example, on W2K this file will be put on the Desktop. This is bad.
Any better ideas?

    In the past I have used the Properties class to do things like this. Using load and store, you can read and write key=value pairs.
    I store the file in the user.home directory. You can use System.getProperty("user.home") to get this location.
No guarantees, but I thought that this user.home property was good for any OS with a home-directory concept. If that turns out not to be true, maybe the system property java.io.tmpdir would be more consistent across platforms. This, of course, would be subject to deletion by the OS/administrators.
    -Dave
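Dave's Properties suggestion looks roughly like this. A minimal sketch, assuming you're free to pick the file name (`myapp.properties` is made up), with `java.io.tmpdir` as the fallback he mentions; it uses modern try-with-resources, whereas on the 1.4-era JVMs of this thread you'd close the streams in finally blocks.

```java
import java.io.*;
import java.util.Properties;

public class PrefsSketch {
    // Resolve a per-user settings file: prefer user.home, fall back to
    // java.io.tmpdir on platforms without a home-directory concept.
    static File settingsFile() {
        String dir = System.getProperty("user.home",
                         System.getProperty("java.io.tmpdir"));
        return new File(dir, "myapp.properties"); // hypothetical file name
    }

    // Write the key=value pairs to disk.
    static void save(File f, Properties p) throws IOException {
        try (OutputStream out = new FileOutputStream(f)) {
            p.store(out, "my app settings");
        }
    }

    // Read them back; an absent file just yields empty Properties.
    static Properties load(File f) throws IOException {
        Properties p = new Properties();
        if (f.exists()) {
            try (InputStream in = new FileInputStream(f)) {
                p.load(in);
            }
        }
        return p;
    }

    public static void main(String[] args) throws IOException {
        File f = settingsFile();
        Properties p = new Properties();
        p.setProperty("lastDir", "/data/export");
        save(f, p);
        System.out.println(load(f).getProperty("lastDir")); // /data/export
    }
}
```

Note that under Java Web Start this still needs enough permissions to touch the file system, so it addresses the "where" question rather than the sandbox question.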

  • Best way to update a solaris jumpstart OS image.

Hi all,
I've recently been building some v240s but have run into trouble with the rather out-of-date 02/02 instance of Solaris 8 (yes, I did say Solaris 8 - it's a political thing..)
    Anyhow, I have cd images of Solaris 8 02/04 and have a copy of the Sun Blueprints Jumpstart book by "Howard and Noordergraph".
    On page 92, it says to use the "setup_install_server" script with the -b option for /jumpstart/OS/Solaris-xx-xx-xx.
I've done that without any problems; then it goes on to say that if you want an install server, do the same command again without the -b switch.
The problem is that it spews out this message.
733 root@bbs00080 # ./setup_install_server -b /jumpstart/OS/Solaris_8_02-2004/
    Verifying target directory...
    setup_install_server:
    The target directory /jumpstart/OS/Solaris_8_02-2004/ is not empty. Please choose an empty
    directory or remove all files from the specified directory and run
    this program again.
    So i chose an empty directory and it goes and does it.
    Is this an errata?
I already have /jumpstart/OS/Solaris_8_02-2002/ and I wanted to update the files so that I can boot the v240r.
Except 02/02 doesn't let me do that, as it doesn't support the v240r arch.
Sun told me this, but I'm lost because my existing profile for this box lists all the packages I want to install, yet it whinges about not being able to find a .cdrom toc file.
I already have three flash archives in my rules file, which I created from good builds of 02/02 and which work, but this pesky update for the 240r is getting to be a little tricky.
So far, I've tried copying all the packages from the 02/02 Products directory into 02/04 Products, but some of them don't install despite being there.
Can anyone suggest the best way to update my 02/02 with the 02/04 boot loader for the v240?
    Thanks in advance and sorry for any confusion.
    D.

    I'm sorry. I meant. Say my array of x positions is 3, 4, 9. I change 9 to 30. How can I make the polygon see that change?

  • Best way to update individual rows of a Table?

I've taken a look at some examples, though haven't gotten any clarification on this. I am looking to have something close to a listbox or table where I can update just a single column of row values at a once-per-second pace. I am looking to display our data-acquisition values in a table or listbox. The single listbox seemed to work well for this, but I was unable to use row headers to list the channel names next to the channel values. I was thinking about connecting the cursor values of two listboxes to do this, but didn't find any info on this for the single listbox.
    I have a few questions:
    1) I have a 1D array to where I want to use that array of data to constantly update the first column (with a multitude of rows) of a table.  I am looking for the best route so as not to take up too much processing time in doing this.
    What is the best way to update individual rows of a table?   Invoke Node "Set Cell Value" ... or is there another method?
2) Why is it that after every other iteration the row values are erased?
Also, for adding additional strings to the original array, is it best to use the "Array Subset" and then the "Build Array" function, or the "Array Subset" and "Insert Into Array" functions?
    See the attached example.
    Thanks.
    Attachments:
Table Example.vi 19 KB

    Jeff·Þ·Bohrer wrote:
    2) Why is it that after every other iteration the row values are erased?
Classic race condition. Dump the for loop and p-node and just wire the 2D array to the table terminal!
I'm not seeing the race condition. What I am seeing is the table emptying after the last element was written to it on every other run. I watched this with highlight execution on.
    But I'm in full agreement with just writing to the terminal.  It is a 1D array, so you will need to use a build array and transpose 2D array in order for it to write properly.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics can't continue to be used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less-than-optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics on such a VLDB. Thanks in advance.
    Bill Thacker

I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them - and also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that can be used for executing the appropriate UPDATE STATISTICS commands for each table based on cardinality).
The above solution would be much more ideal IMO than just issuing a single UPDATE STATISTICS command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
    Come on SQL Server Community. Show me some love :)
    Bill Thacker
