Best Practice for Updating Administrative Installation Point for Reader 9.4.1 Security Update?

I deployed Adobe Reader 9.4 using an administrative installation point (AIP) with Group Policy when it was released. This deployment included a transform file. It's now time to update Reader with the 9.4.1 security MSP.
My question is, can I simply patch the existing AIP in place with the 9.4.1 security update and redeploy it, or do I need to create a brand new AIP and GPO?
Any help in answering this would be appreciated.
Thanks in advance.

I wouldn't update your AIP in place. I end up keeping multiple AIPs on hand: each time a security update comes out, I make a copy and apply the update to the copy. One reason is this: when creating the AIPs, you need to apply the MSPs in the correct order; you cannot simply apply a new MSP to the previous AIP.
Adobe's supported patch order is documented here: http://kb2.adobe.com/cps/498/cpsid_49880.html.
That link covers Adobe Acrobat and Reader, versions 7.x through 9.x. A quarterly update MSP can only be applied to the previous quarterly. Should Adobe Reader 9.4.2 come out tomorrow as a quarterly update, you will not be able to apply it to the 9.4.1 AIP; you must apply it to the previous quarterly AIP, 9.4.0. At a minimum I keep the previous 2 or 3 quarterly AIPs around, as well as the MSPs to update them. The only time I delete my old AIPs is when I am 1000% certain they are no longer needed.
Also, when Adobe's developers author the MSPs they don't include the correct metadata entries for in-place upgrades of AIPs - any AIP based on the original 9.4.0 MSI will not in-place upgrade an installation that is based on the 9.4.0 MSI and AIP - you must uninstall Adobe Reader, then reinstall. This deficiency affects all versions of Adobe Reader 7.x through 9.x. Oddly, Adobe Acrobat AIPs will correctly in-place upgrade.
Ultimately, the in-place upgrade issue and the patch order requirements are why I say to make a copy, then update and deploy the copy.
As for creating the AIPs:
This is what my directory structure looks like for my Reader AIPs:
F:\Applications\Adobe\Reader\9.3.0
F:\Applications\Adobe\Reader\9.3.1
F:\Applications\Adobe\Reader\9.3.2
F:\Applications\Adobe\Reader\9.3.3
F:\Applications\Adobe\Reader\9.3.4
F:\Applications\Adobe\Reader\9.4.0
F:\Applications\Adobe\Reader\9.4.1
The 9.4.0 -> 9.4.1 MSP is F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp
When I created my 9.4.1 AIP, I entered these at a cmd.exe prompt - if you don't have robocopy on your machine you can get it from the Server 2003 Resource Kit:
F:
cd \Applications\Adobe\Reader\
robocopy /s /e 9.4.0 9.4.1
cd 9.4.1
rename AdbeRdr940_en_US.msi AdbeRdr941_en_US.msi
msiexec /a AdbeRdr941_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp /qb
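If that hypothetical quarterly 9.4.2 ever ships, the same recipe applies, starting from a copy of the 9.4.0 AIP rather than the 9.4.1 one. The MSP file name below is my guess, following Adobe's usual naming pattern:
F:
cd \Applications\Adobe\Reader\
robocopy /s /e 9.4.0 9.4.2
cd 9.4.2
rename AdbeRdr940_en_US.msi AdbeRdr942_en_US.msi
msiexec /a AdbeRdr942_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd942_all_incr.msp /qb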

Similar Messages

  • Best practice for read-only functionality

    Hi,
    I'm part of the development team of a system with about 100 screens. The customer would like us to add some read-only functionality to the system, so that certain users are able to access the screens but not change any of the data on them. We already have policies in place on the database level that keep read-only users from saving data, but it's not very user friendly to allow users to change data on a screen, only to tell them that they're not allowed to save those changes once they try to do so. It would clearly be better if all components were rendered as read-only components for read-only users, making them unable to make any data changes in the first place.
    User privileges in the system are controlled by roles defined and set in the system (not ADF roles or WebLogic roles). At any given time and place, it's possible to check whether the current user has a certain role. We already use this in a number of places to control which user has access to which screens. In a few places we even control which functionality should be enabled for the current user within a screen, but mostly the access control is currently on the screen level. With read-only users getting access to all screens, it seems we will need a lot of extra in-screen access control to keep these users from changing anything.
    But what's the best practice here? One way to go would be to add some logic to every single active component on every single screen, to determine whether it should be rendered as active or disabled/read-only. But that would require a lot of extra coding.
    So my question is: Is there a smarter way to do this? Maybe something done through skinning? Or something else?
    (I'm not sure how relevant this is for this sort of question, but we're currently using JDev 11.1.1.4.0, and expect to upgrade to 11.1.1.6.0 within the next 6 months)
    Best regards,
    Andreas

    Hi Guna, Puthanampatti and Don,
    Thanks a lot for your replies. I'm currently looking into implementing something along the lines of what Guna has suggested:
    Our application consists of a number of individual work spaces that are deployed as adflibs which have all been added to a "master application work space", and the master application is deployed as an .ear file. Most of the individual work spaces are for all the different functional areas of the application, with their own task flows, page fragments etc. The rest are work spaces with common functionality, like datamodel (entity definitions), utility methods, page templates, and framework extensions. In the latter, we have defined custom classes for all the base classes (somewhat similar to what Don describes, I believe).
    In our custom class for ViewRowImpl, I have added an isAttributeUpdateable method, and in our custom class for ApplicationModuleImpl I have added an isReadOnlyUser method. The isAttributeUpdateable method uses the isReadOnlyUser method to determine if the current user is a read-only user or not; if the user is a read-only user the isAttributeUpdateable method will return false, otherwise true. The isReadOnlyUser method in our base class is just a dummy method that always returns true. But in the ApplicationModuleImpl classes of our individual work spaces, I've written an override for isReadOnlyUser, giving the answer that is relevant for the work space in question (for instance, whether or not the current user has the role "User Administrator").
    That pretty much takes care of all input fields in tables and forms, which is a big step in the right direction. This still leaves some work to be done for components that are not directly linked to view object attributes (like buttons), but I guess that can't be helped. Also, there are a few of the work spaces that contain a number of pages that are related to different user privileges (as in: page 1 requires user privilege A, and page 2 requires user privilege B); in these cases I will have to do something different than just writing an override in the "local" ApplicationModuleImpl class.
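    In code, the overrides look roughly like this. This is only a minimal sketch with hypothetical class and package names; the actual role lookup is application-specific:
    // Framework extension base class for application modules (one file).
    package myapp.framework;

    import oracle.jbo.server.ApplicationModuleImpl;

    public class MyAppModuleImpl extends ApplicationModuleImpl {
        // Dummy default: treat users as read-only unless a work space's
        // application module overrides this with its own role check, e.g.
        // return !currentUserHasRole("User Administrator"); // hypothetical helper
        public boolean isReadOnlyUser() {
            return true;
        }
    }

    // Framework extension base class for view rows (a second file, same package).
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.server.ViewObjectImpl;
    import oracle.jbo.server.ViewRowImpl;

    public class MyViewRowImpl extends ViewRowImpl {
        @Override
        public boolean isAttributeUpdateable(int index) {
            // Ask the owning application module whether the current user is read-only.
            ApplicationModule am = ((ViewObjectImpl) getViewObject()).getApplicationModule();
            if (am instanceof MyAppModuleImpl && ((MyAppModuleImpl) am).isReadOnlyUser()) {
                return false;
            }
            return super.isAttributeUpdateable(index);
        }
    }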
    @Don: What you describe seems to be pretty close in functionality to what we already have, though your implementation is different from ours. You have used your custom base ApplicationModuleImpl class to keep read-only users from committing changes. We use Virtual Private Database and database policies to the same end: if a user without the required full-access role tries to commit data it will cause a database error, which we then handle in the application (so the user gets a message like "You don't have the required privileges to change this data", rather than an ORA message). Unfortunately, our customers are not content with this. They want a solution where all input fields and most of the buttons etc. are disabled for read-only users, and that's why I'm looking into the best/smartest way to do this.
    @Puthanampatti: We already use something similar to what you're suggesting. The challenge I'm currently facing is how best to disable/enable components based on the current user's roles, not how to determine and store those roles.
    Best regards,
    Andreas

  • Best practice for reading GUI values?

    I am using a Matisse GUI. This GUI has a number of variables and buttons that will invoke methods from other classes. How do I pass or otherwise make available the variables in the GUI (jText..., jLabel...) to the receiving function without passing each variable individually?

    You should define objects for each panel.
    For example, your application will consist of a lot of panels.
    Each panel will have two methods: one method will accept
    an object and set all the GUI fields; the other will
    return a new object based on the values in the GUI fields.
    E.g.
    import javax.swing.*;

    public class CirclePanel extends JPanel {
        JLabel radiusLabel = new JLabel("Circle Radius");
        JTextField radiusTF = new JTextField();
        public CirclePanel() {
            // .. layout GUI components
        }
        public void setFields(Circle c) {
            radiusTF.setText(String.valueOf(c.radius));
        }
        public Circle getCircle() {
            return new Circle(Double.parseDouble(radiusTF.getText()));
        }
    }
    Now, let's say you want to have this Panel in a dialog.
    You should put your OK and Cancel buttons in the dialog.
    Don't put the buttons in the panel because you may want
    to use that panel somewhere else that doesn't need the
    buttons.
    import java.awt.event.*;
    import javax.swing.*;

    public class CircleDialog extends JDialog implements ActionListener {
        JButton okButton = new JButton("OK");
        JButton cancelButton = new JButton("Cancel");
        CirclePanel panel = new CirclePanel();
        ActionListener a = null;
        Object src = null;
        public CircleDialog() {
            // ... add circle panel
            // ... add buttons
            okButton.addActionListener(this);
            cancelButton.addActionListener(this);
        }
        public void actionPerformed(ActionEvent evt) {
            // forward button clicks to whoever opened the dialog
            a.actionPerformed(evt);
        }
        public Object getSource() {
            return src;
        }
        public void show(Object src, ActionListener a) {
            this.a = a;
            this.src = src;
            super.show();
        }
    }
    Suppose you have a menubar that launches the dialog:
    import java.awt.event.*;
    import javax.swing.*;

    public class CircleMenuBar extends JMenuBar implements ActionListener {
        JMenuItem circleItem = new JMenuItem("Make Circle");
        CircleDialog circleDialog = new CircleDialog();
        public CircleMenuBar() {
            // build the menu and listen for clicks
            JMenu menu = new JMenu("Circle");
            menu.add(circleItem);
            add(menu);
            circleItem.addActionListener(this);
        }
        public void actionPerformed(ActionEvent evt) {
            Object src = evt.getSource();
            if (src == circleItem) {
                circleDialog.show(src, this);
            } else if (src == circleDialog.okButton) {
                Circle c = circleDialog.panel.getCircle();
                // do something with it
                // like call CircleAPI.add(c); // adds circle to database
                circleDialog.dispose();
            } else if (src == circleDialog.cancelButton) {
                circleDialog.dispose();
            }
        }
    }
    This architecture basically allows your dialogs to be non-modal.
    Typically for a GUI application, I create a Dialogs class that has
    all the dialogs the GUI uses. The dialogs are created only when the
    user asks for them - basically the singleton pattern
    for each dialog.
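    For completeness, a minimal Circle class that the snippets above assume; the original posts never show it, so take this as a guess at its shape:
    public class Circle {
        double radius;
        public Circle(double radius) {
            this.radius = radius;
        }
    }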

  • OSB: What is best practice for reading configuration information

    I'm trying to establish the best way to utilise configuration information within OSB. At present I have a standard Java-style properties file and a Java callout to get certain information, and I have also been experimenting with dynamic routing using an XML-based XQuery resource.
    The latter appears to me to be the best approach and is described on page 3-40 of the OSB User Guide, but it appears to be configurable only from the OSB console and not as a resource from within Workshop. Ideally I'd like to configure an XML resource file that is loaded from disk at OSB startup and then be able to use that via an XQuery.
    How have other people addressed the issue of getting access to parameter/control values at run time?

    Can you please elaborate a little on what you mean by making it available "via an OSB Server URL"? What I have done:
    1) Created simple web application in domain directory.
    Sample web.xml
    <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5">
    <display-name>Sample</display-name>
    <welcome-file-list>
    <welcome-file>properties.xml</welcome-file>
    </welcome-file-list>
    </web-app>
    2) Deploy this web application in exploded format using the WebLogic console (http://localhost:7001/console) in the OSB domain instance.
    3) properties.xml is available via http://localhost:7001/Sample
    4) Use the above URL in your service callout.
    And how does this avoid file IO, and how is it cached? I'm not sure what you mean here. If we are using a Java callout, we have to open and close the file handle every time; there is overhead in creating/reading the file in Java every time the callout is used. This overhead can be avoided if the XML is available over HTTP. I'm quite sure that WebLogic Server will cache this XML file by default (what use is a web server if it cannot cache static content?), as this is not dynamic content. The WebLogic Server forums will be the right place to get confirmation on how to configure caching for the web application.
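    For illustration, a minimal Java callout along these lines; the URL is the hypothetical endpoint from step 3, and the in-memory caching is just a sketch:
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;

    public class ConfigFetcher {
        private static volatile String cache;

        // Returns the properties XML, fetching it over HTTP on first use only.
        public static String getProperties() throws Exception {
            if (cache == null) {
                URL url = new URL("http://localhost:7001/Sample/properties.xml");
                InputStream in = url.openStream();
                try {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                    cache = out.toString("UTF-8");
                } finally {
                    in.close();
                }
            }
            return cache;
        }
    }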
    Thanks
    Manoj
    Edited by: mneelapu on Jul 1, 2009 8:46 AM

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is in
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following:
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] Also, I need to look up/model a value which will specify the current server type (Dev/Stage/Prod) on which the OSB message flow is running; say, some construct which will act as a global configuration and can be accessible inside the OSB message flow. If I get the value of the global variable as Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread: Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which will serve the XML file over HTTP, then getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
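    For illustration, one way to pull the environment out in an XQuery expression, assuming the hypothetical URI layout above (the leading slash makes the environment the third token):
    let $uri := $inbound/ctx:transport/ctx:uri/text()
    return fn:tokenize($uri, "/")[3]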

  • Best Practices for SRM Installation !!

    Hi
    can someone share the best practices for SRM installation?
    What is the typical timeframe to install SRM on a development server as well as on the production server?
    Appreciate the responses
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well.
    http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
    I have a question. I am in the process of deploying SCCM 2012 R2... I was in the process of deploying a Software Update Point on SCCM with an existing WSUS server installed on a separate server from SCCM.
    A debate has started with a colleague who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS will be downloading the updates from Microsoft, and SCCM should be working as a downstream server to fetch updates from the WSUS server.
    But in my understanding it is recommended to install the WSUS server on the same server where SCCM is installed... actually, it is recommended to install WSUS on a site system, and you can use the same SCCM server to deploy WSUS.
    Please advise me on the best practices for deploying SCCM and WSUS... what does Microsoft say: should WSUS be installed on the same SCCM server, or should WSUS be on a separate server from the SCCM server?
    Awaiting your advice ASAP :)
    Regards, Owais

    Hi Don,
    thanks for the information, another quick one...
    is the above-mentioned configuration I did correct in terms of planning and best practices?
    I agree with Jorgen, it's ok to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
    The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
    One thing to note, is that if WSUS ever becomes "corrupt" it can be difficult to repair and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
    Sometimes, WSUS goes wrong (not because of ConfigMgr).
    Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS. (this is not a recommendation for a CAS, just to be aware)
    Don
    (Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
    This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!)

  • Best practice for auto update flex web applications

    Hi all
    is there a best practice for auto-updating Flex web applications, much the same way AIR applications have an auto-update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

    Hey drkstr
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application etc...
    I can always query the server for the version and prevent loading from cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.

  • IOS Update Best Practices for Business Devices

    We're trying to figure out some best practices for doing iOS software updates to business devices. Our devices are scattered across 24 hospitals and parts of two states. Going forward there might be hundreds of iOS devices at each facility. Apple has tools for doing this in a smaller setting with a limited network, but to my knowledge, nothing (yet) for a larger implementation. I know Configurator can be used to do iOS updates. I found this online:
    https://www.youtube.com/watch?v=6QPbZG3e-Uc
    I'm thinking the approach to take for the time being would be to have a mobile sync station setup with configurator for use at each facility.  The station would be moved throughout the facility to perform updates to the various devices.  Thought I'd see if anyone has tried this approach, or has any other ideas for dealing with device software updates.  Thanks in advance. 

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load: https://www.apple.com/osx/server/features/#caching-server. From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have a 50 MB circuit going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006 and at the 3rd site we are trying to connect to a catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works; is this not creating a large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620 but this is not compatible with metro-e so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link and form a neighbourship between the main and remote site for the L3 routing to work.
    Upon this you should be able to reach the remote site hosts.
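    For illustration, the kind of statement meant here, entered on both catalysts; the AS number and the 172.16.171.0/24 subnet below are placeholders for your actual values:
    router eigrp 100
     network 172.16.171.0 0.0.0.255
     no auto-summary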
    HTH-Cheers,
    Swaroop

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to get the best practice for securing a point-to-multipoint wireless bridge link: point A to B, C, & D; and B, C, & D back to A. Which authentication and configuration included in the Aironet 1410 IOS is best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on the 1400 series should help you:
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update all the children UIComponents' height and width so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties, invalidateSize, and invalidateDisplayList on each. I know some of the Containers such as VBox and HBox have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best practice for the Update of SAP GRC CC Rule Set

    Hi GRC experts,
    We have, in a CC production system, an SoD matrix that we would like to modify extensively, basically by activating many permissions.
    What is the best practice for accomplishing our goal?
    Many thanks in advance. Best regards,
      Imanol

    Hi Simon and Amir
    My name is Connie and I work at the Accenture GRC practice (and am a colleague of Imanol's). I have been reading this thread and I would like to ask you a question related to this topic. We have a case with a global rule set "Logic System", and we may also need to create a specific rule set. Is there a document (from SAP or from best practices) that indicates the potential impact (regarding risk analysis, system performance, process execution time, etc.) caused by implementing both types of rule sets in a production environment? Are there any special considerations to be aware of? Have you ever implemented this type of scenario?
    I would really appreciate your help, and if you could point me to specific documentation it would be of great assistance. Thanks in advance and best regards,
    Connie

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan, 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them; also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that can be used for executing the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single UPDATE STATISTICS command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
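    For reference, the knob being discussed is just the sampling clause of UPDATE STATISTICS; the table name below is hypothetical:
    -- Full scan: most accurate histograms, slowest; what we run today.
    UPDATE STATISTICS dbo.SomeBigTable WITH FULLSCAN;
    -- Sampled: much faster on a VLDB, at some cost in histogram accuracy.
    UPDATE STATISTICS dbo.SomeBigTable WITH SAMPLE 10 PERCENT;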
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Best practice for installation oracle 11g rac on windows 2008 server x64

    hello!
    can somebody tell me a good book or other literature regarding "best practice for installation of Oracle 11g RAC on Windows 2008 Server x64"? thx in advance!
    best regards,
    christian

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira
