Quick Question: CS6 Installation Best Practice

Hi Guys
I have CS5 and CS5.5 Master Collection running on my PC (Win7 64-bit SP1, Intel Core i7 2.67 GHz, 24 GB RAM) and I've just taken ownership of the CS6 upgrade. When I loaded MC CS5.5 I had a bunch of errors, which turned out to be related to the update not needing to replace certain existing components of CS5.
Should I just insert the disk and run or is there anything I should do to prepare this time? Any advice is welcome. I'd like to avoid uninstalling the previous versions but if that's the advice, I'll run with it.
Regards,
Graham

If you need Flash Builder and it's very important to you:
Go to Control Panel and start the CS5.5 uninstall. When the screen with all the product names comes up, check Flash Builder and uncheck everything else; that will remove just Flash Builder.
But forget it if you don't need it or don't use it.

Similar Messages

  • "Installation best practices." Really?

    "Install Final Cut Pro X, Motion 5, or Compressor 4 on a new partition - The partition must be large enough to contain all the files required by the version of Mac OS X you are installing, the applications you install, and enough room for projects and media…"
As a FCS3 user, if you were to purchase an OS Lion Mac, what would your "Installation best practices" be?  It seems the above recommendation is not taking into consideration FCS3's abrupt death, or my desire to continue to use it for a very long time.
    Wouldn't the best practice be to install FCS3 on a separate partition with an OS that you never, ever update?   Also, there doesn't appear to be any value added to FCS with Lion.  That's why I would be inclined to partition FCS3 with Snow Leopard -- but I'm really just guessing after being thrown off a cliff without a parachute.
    Partitioning… does this mean I'll need to restart my computer to use FCS?  What about my other "applications"? Will I be able to run Adobe Creative Suite off the other partition, or is the "best practice" to install a duplicate of every single application I own on the FCS partition?
    Note: This is not to say I'll never embrace FCX. But paying (with time & money) to be a beta tester just isn't gonna happen.  If it's as easy to use as claimed, I'm not falling behind, as has been suggested by some. I'm just taking a pass on the early adopter frustration.

    Okay, but are you not concerned with future OS updates that may render FCS3 useless?  Perhaps our needs are different, but I want and need FCS3 to continue to work in the future.
    That "best practices" link up at the top of this page is there for a reason, and it says "partition."  What it doesn't say is why, and that's really disappointing and concerning.  It's a little late in the game, but I would prefer Apple walk like a man and lay it on the line; the good, the bad, and the ugly.
    I'm glad to hear Lion is working okay for you!

  • Installation Best Practice

    Hi Guys
    I just bought the Master Collection upgrade from CS5 to CS 5.5 (as part of the deal for auto upgrade to CS6). Are there any best practices I should observe for installing the update to make sure it doesn't gum up or fall over or whatever (or should I just whack the disc in and let it run)???
    We're running Win7 SP1 64bit.
    Regards,
    Graham

Also, as Steve mentioned above, you can turn off UAC and your anti-virus program, just to be sure that none of your security settings interrupt the installation.
    Here is a quick guide to Restart Windows in a modified mode | Windows 7, Vista -
    http://helpx.adobe.com/x-productkb/global/restart-windows-modified-mode-windows.html
    Enjoy!

  • Client on Server installation best practice

    Hi all,
I've wondered about this subject, searched, and found nothing relevant, so I'm asking here:
Is there any best practice/state of the art for when you have a client application installed on the same machine as the database?
I know the client app can use the server binaries, but should I avoid that?
Should I install a separate Oracle client home and point the client app at the client libraries?
In 11g there is no changeperm.sh anymore; does that suggest Oracle is fine with client apps using server libraries?
For context: I'm on AIX 6 (or 7) with Oracle 11g.
The client app will be an ETL tool, which is why it runs on the DB machine.

    GReboute wrote:
    EdStevens wrote:
Given the premise "+*when*+ you have a client application installed on the same machine as the database", I'd say you are already violating "best practice".
So I deduce from what you wrote that you're absolutely against co-locating a client app and DB server, which I understand and usually agree with.
Then you deduce incorrectly. I'm not saying there can't be a justifiable reason for having the app live on the same box, but as a general rule it should be avoided. It is generally not considered "best practice".
But in my case, should I load or extract hundreds of millions of rows, with GBs flowing through the network and possible disconnection issues, although I could have done it locally?
Your potentially extenuating circumstances were not revealed until this architecture was questioned. We can only respond to what we see.
    The answer I'm seeking is a bit more elaborate than "shouldn't do that".
    By the way, CPU or Memory resources shouldn't be an issue, as we are running on a strong P780.
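If you do go the separate-client-home route, the environment setup for the ETL tool is roughly as follows. This is only a sketch: the paths are hypothetical, and note that AIX uses LIBPATH where Linux uses LD_LIBRARY_PATH.

```shell
# Point the client app at a dedicated client home instead of the server home
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
export LIBPATH="$ORACLE_HOME/lib"        # AIX; use LD_LIBRARY_PATH on Linux
export TNS_ADMIN="$ORACLE_HOME/network/admin"
```

The client app then resolves the Oracle client libraries from its own home, independent of the server installation.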

  • Data Warehouse using MSSQL - SSIS : Installation best practices

    Hi All,
I am working on building a data warehouse based on MSSQL 2008 R2. The requirement is to read source data from files, put it in a stage database, perform data cleansing, etc., and then move the
data to the data warehouse DB. The question is about the required number of physical servers, and which component of MSSQL (MSSQL, SSIS) should be installed on which server, based on any best practices:
Store source files --> Stage database --> Data warehouse DB
The data volume will be high (20-30k transactions per day). Please suggest.
    Thank you
    MSSQL.Arc

Microsoft documentation: "Use a Reference Architecture to Build An Optimal Warehouse
Microsoft SQL Server 2012 Fast Track is a reference architecture data warehouse solution giving you a step-by-step guide to build a balanced hardware configuration and the exact software setup.
Step-by-step instructions on what hardware to buy and how to put the server together.
Setup instructions on installing the software and all the specific settings to configure.
Pre-certified by Microsoft and industry hardware partners for the most optimal hardware and software configuration."
    LINK:
    https://www.microsoft.com/en-us/sqlserver/solutions-technologies/data-warehousing/reference-architecture.aspx
    Kalman Toth Database & OLAP Architect

  • Wireless authentication network design questions... best practices... etc...

Working on a wireless deployment for a client... wanted to get updated on what the latest best practices are for enterprise wireless.
Right now, I've got the corporate SSID integrated with AD authentication on the back end via RADIUS.
I would like to implement certificates in addition to the user-based authentication so we have some level of two-factor authentication.
If a machine is lost, I don't want a certificate to allow an unauthorized user access to the wireless network.  I also don't want poorly managed AD credentials (written on a sticky note, for example) opening up the network to an unauthorized user either... is it possible to do an AND condition, so that both are required to get access to the wireless network?

There really isn't true two-factor authentication you can do with just RADIUS unless it's ISE and you're doing EAP chaining.  One workaround that works with ACS or ISE is to use "Was machine authenticated".  This only works for domain computers.  The way Microsoft works :) is you have a setting for user or computer... this does not mean user AND computer.  So when a Windows machine boots up, it will send its system name first and then the user credentials.  System name (machine) authentication only happens once, at boot-up.  User authentication happens every time there is a full authentication.
    Check out these threads and it explains it pretty well.
    https://supportforums.cisco.com/message/3525085#3525085
    https://supportforums.cisco.com/thread/2166573
    Thanks,
    Scott
Help out others by using the rating system and marking answered questions as "Answered"

  • Re-installation "best practices"?

    well..... Bridge has gone bad..... and I find that I need to reinstall the entire suite to install Bridge..... (feel free to fill in the blanks)
Does a new installation uninstall before installing?  Are there any best practices for a do-over?
    thanks

    sorry, it's CS5
The problem I'm having is with Bridge, and unfortunately the recommendation is to uninstall first. But Bridge seems to be one of the components without an uninstaller or individual installer, so after reinstalling I seem to have the same problems. I posted here in the PS forum because the installer seems to be linked to the PS install.
    regards.

  • Questions VLAN design best practices

    As per best practices for VLAN design:
    1) Avoid using VLAN 1 as the “blackhole” for all unused ports.
    2) In the local VLANs model, avoid VTP (use transparent mode).
    Point 1
    In a big network, I'm having VLAN 1 as the blackhole VLAN. I'd like to confirm that, even if we're not complying with best practices, we're still doing fine.
a) all trunk ports on all switches have the allowed VLANs explicitly assigned.
b) almost all ports on all switches are assigned to specific data/voice VLANs, even if shut down.
c) the remaining ports (some unused SFP ports, for example) are shut down.
d) we always tag the native VLAN (vlan dot1q tag native).
So, no data is flowing anywhere on VLAN 1. In our situation, is it safe to use VLAN 1 as the blackhole VLAN?
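The hardening in points a) through d) can be sketched in IOS configuration like this (interface and VLAN numbers are illustrative, not taken from the actual network):

```
! Native VLAN tagged globally on 802.1Q trunks
vlan dot1q tag native
!
! Trunk with an explicit allowed-VLAN list
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
! Unused port parked in a dedicated blackhole VLAN and shut down
interface GigabitEthernet1/0/24
 switchport mode access
 switchport access vlan 999
 shutdown
```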
    Point 2
Even if we're using the local VLANs model, we have VTP in place. What are the reasons for the best practice? As already said, we allow only specific VLANs on trunk ports (it's part of our network policy), so we do not have undesired layer 2 loops to deal with.
Any thoughts?
    Bye
    Dario

    We are currently using VTP version 3 and migrating from Rapid-PVST to MST.
    The main reason for having VTP in place (at least for use) is to have the ability to assign ports to the correct VLAN in each site simply looking at the propagated VLAN database and to manage that database centrally.
    We also avoid using the same VLAN ID at two different sites.
However, I did find something to look deeper into: with MST and VTP, a remote switch can be root for a VLAN it doesn't even use or have active ports in, and this doesn't feel right.
    An example:
1) switch1 and switch528 share a link with allowed VLAN 100
2) switch1 is the root for instances 0 and 1
3) VLAN 100 is assigned to instance 1
4) VLAN 528 is not assigned to any particular instance, so it goes under instance 0
5) VLAN 528 is the local data VLAN for switch528 (switch501 has VLAN 501)
switch528#sh spanning-tree vlan 528
    MST0
      Spanning tree enabled protocol mstp
      Root ID    Priority    24576
                 Address     1c6a.7a7c.af80
                 Cost        0
                 Port        25 (GigabitEthernet1/1)
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    32768  (priority 32768 sys-id-ext 0)
                 Address     1cde.a7f8.4380
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
    Interface           Role Sts Cost      Prio.Nbr Type
    Gi0/1               Desg FWD 20000     128.1    P2p Bound(PVST)
    Gi0/2               Desg FWD 20000     128.2    P2p Edge
    Gi0/3               Desg FWD 200000    128.3    P2p Edge
    Gi0/4               Desg FWD 200000    128.4    P2p
    Gi0/5               Desg FWD 20000     128.5    P2p Edge
    switch1#sh spanning-tree vlan 501
    MST0
      Spanning tree enabled protocol mstp
      Root ID    Priority    24576
                 Address     1c6a.7a7c.af80
                 This bridge is the root
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    24576  (priority 24576 sys-id-ext 0)
                 Address     1c6a.7a7c.af80
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
    Interface           Role Sts Cost      Prio.Nbr Type
    Should I worry about this?

  • Question on ESB Best Practice

    Hi,
I would like to know the best practice for the following scenario:
1) I have to call different web services based on message content through ESB.
2) I have two options: either develop one ESB service that routes to the different web services based on content, or create a separate ESB service for each web service.
Can anyone tell me which would be best with respect to performance and overall design?
    Thanks,
    Jack

I'm not sure I'm experienced enough, but I'd guess it depends on many things.
- First, about logic
Where do you want to place the logic, and how will you manage it?
You can place the routing logic (routing to different endpoints) in BPEL, but it is hard to manage there. When you put routing logic in the ESB you can manage it without redeploying BPEL. It is easier to update ESB routing rules than BPEL processes.
- Performance
I don't think one ESB service instead of three is a bottleneck, but performance is worth considering.
You could even create a separate subprocess to hold the routing logic, but I think that is the same as one ESB service.

  • PI UDS - Installation Best practices

    Hi all:
    Per the installation guide the only requirement is to have the PI API installed on the same box as the UDS.
    I have the following questions on installing the UDS Framework & PI UDS. Appreciate your answers:
    1) Can this be installed on the xMII server with along with the PI API/SDK?
    2) Can this be installed on any machine with the PI API/SDK installed?
    3) What operating systems does it work with and Is a server class machine a requirement?
    4) If the only option is to install this on the PI server, are there memory or CPU requirements to runs this?
    5) What is the expected network data packet size and frequency? I’m assuming this is a background task that continually runs.

    Srinivasan,
1.) I would highly recommend the native connections over OLEDB in every case where you can get to the data you require from the native xMII UDS. The only reason to favor the OLEDB UDS is when you cannot retrieve the data from the native xMII UDS.
    As for the instabilities, we feel that we have resolved all the issues in the upcoming release, but if you still find a situation where the xMII UDS fails, please send in a support case and the appropriate fixes will be done.
    The current known issues with the 4.0.2.5 xMII OLEDB UDS are as follows.
    - Unicode characters are not always correctly handled.
    - Administrative modes are not accessible.
- Other minor fixes.
    - The connection string dialog is populated and encoded correctly.
    2) Currently the new xMII UDSs are in Acceptance Testing and there is no access to the xMII OLEDB UDS. We are all trying very hard to get this through the process as quickly as possible, but with testing resource issues, it may still take some time.
    Martin.

  • Question about Mobilink best practice

    Hello,
    I have the following data workflow:
    Around 20 tables synchronize upload only.
    7 tables synchronize download only.
    2 tables have bidirectional sync.
    I was wondering if it could be a good idea to create 3 schema models, instead of one.
    This way, the upload, which is critical, could run independently of the download.
    Please, tell me if this is a good practice.
    Thank you
    Arcady

    Hi Arcady,
    No, you cannot run multiple instances of dbmlsync against the same SQL Anywhere database concurrently. If you try this, you will see the error:
    SQL statement failed: (-782) Cannot register 'sybase.asa.dbmlsync' since another exclusive instance is running
    dbmlsync client accesses must be serialized against the same database.
    I was wondering if it could be a good idea to create 3 schema models, instead of one.
    This way, the upload, which is critical, could run independently of the download.
    See: DocCommentXchange - Upload-only and download-only synchronizations
    It's up to you and what you really prefer to manage - you can do all of the work in one model (and create synchronization scripts that do the "right" work), or you can create three separate models and synchronize them separately. If you're "more concerned" (i.e. want to synchronize more often) about the upload-only tables then you can create a separate model and use dbmlsync -uo or the UploadOnly (uo) extended option for that specific model.
    A reminder that if you do end up splitting your one model into multiple models, all of the models have to be kept in synchronization with the MobiLink synchronization server in order for dbmlsync to advance the remote database transaction log truncation offset.
(i.e. in order for delete_old_logs to continue to work and remove offline logs, all of the logs must be synchronized for all synchronization subscriptions).
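If you do split the models, the invocations would look roughly like this. The connection string and subscription names are made up, and the DownloadOnly (ds) extended option is the counterpart to UploadOnly (uo); check your version's documentation:

```
# Upload-only pass for the critical tables
dbmlsync -c "UID=dba;PWD=sql;DBF=remote.db" -n upload_sub -uo

# Download-only pass, run on its own schedule
dbmlsync -c "UID=dba;PWD=sql;DBF=remote.db" -n download_sub -e "ds=on"
```

Because dbmlsync access is serialized per database, the two passes must run one after the other, not concurrently.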
    Regards,
    Jeff Albion
    SAP Active Global Support

  • Installation - Best Practices

    Howdy everybody,
I want to know whether it is recommended to install Oracle Database (XE, Enterprise, Standard) using oracle-validated, a yum install (from the internet), or packages from the DVD media.
If DVD media, can you send me the commands necessary to install?
For example:
for Oracle XE 11gR2:
rpm -Uvh libaio.1234.rpm
rpm -Uvh unixodbc.1234.rpm
What is the full command to install?
And about Enterprise and Express: is it necessary to install all the packages from the DVD for Oracle Express, or only for Enterprise?
    Thanks everybody!
    Att,
    Lucas
    Edited by: 1005247 on 18/05/2013 07:08

    1005247 wrote:
all right... but there are many people who say it's not good to install from yum (internet), because if you install from the internet, many packages are not necessary for Oracle....
what do you think?
    What do I think?
    http://www.youtube.com/watch?v=bufTna0WArc
    The 'oracle-validated' package is actually just a specification of packages that are needed to support oracle. It will not download anything that is not needed to support oracle. If you would rather do rpm package installs from the distribution media, and thus enter into 'package dependency hell' -- well, knock yourself out.
And about Oracle Express, is it necessary to install all packages, similar to Enterprise?
What do the Installation Guides say?
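To make the yum route described above concrete: on Oracle Linux 5 the command below pulls in only the dependency packages and kernel/user settings Oracle needs (on Oracle Linux 6 and later, the equivalent package is oracle-rdbms-server-11gR2-preinstall):

```
# Run as root on Oracle Linux with the public yum repo configured
yum install -y oracle-validated
```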

  • Question About CRM Best Practices Configuration Guide...

In the CRM Connectivity (C71) Configuration Guide, Sections 3.1.2.2.2 and 3.1.2.2.3, it mentions two clients, Client 000 and the Application Client.  What are these two clients?  I assumed Client 000 was my CRM client, but that sounds the same as what the application client should be.
    http://help.sap.com/bp_crmv340/CRM_DE/BBLibrary/Documentation/C71_BB_ConfigGuide_EN_US.doc

    Keith,
    Client 000 is not the application client.
The client which is used in the middleware (e.g. CRM quality client - R/3 quality client, or CRM production client - R/3 production client) is the application client.
You have to do it once in client 000 and once in your own created client which is used in the middleware connectivity.
    regards,
    Bapujee

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Doing an NCS stop does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc. and that if I use 'session' my variable will live for as long as that session is alive. (did I get this right?)
You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it only needs to be request scoped (e.g. single-page form data), then keep it request scoped.
2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun Glassfish Server.
It will certainly consume resources. Just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
3.) Can you suggest best practice in memory management given the architecture I described above?
Just write code so that it doesn't unnecessarily eat memory; only allocate memory if your application needs to. You should let the hardware depend on the application requirements, not let the application depend on the hardware specs.
4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that, or will I have to purchase a powerful server?
Glassfish is just application server software; it is not server hardware. Your concerns are hardware related.
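For reference, the scope choice from point 1 is just the scope attribute on the tag (the bean class names here are hypothetical):

```
<%-- request scope: lives for one request (e.g. single-page form data) --%>
<jsp:useBean id="formData" class="com.example.FormBean" scope="request" />

<%-- session scope: lives as long as the session (e.g. the logged-in user) --%>
<jsp:useBean id="currentUser" class="com.example.UserBean" scope="session" />
```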

  • Best practice for storing/loading medium to large amounts of data

I just have a quick question regarding the best medium for storing a certain amount of data. Currently in my application I have a Dictionary<char,int> that I've created and populate with hard-coded static values.
There are about 30 items in this Dictionary, so this doesn't present much of a problem, even though it does make the code slightly harder to read, and I will be adding more data structures with a similar number of items in the future.
I'm not sure whether it's best practice to hard-code these values, so my question is: is there a better way to store this information and retrieve and load it at run-time?

You could use one of the following methods:
Use the application.config file. The upside is that it is easy to maintain. The downside is that a user could edit it manually, as it's just an XML file.
You could use a settings file. You can specify where the settings file is persisted, including under the user's profile or the application. You could serialize/deserialize your settings to a section in the settings. See the MSDN help section for details about settings.
Create a .txt, .json, or .xml file (depending on the format you will be deserializing your data from) in your project and have it copied to the output path with each build. The upside is that you could push out new versions of the file in the future without having to re-compile your application. The downside is that it could be altered if the user has O/S permissions to that directory.
If you really do not want anyone to access it, and are thinking of pushing out a new application version every time something changes, you could create a .txt, .json, or .xml file just like the previous step, but this time mark it as an embedded resource in your project (you can do this in the properties of the file in Visual Studio). It will essentially get compiled into your application. Content retrieval is outlined in this how-to from Microsoft, and then you just deserialize the retrieved content the same as in the previous step.
As for the format of your data: I recommend either XML or JSON, or a text file if it's just a flat list of items (i.e. a list of strings). Personally I find JSON much easier than XML to read and change, and there are plenty of supported serializers out there. XML is great too if you need to be strict about the schema.
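As a minimal sketch of the external-file option above (shown in Python for brevity rather than .NET, and with a made-up file name), the idea is to serialize the mapping once and deserialize it at startup instead of hard-coding it:

```python
import json
import os
import tempfile

# Hypothetical mapping; in a real app the file would ship alongside the
# executable (or be embedded as a resource) rather than be written here.
MAPPING = {"a": 1, "b": 2, "c": 3}

with tempfile.TemporaryDirectory() as workdir:
    path = os.path.join(workdir, "charmap.json")

    # One-time step: persist the mapping instead of hard-coding ~30 entries.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(MAPPING, f, indent=2)

    # At startup: deserialize the file back into a dictionary.
    with open(path, encoding="utf-8") as f:
        charmap = {key: int(value) for key, value in json.load(f).items()}

print(charmap)
```

The deserialization step is the only part the application needs at run-time; the dump step here just stands in for whatever build or deployment process produces the file.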
    Mark as answer or vote as helpful if you find it useful | Igor
