Re-Installing PC, how to cope with cloud installation

Hi, I need to re-install my Lenovo WS (Win 8.1). Back in the pre-cloud days I simply had to deactivate my Adobe software and reactivate it after the operating system installation. How does that work with my cloud apps? What is the process today?
Since this is a more or less common issue, I was looking for a standard support database entry, but I was not able to find anything helpful.
Thanks,
Sven

It's easier. You can sign out before wiping your PC, but you don't need to; with CC, you can even sign out of old 'dead' computers using a new computer.
Just do whatever you need to do, and when you're ready to install CC, sign in with your subscriber Adobe ID and start downloading a CC app; the desktop app will install along with it. https://creative.adobe.com/

Similar Messages

  • How to cope with a dead pixel

    Yes, I know, there's no fix for a dead pixel (is there?), and there are lots of maybe-fixes for a "stuck" pixel. Well, I have a dead pixel, super-dead, in a really terrible place (the lower right quadrant, just a tad above where, say, the iTunes ministore would be). I'm stuck with it. I can't do anything to get it fixed or replaced (can I?), so my question to the community is about dealing with it. How does one cope with something so aggravating? Are there some tweaks one can do to make, say, white web pages reverse their colors so the pixel is not as apparent? Are there other methods to get rid of it or make it less obvious? I'm going mad. Someone help.
    MacBook 2GHz   Mac OS X (10.4.7)   1GB RAM   120GB HDD

    Well, I do understand how crazy it can drive you when a pixel is dead. Everything was fine on my MB, then about a month in I noticed a pixel that was dead. I too looked at every topic I could find on the subject; nothing helped. So I took my MB into my local Apple Store, and they sent it off for repair. Of course Apple did not fix it, because "one pixel is not considered enough of a problem to repair". So the Apple Store actually ordered me a new screen and replaced it for free.
    I was very happy until I turned on the computer at the Apple Store, and once again it had a dead pixel, in a different spot. They said that there was nothing they could do about it. They would not replace it again because it would screw up their store budget. So I'm learning to deal with it. Believe me, it really *****; I'm a perfectionist and it really bugs me to see this flaw. Other than that my MB is great and I love it. Good luck!

  • Having trouble installing photoshop or lightroom through creative cloud installer

    My correct password is not being accepted by the Creative Cloud installer, which is not letting me download the Photoshop or Lightroom free trial. Please, can anyone help?

    That was meant to read "Creative Cloud installer", not "install".

  • How to cope with Out of Memory Errors

    Hi, I am distributing a desktop application to the general public, written in Java 1.4.2. How much memory is required is approximately proportional to how many files they load, so I've tried to pre-empt OutOfMemoryErrors by checking when memory usage reaches 95% and preventing the loading of additional files above this level with the following code:
    protected static void checkMemory()
            throws LowMemoryException
    {
        if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05)
        {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05)
            {
                MainWindow.logger.severe("Memory low:" + Runtime.getRuntime().freeMemory() / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }
    but this code is not robust; sometimes users report a LowMemoryException when they have loaded only a few files.
    I tried removing this code, but then users can get an OutOfMemoryError, which can cause problems with whatever code was running at the time and leave the application in an inconsistent state; if I just exit the application immediately, it would be very annoying for users who are in the middle of something.
    I also have adjusted the -Xms and -Xmx settings but cannot decide on a suitable default.
    What I would ideally like the application to do is to extend its heap space as required, up to the limits of the user's machine, and if it reaches those limits, handle the lack of memory in a reliable manner, allowing the user to continue using the application in a safe way.

    Unfortunately the metadata is stored and displayed within a JTable, so even if I had it in a database I think it would all have to be loaded into memory for display within the JTable in a timely fashion.
    Anyway, I think I've found the problem with the original code: it was reporting memory low when almost all of the allocated memory was being used, but it hadn't accounted for the fact that the maxMemory limit had not been reached, so more memory could still be allocated.
    I think the correct code is:
    protected static void checkMemory()
            throws LowMemoryException
    {
        if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95)
        {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95)
            {
                MainWindow.logger.severe("Memory low:" + (Runtime.getRuntime().maxMemory() - (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())) / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }
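    By way of illustration, here is a minimal sketch of how such a check might guard a file-loading loop. This is not the poster's actual code: LoadGuard, loadFile() and the dialog wording are hypothetical stand-ins for the application's own classes.

    import java.io.File;
    import javax.swing.JOptionPane;

    class LoadGuard {
        // Guard each load so the UI can degrade gracefully instead of hitting
        // an OutOfMemoryError in the middle of an operation.
        static void loadAll(File[] files) {
            for (File f : files) {
                try {
                    checkMemory();                 // the corrected check from above
                    loadFile(f);                   // application-specific (assumed)
                } catch (LowMemoryException e) {
                    JOptionPane.showMessageDialog(null,
                            "Memory is low; the remaining files were not loaded.");
                    break;                         // keep already-loaded data intact
                }
            }
        }

        static void checkMemory() throws LowMemoryException {
            Runtime rt = Runtime.getRuntime();
            if (rt.totalMemory() - rt.freeMemory() > rt.maxMemory() * 0.95) {
                throw new LowMemoryException("Running out of memory");
            }
        }

        static void loadFile(File f) { /* application-specific loading */ }
    }

    class LowMemoryException extends Exception {
        LowMemoryException(String message) { super(message); }
    }

    The point of the guard is that the check happens before each allocation-heavy step, so a failure leaves the already-loaded files usable rather than corrupting whatever was running when the heap ran out.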

  • How to cope with DTPs becoming inactive

    Hi,
    When we do a system copy from a production source system to a test system, we have to restore the connection.
    This is followed by a replication. All the DTPs are getting deactivated. How can we prevent this?
    Activating all the DTPs gets us in trouble with the process chains, as the DTPs get new technical names.
    Is activating in Development and transporting all DTPs the only solution?
    Any information on this will be appreciated.
    Udo

    Hi,
    Did you follow Note 886102 to perform this system copy?
    We have seen some issues with DTPs and the BDLS process.
    Please also check whether the following notes were applied in your system, and then rerun the BDLS process afterwards.
    1139924    BI7.0(SP18): System names converted incorrectly
    1142908    70SP18: BDLS does not convert pseudo D versions
    1148403    70SP18: BDLS after early replication
    1149141    70SP18: BDLS: Improvements log
    1169659    Correction: Process variants not entered
    Thanks,
    Walter Oliveira.

  • How to cope with XMLAgg bug 4277241?

    Hi,
    I have hit bug 4277241, which was referred to here as well (Re: Important--Bug(ORA-22813) fixes in Oracle Database 10g Release 10.2.0.4), for my 11.2.0.1 database.
    This is due to the result set being too big.
    Now I have read the Metalink note, and it says:
    xmlagg() with a GROUP BY can fail with ORA-22813 if the result is too large.
    This is normal and expected as there is a hard coded limit on the result
    size *BUT* this fix allows event 44410 to be set which uses a different method
    to evaluate the XMLAGG(). The event is disabled by default.
    NOTE: If the event is set to work around the 30K limit then LOB leakage
          can occur. This is expected and is due to the way the limit is
          avoided.
    Workaround
      Rewrite the query without using "group by", by using the select distinct method.
    Apparently using the event causes a memory leak, which is of course undesirable.
    So is the problem caused by using the GROUP BY in conjunction with XMLAgg, or just by using XMLAgg altogether?
    If the answer is the GROUP BY, then how does one rewrite such a query without resorting to subqueries?
    DISTINCT can only be used with the SELECT keyword, so the second XMLAgg used below cannot be preceded by a DISTINCT.
    Using subqueries for every nesting level makes the query rather large and complicated; that is the reason for asking.
    The query below can be rewritten using subqueries, and thus avoid the GROUP BY, but I am more interested in alternatives using DISTINCT or whatever.
    Of course the query below doesn't give the error, but it does show what I mean:
    select
      xmlagg(
        xmlelement("department",
          d.department_name,
          xmlagg(
            xmlelement("employee",
              xmlelement("first-name", e.first_name),
              xmlelement("last-name", e.last_name)
            )
            order by e.last_name
          )
        )
      ) as "xml"
    from employees e
         join departments d on (e.department_id = d.department_id)
         join locations l on (d.location_id = l.location_id)
    group by d.department_name;

    Hi,
    Could XQuery be an option for you?
    SELECT XMLQuery(
    '<root>{
      for $d in fn:collection("oradb:/HR/DEPARTMENTS")/ROW
      return element department {
        attribute name { $d/DEPARTMENT_NAME/text() }
      , for $e in fn:collection("oradb:/HR/EMPLOYEES")/ROW
        where $e/DEPARTMENT_ID = $d/DEPARTMENT_ID
        order by $e/LAST_NAME
        return element employee {
          element first-name { $e/FIRST_NAME/text() }
        , element last-name { $e/LAST_NAME/text() }
        }
      }
    }</root>'
    returning content
    )
    FROM dual;

  • How to cope with surrogate IDs after migrating to OWB 10gR2

    We are busy converting an OWB 9.2 environment to OWB 10gR2.
    I already had problems with the way dimensions and cubes are used in OWB 10gR2.
    Some of my dimensions will have a steep growth curve because of the level rows that are inserted into the new dimensions. These new rows exist to enable you to link cubes to dimensions at all levels and not only to the lowest level.
    But when building the new mappings, surrogate keys will also be used to make the join between dimension and cube, and this again causes problems for existing reports.
    In OWB 9.2 we prepared the cubes and dimensions in the staging area and then loaded them into the data warehouse.
    We can still keep the existing columns in the newly defined dimensions, but we have to add the surrogate keys. When loading data with the new dimensions and cubes, the cubes will refer to the dimensions with the new surrogate IDs and not with the old IDs, which are now the business keys. So we lose the relation between cube and dimension based on the business key, and we have it instead based on the surrogate key.
    But the reports we have (made with Information Builders) still work with the relationships based on the business IDs, and since the foreign keys in the cube refer to the surrogate IDs instead of the business IDs, the reports will return without results.
    I know I can resolve it by redesigning the reports but this is a lot of work and not the way we want to go.
    My alternative solutions are:
    1. Keep the new dimensions, but rename them; make views with the old dimension names, and in the views switch the old business ID and the surrogate. The reports will then work with the views, and the old joins will keep on working.
    2. Do not use dimensions and cubes; migrate the mappings to use tables (as a matter of fact, this is what you get after importing a 9.2 MDL into 10.2). With no change in table names, the reports keep on working.
    Alternative solution 1 is rather dangerous because the column names are switched; this makes the model more difficult to understand.
    Alternative solution 2 can become a problem if we want to use slowly changing dimensions: we would have to build them as we would have done in OWB 9.2, which leads to complex mappings.
    Are there any other alternatives? Can somebody advise on the best answer to these challenges?

    Well, couple of ideas here.
    First off, do you HAVE to use levels in the dimension? For some of our dimensions, we just define a single "default" level, and tie all attributes to it. For example, with a time dimension, we'd define a default level, and then tie month_id, month_name, quarter_id, quarter_name, year_id, and year_name to that default level. This works out well - you basically get a "flat" dimension with all attributes on the row, thus preventing the generation of keys for the other levels. You'll still get a DEFAULT_LEVEL_KEY, but it will always equal the dimension key and you can just ignore the column.
    FYI - I've just started playing with OWB 11gR1, and I discovered a few minutes ago that when setting up a dimension level, you don't appear to have to have the surrogate_id attribute turned on. Not sure if this works yet, will let you know.
    Last but not least, I read your problem re: the business keys vs. surrogate keys as "we used business keys instead of surrogate keys as the dimension PKs and fact table FKs on our initial DW, and now we have to migrate that"...where OWB of course wants to use surrogates. Couple of thoughts. First off, I don't think you're going to get any auto-magic way to do this - using business keys instead of surrogate keys is definitely non-standard.
    The one suggestion I would have is to do this: keep the original business IDs on the fact table as their own columns. Set them up as "measures" with a data type of varchar2 or whatever they already are. OWB will still of course create the "proper" FK columns based on the surrogate keys, but your users will still be able to join based on the "old" business ID joins. The only real price you'd have to pay for this option is a) it takes more space per row on the fact table, and b) you'll probably have to do some indexing. OWB will build the proper bitmap indexes on the surrogate keys, but you'll probably want to add bitmap indexes on the business ID columns.
    Hope this helps,
    Scott

  • How to cope with "undefined is not an object" ?

    I'm tiptoeing towards a solution, but stepped on a thumbtack:
    The following snippet creates an error in the function GetTplParams
    var template = "DDD-BibFM-tpl.fm";  // located in script-dir
    var outFile  = "DDD-BibFM.fm";      // located in the source dir

    oDoc = OpenTemplate ();
    alert ("Is template open now?");

    function OpenTemplate () {
      var tplDir, tplFile, openParams, openReturnParams, oFile;
      tplFile = "E:\\_DDDprojects\\FM+EN-escript\\escript\\DDD-BibFM-tpl.fm";
      // SimpleOpen does not take the tplFile, but opens the dir of the active document
      // oFile = SimpleOpen (tplFile, true);
      openParams = GetTplParams ();
      openReturnParams = new PropVals();
      oFile = Open (tplFile, openParams, openReturnParams);
      return oFile;
    }

    function GetTplParams() {  // =>>> "undefined is not an object" on line 22
      var params, i;
      // Change the params
      i = GetPropIndex(params, Constants.FS_RefFileNotFound);
      params[i].propVal.ival = Constants.FV_AllowAllRefFilesUnFindable;
      i = GetPropIndex(params, Constants.FS_FileIsOldVersion);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_FontNotFoundInDoc);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_LockCantBeReset);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_FileIsInUse);
      params[i].propVal.ival = Constants.FV_OpenViewOnly;
      i = GetPropIndex(params, Constants.FS_AlertUserAboutFailure);
      params[i].propVal.ival = Constants.FV_DoCancel;
      /*
      i = GetPropIndex(params, Constants.FS_MakeVisible);
      params[i].propVal.ival = false;
      */
      return (params);
    }
    (inserting JS code really has its quirks in this editor).

    Thanks Klaus - now it works!
    To Rick.
    The presented piece of code is part of 're-furbishing' the FrameMaker-to-EndNote connection.
    In a first step I collect temporary citations (such as [Daube, 1969, #123]) from text, footnotes and tables into an array. This part already works fine (I had to postpone further development for a year...).
    Then I write this data to a new document (which is created from the template; this is where the mentioned piece of code comes into play).
    This file is then saved as RTF to be processed by the bibliographic application EndNote (or Citavi in another case) to resolve the temporary citations into formatted citations and the bibliography.
    After that, the modified RTF is read into FM and the temporary citations in the FM document/book are replaced by the formatted citations.
    The user then copies the bibliography (as text only) into the appropriate chapter/section and formats it to his liking.

  • How to cope with the case that a thread crashes?

    If a thread crashes, how do I notify other threads? Is there a standard method to follow?

    Sounds like a job for the Observer pattern. You'll have to handle the crash on your own, though: catch any exceptions and notify the observers.
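    Here is a minimal sketch of that suggestion; CrashListener and Worker are illustrative names, not a standard API.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Illustrative Observer-pattern sketch: the thread wraps its work in a
    // try/catch and notifies registered listeners when it dies.
    interface CrashListener {
        void threadCrashed(Thread t, Throwable cause);
    }

    class Worker implements Runnable {
        private final List<CrashListener> listeners =
                new CopyOnWriteArrayList<CrashListener>();

        void addListener(CrashListener l) { listeners.add(l); }

        public void run() {
            try {
                doWork();                       // the thread's real job
            } catch (Throwable cause) {         // "crash": any uncaught problem
                for (CrashListener l : listeners) {
                    l.threadCrashed(Thread.currentThread(), cause);
                }
            }
        }

        private void doWork() { /* application-specific */ }
    }

    On Java 5 and later, Thread.setUncaughtExceptionHandler offers a built-in hook for the same purpose.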

  • How to cope with differences in menu between new and old ATV?

    I am using the same iTunes library but two ATVs in different rooms.
    Why does the newer ATV not group movies or TV shows by "show"?
    And why do TV shows appear in alphabetical order on the old-gen one but in random order on the new ATV,
    all from the same iTunes library?
    On the old (hard-disk type) we can browse movies and TV shows in alphabetical order.
    On the new ATV, depending on whether it is a movie or a TV show, it is either in reverse alphabetical order or totally random.
    Seasons are grouped on the old ATV but show as separate shows on the new ATV.
    Any tips on extra tags in iTunes content that might help?
    I set episode ID, season and show numbers,
    use "show" to group similar movies or to indicate the TV show,
    and also use genre.

    I'd update iTunes first. Apple just released 10.6.1, which specifically addresses the TV show sorting issue. Regardless, iTunes TV show sorting remains annoying. It's my understanding that the shows will sort properly once in a playlist. However, I'm holding firm at 4.4.4 and waiting for the 5.0 update to address the remaining issues with the latest ATV software.
    from the Apple support page:
    About iTunes 10.6.1
    iTunes 10.6.1 provides a number of improvements, including:
    • Fixes several issues that may cause iTunes to unexpectedly quit while playing videos, changing artwork size in Grid view, and syncing photos to devices.
    • Addresses an issue where some iTunes interface elements are incorrectly described by VoiceOver and WindowEyes.
    • Fixes a problem where iTunes may become unresponsive while syncing iPod nano or iPod shuffle.
    • Resolves an ordering problem while browsing TV episodes in your iTunes library on Apple TV.
    For information on the security content of this update, please visit: support.apple.com/kb/HT1222

  • How to cope with system noise of DAQ?

    I found that my system noise is too much for my data acquisition.
    The frequency range of the noise is very wide, and its magnitude is comparable to my useful data.
    What can I do to acquire my data?

    What is the noise amplitude, the sampling rate, and the environment in terms of electrical devices and EMI/RFI, and what means have you implemented to reduce noise (shielding, grounding, cable lengths and routing, etc.)? Trying to get some clues to assist. You can always take multiple acquisitions and average the data to effectively remove some effects of noise; see the sketch below.
    Do you have an oscilloscope to verify presence and nature of noise?
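    As a rough illustration of the averaging idea (plain Java with an illustrative array shape; real samples would come from the DAQ vendor's API): averaging N repeated acquisitions of the same signal reduces uncorrelated noise by roughly the square root of N.

    class NoiseAveraging {
        // Ensemble averaging: given n repeated acquisitions of the same signal,
        // uncorrelated noise shrinks by roughly sqrt(n).
        static double[] averageAcquisitions(double[][] acquisitions) {
            int n = acquisitions.length;        // number of repeated acquisitions
            int len = acquisitions[0].length;   // samples per acquisition
            double[] avg = new double[len];
            for (double[] acq : acquisitions) {
                for (int i = 0; i < len; i++) {
                    avg[i] += acq[i] / n;       // accumulate the per-sample mean
                }
            }
            return avg;
        }
    }

    This only helps if the signal is repeatable and the noise is uncorrelated between acquisitions; it will not remove interference that is synchronized with the measurement.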
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • If I install a new version of Firefox with the standalone installer instead of using the built-in updater, should I uninstall the older version first? Would I lose my data?

    As part of an end-of-year maintenance program, I clean my computer and update software. I'm currently on Firefox 4.0.1 and want to upgrade to 8. However, the built-in updater downloads horribly slowly and demands that the browser be active. Most of the time it reports failure and asks me to download the installer myself.
    I'm okay with this. Firefox has always been my favorite. But before I do the 8.0.1 installation, should I uninstall the old one? Or will it be overwritten automatically? Would I lose my bookmarks and extensions this way?
    Win7 64-bit
    P.S.: if someone could tell me how to back up 'Scrapbook' data I'll be so thankful.

    You can install the latest version on top of the older version; if you do, your bookmarks will be kept.

  • How to change Creative Cloud installation target drive?

    I unwittingly started installing my programs to my C drive when I should have been installing them to the E drive.  It did 8 programs so far and filled up the C drive. How can I tell it to move them to the E drive, and then continue installing the rest of my programs to the E drive?  Thanks in advance.

    Creative Cloud chat support (all Creative Cloud customer service issues)
    http://helpx.adobe.com/x-productkb/global/service-ccm.html

  • Creative Cloud installer sticks with Lightroom 4.1 instead of showing and installing 4.3

    What's wrong with the installer? (see topic)
    I had 4.1 installed as a separate installation before I switched to the Creative Cloud.
    Apparently there is at least one installation bug: I removed Lightroom completely by using uninstall, then installed it via the Creative Cloud installer. Lightroom gets installed to C:\Program Files\, where it was installed before, completely ignoring the install path given in the Creative Cloud installer (which is C:\Adobe for me). This even keeps the installer from activating the "Launch app" link, as the software is not installed where it expects it.
    So that's two things to fix: Lightroom 4.3 not being available in the Creative Cloud, and the install path being ignored by the Lightroom installation. :-(

    Thank you for the information about the custom installation path.  You should be able to install Lightroom 4.1 to the default location and then install Lightroom 4.3.  Lightroom 4.3 will then utilize the licensing information from Lightroom 4.1.
    We are looking at providing an updated version of Lightroom within the Adobe Application Manager in the future.

  • How do you cope with releases of your custom Portal Content (PCD Content)?

    Dear SDN Community,
    I'm very interested in how you cope with release management in the Portal (PCD).
    What I'm focusing on is the release of a "project" through your landscape (Dev, Test, QA, Prod).
    When starting a project, you begin with a new folder, let's say "My Custom App", that contains all your content.
    Once done, you transport it to Test, QA and eventually Prod. Now the content is in use and roles are assigned.
    Requests for changes arrive from the customer, and eventually a new release needs to be implemented.
    Of course you could then change/update the content in the initial folder and transport that to Test and QA again, but you would not be able to fix incidents while creating the new release, because objects may be touched.
    I was thinking about a scenario where you create subfolders in your application folder, see example below:
    \ "My Custom App"
    |--- Release 1.0
    |--- Release 1.1
    |--- Release 2.0
    |--- Current
    You only do this on your Dev system. Every time a release is OK, you put it in the "Current" folder and transport this to Prod.
    Using this, you won't have to change the role assignments in your different landscapes, as they always point to the "Current" folder. In addition, this scenario enables you to provide pilot rollouts to smaller groups in parallel with the major release.
    My question to you is to shoot at it and give your opinion/feedback; where possible, provide links to SAP documents or best practices that describe this subject.
    Thanks in advance,
    Benjamin Houttuin

    Hi Benjamin,
    It is an interesting approach you describe and I see no reason it should not work.
    The only thing which can create some problems is that the object ID of roles must be unique. For example com.mycom.roles.roleA cannot exist in two separate folders in the PCD. For all other PCD objects it is the PCD ID which must be unique and this includes the folder prefix.
    In my experience, the most natural solution for this in large portal environments is to have both a version line and a production line.
    The production line will be your standard sandbox, development, test/QA and production, and the version line will consist of development and test/QA systems.
    Long-running projects for enhancements to existing functionality will be done in the version line, before being transported to the production line when ready for go-live. This makes even more sense when you have version lines for the backend systems your portal is using, which are modified at the same time.
    Whilst the project is running, the production line can be used for changes to the current version, but it is important to note that all these changes have to be merged into the version line in order to make sure they are not overwritten.
    Of course, a version line brings extra cost related to synchronization and the like, so it does not always make sense.
    Amit's two documents on large portal installations might be somewhat relevant:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/bec9711e-0701-0010-e4ac-84f50543bfa9
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/25cab009-0801-0010-1380-edaa1d5e7f85
    Regards
    Dagfinn
