Any point to Up-to-Date if I own multiple Macs?

I bought a new MacBook Pro which qualifies for a free upgrade to Mountain Lion. Will I still have to buy Mountain Lion to install on my other Mac and if so is there any point to using the up-to-date offer?

The Apple Support Communities are an international user-to-user technical support forum. As a man from Mexico, my first language is Spanish. I do not speak English; however, I do write in English with the aid of the Mac OS X spelling and grammar checks. I also live in a culture perhaps very different from your own. When offering advice in the ASC, my comments are not meant to be anything more than helpful and certainly not to be taken as insults.
You should be able to install it on all Macs that you own or control. After you have downloaded the Lion installer you have a number of options. If you search the internet you will find instructions that help you create a portable Lion installation drive using a USB thumb drive. Or you can just copy the installer across your LAN to any other Macs that are capable of running Lion. Finally, you could sign into the Mac App Store on the other Macs and download the Lion installer from the Purchases pane.
The thing to keep in mind is that if you want to copy the installer, do so before using it to install Lion. The installer deletes itself after it completes the installation of Lion, and you would need to download it again to install it on another Mac.

Similar Messages

  • Trim, SSD, and Encryption--Any point to urandom first?

    If I am setting up an encrypted system, on a SSD, with Trim enabled, is there any point to writing random data to the drive first (with urandom) as one would normally do?
    My understanding is that Trim will write zeroes to unused portions of the drive anyway, to keep track of what space is available for optimizing writes to the drive (for wear leveling, etc.). So, in the long run, the drive will look like random information where data is present and zeros where there is no data. Why not, then, just zero out the drive to begin with, rather than using urandom?
    In fact, would zeroing the drive at the start help Trim do what it's supposed to do from the beginning?
    Thanks for any thoughts.

    Yes, the point of overwriting the entire disk with random data, before setting up an encrypted system, is so that it is not possible to see where there is data and where there is just empty space on an encrypted drive. This is recommended by every encryption setup guide I have ever seen, including the Arch Wiki.
    I'm not sure where I created the impression that I was asking a question about identifying bad blocks. I didn't mention them in my post and I don't know what it has to do with setting up an encrypted system (on an SSD with the Trim feature), which is the topic of my question.
    The issue is, although it is normal to first overwrite the entire disk with random data, for an encrypted system, if you enable the Trim feature on an SSD it will write zeros to free space (as files are deleted, I believe). This helps the SSD do wear leveling and (I think) operate faster, issues exclusive to SSDs as opposed to regular magnetic hard drives. So once you enable the Trim feature, you will, at least (I think) in the long run, defeat the benefit of having written random data to the entire drive when the system was set up.
    A lengthy explanation can be found here, for anyone interested: www.asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html
    My question is, will writing random data not only defeat the security/encryption benefits, but also make it hard for the SSD to operate the Trim function effectively? Will the SSD see all the random data as used space and not perform effective wear leveling? In which case, is it better to just zero out the drive first and then create the encrypted system, so that the SSD can use Trim properly?
    Thanks to anyone who knows how Trim works with SSDs and how random data could affect it.
    Last edited by cb474 (2012-04-12 00:44:29)
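    The pre-encryption overwrite discussed above can be sketched in Python. This is illustrative only: the temporary file stands in for a block device such as /dev/sdX (a hypothetical path), and on a real drive the usual tools would be dd or shred rather than a script.

```python
import os
import tempfile

def fill_with_random(path, total_bytes, block_size=1024 * 1024):
    """Overwrite the first total_bytes of path with data from os.urandom."""
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            chunk = min(block_size, total_bytes - written)
            f.write(os.urandom(chunk))
            written += chunk
    return written

# Demo on a temporary file; pointing this at a real block device
# (e.g. /dev/sdX) is destructive, so triple-check the path first.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
n = fill_with_random(tmp.name, 3 * 1024 * 1024)
print(n)  # 3145728
os.unlink(tmp.name)
```

    Writing block by block keeps memory use constant regardless of drive size, which is why dd-style tools work the same way.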

  • Is there any way to see all data on all entities a user owns?

    Hi all,
    Is there any way to see all data a user owns in CRM2011?
    Maybe a custom report or view?

    I could only think of one solution to this: a little bit of code to get a list of records a user owns. The List contains Entity objects with the Id & LogicalName. You could do this with a report too, but you'd have to query every entity...
    public List<Entity> GetOwningRecords(IOrganizationService service, Guid systemUserId)
    {
        List<Entity> owningRecords = new List<Entity>();
        // Retrieve the metadata of every entity in the organization
        RetrieveAllEntitiesRequest retrieveAllEntitiesRequest = new RetrieveAllEntitiesRequest();
        retrieveAllEntitiesRequest.EntityFilters = EntityFilters.Entity;
        retrieveAllEntitiesRequest.RetrieveAsIfPublished = true;
        RetrieveAllEntitiesResponse retrieveAllEntitiesResponse =
            (RetrieveAllEntitiesResponse)service.Execute(retrieveAllEntitiesRequest);
        foreach (EntityMetadata entityMetadata in retrieveAllEntitiesResponse.EntityMetadata)
        {
            // Only user-owned entities have an owner to match against
            if (entityMetadata.OwnershipType != OwnershipTypes.UserOwned)
                continue;
            try
            {
                QueryExpression query = new QueryExpression(entityMetadata.LogicalName)
                {
                    ColumnSet = new ColumnSet("ownerid"),
                    Criteria =
                    {
                        Conditions =
                        {
                            new ConditionExpression("ownerid", ConditionOperator.Equal, systemUserId)
                        }
                    }
                };
                EntityCollection result = service.RetrieveMultiple(query);
                foreach (Entity entity in result.Entities)
                    owningRecords.Add(entity);
            }
            catch (FaultException<OrganizationServiceFault>)
            {
                Console.WriteLine(String.Format("Couldn't retrieve the records for {0}", entityMetadata.LogicalName));
            }
            catch (Exception)
            {
                Console.WriteLine(String.Format("Unexpected error for {0}", entityMetadata.LogicalName));
            }
        }
        return owningRecords;
    }
    Hope this helps

  • A "designer" file being worked on a machine, is checked-in regularly, if at any point of time the latest version of that file is taken on the same machine, all the data gets deleted

    A “designer” file being worked on a machine, is checked-in regularly, if at any point of time the latest version of that file is taken on the same machine, all the data gets deleted

    Hi,
    Could you provide us with more information to help you?
    If you have resolved the issue, it would be better if you could post the solution here, which will help others.
    Thanks,

  • Query to return 6 rolling pay-check dates at any point in time

    Hi,
    we have a date dimension dim_date which holds calendar dates. I need to write a query which will always return 6 rolling pay-check dates at any point in time. There is a 14-day gap between pay-check dates, and each falls on a Friday. Please take the example below as a reference and help me write the query as per my requirement. Thanks in advance.
    pay check date example:
    3/23/2012
    3/09/2012
    2/24/2012
    2/10/2012
    1/27/2012
    Thanks
    Jay.

    The to_char function with 'D' mask returns the day of the week:
    Sunday is 1, we need to sum 6-mod(1,7)=5 for next friday
    Monday is 2, we need to sum 6-mod(2,7)=4 for next friday
    Tuesday is 3, we need to sum 6-mod(3,7)=3 for next friday
    Wednesday is 4, we need to sum 6-mod(4,7)=2 for next friday
    Thursday is 5, we need to sum 6-mod(5,7)=1 for next friday
    Friday is 6, we need to sum 6-mod(6,7)=0 for next friday (well, current or next friday)
    Saturday is 7, we need to sum 6-mod(7,7)=6 for next friday
    So, we always need to sum 6-mod(to_char(sysdate, 'D'), 7) to find the current or next friday:
    SQL> select to_char(sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') next_friday from dual;
    NEXT_FRIDAY
    2012-02-17
    And, adding another 14 days for each next rolling pay check, we have:
    SQL> select to_char(sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual
         union all
         select to_char(14+sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual
         union all
         select to_char(14+14+sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual
         union all
         select to_char(14+14+14+sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual
         union all
         select to_char(14+14+14+14+sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual
         union all
         select to_char(14+14+14+14+14+sysdate+6-mod(to_char(sysdate, 'D'), 7), 'yyyy-mm-dd') rolling_dates from dual;
    ROLLING_DA
    2012-02-17
    2012-03-02
    2012-03-16
    2012-03-30
    2012-04-13
    2012-04-27
    6 rows selected.
    Please check the results in your instance, because weekday numbers are NLS-dependent.
    I hope this helps
    Best Regards
    Alfonso Vicente
    [http://www.logos.com.uy/el_blog_de_alfonso]
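    The same next-Friday arithmetic is easy to check outside the database. A small Python sketch follows; the 2012-02-15 run date is an assumption chosen so the result reproduces the session output above.

```python
from datetime import date, timedelta

def next_friday(d):
    """Friday of the current week or the next one (a Friday maps to itself)."""
    return d + timedelta(days=(4 - d.weekday()) % 7)  # Monday=0 ... Friday=4

def rolling_paychecks(d, n=6, gap_days=14):
    """n pay-check dates, gap_days apart, starting from the next Friday."""
    first = next_friday(d)
    return [first + timedelta(days=gap_days * i) for i in range(n)]

# Assuming the SQL*Plus session above ran on 2012-02-15 (a Wednesday):
print([str(d) for d in rolling_paychecks(date(2012, 2, 15))])
# ['2012-02-17', '2012-03-02', '2012-03-16', '2012-03-30', '2012-04-13', '2012-04-27']
```

    Using weekday() instead of to_char(sysdate, 'D') sidesteps the NLS dependence the answer warns about, since Python's day numbering is fixed.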

  • Balance Sheet translation at spot rate at ANY point in time

    My client produces its financial accounts in AUD.  They will have open items in AR, AP and Bank in foreign currencies.  WITHOUT running periodic valuation they wish to produce a Balance Sheet at any point in time during the month applying the applicable daily spot rate to valuate the Foreign Currency open items (for reporting purposes only).  I have run S_ALR_87012284 and maintained the special evaluations tab for Display Currency (AUD), Key date for translation (current date) and exchange rate type (spot rate type).  However, this does not appear to be revaluating the open items in the subledger accounts to produce the balance sheet at the current spot rate.  Appreciate input / alternate approach.  Cheers, Dean.

    Hi Chirag
    As per Oracle, the following rules have to be followed for translation.
    1. For Balance Sheet accounts (Assets & Liabilities), GL by default uses the YTD rule.
    2. For P&L accounts you can choose between the YTD and PTD rules. So in your case you can use the default PTD rule. (Profile option 'GL Translation: Revenue/Expense Translation Rule')
    YTD Rule = (Translated Period Amount = Period-End Rate x YTD Ledger Currency Balance - Beginning Translated Balance).
    PTD Rule = (Translated Period Amount = Period Average Rate x PTD Ledger Currency Balance)
    Hope this helps.
    Regards,
    Gautam
    Edited by: Gahlout on Sep 25, 2012 11:04 PM
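    The two rules are plain arithmetic; here is a minimal Python sketch. All figures are made up for illustration and do not come from the thread.

```python
def ytd_translated(period_end_rate, ytd_ledger_balance, beginning_translated_balance):
    """YTD rule: translated period amount for a balance-sheet account."""
    return period_end_rate * ytd_ledger_balance - beginning_translated_balance

def ptd_translated(period_average_rate, ptd_ledger_balance):
    """PTD rule: translated period amount for a P&L account."""
    return period_average_rate * ptd_ledger_balance

# Illustrative numbers only (e.g. AUD ledger translated to a reporting currency):
print(ytd_translated(0.5, 10_000.0, 4_000.0))  # 1000.0
print(ptd_translated(0.75, 2_000.0))           # 1500.0
```

    The YTD rule subtracts the beginning translated balance so that only the movement of the period is translated at the period-end rate.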

  • Migration status at any point in time

    Hello to All,
    I am running migration scripts (which are actually procedures) to pick the data from the source table to the target table.
    My requirement is to find out the migration status at any point in time. Is there any way to do it?
    Please help.
    Best regards,
    Pavan

    Hi Pavan,
    Does your source table have columns to store the status and error message?
    In general, stage tables should have status and error message columns which are updated by the migration script with the status of that particular record.
    You can check these columns to get the status.
    For example:
    select status, error_message, count(*)
    from your_table
    group by status, error_message;
    The above script will give the number of records successfully processed or errored during migration.
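    The grouped count can be tried on any SQL engine; here is a minimal sqlite3 sketch. The stage_orders table, its columns beyond status/error_message, and the sample rows are all illustrative, not from the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage_orders (id INTEGER, status TEXT, error_message TEXT)")
conn.executemany(
    "INSERT INTO stage_orders VALUES (?, ?, ?)",
    [(1, "PROCESSED", None),
     (2, "PROCESSED", None),
     (3, "ERROR", "Missing customer reference"),
     (4, "NEW", None)],
)
# Same shape as the query above: one row per (status, error_message) pair
rows = conn.execute(
    "SELECT status, error_message, COUNT(*) "
    "FROM stage_orders GROUP BY status, error_message ORDER BY status"
).fetchall()
for row in rows:
    print(row)
# ('ERROR', 'Missing customer reference', 1)
# ('NEW', None, 1)
# ('PROCESSED', None, 2)
```

    Re-running the query while the procedures are executing gives a live progress snapshot, which is the point of keeping status columns on the stage table.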
    Regards
    Imran

  • Shipping point and delivery creation date

    Hi :
    I'm selecting sales orders, items and schedule lines
    from a custom table based on plant, shipping point and delivery creation date.
    Is there any relation between schedule lines and the delivery creation date?
    For the delivery creation date, d_date = SY-DATUM, and I should select the material availability date (MBDAT) or the transportation planning date (TDDAT), whichever comes earlier.
    vbep-edatu = itab-d_date.
    select a1~belnr a1~posnr vbep~etenr into corresponding fields of table itab
      from a1 inner join vbap
        on a1~belnr = vbap~vbeln
      inner join vbep
        on a1~posnr = vbep~posnr
      where a1~werks = p_werks
        AND vbap~vstel = p_vstel
        AND vbep~edatu = p_edatu
        AND ( vbep~mbdat <= p_edatu OR
              vbep~tddat <= p_edatu ).
    can anyone help me with this select statement.
    Thanks.
    Raghu

    Hi,
    Try this:
    select a1~belnr a1~posnr vbep~etenr into corresponding fields of table itab
      from a1 inner join vbap
        on a1~belnr = vbap~vbeln
      inner join vbep
        on vbap~vbeln = vbep~vbeln and vbap~posnr = vbep~posnr
      where a1~werks = p_werks
        AND vbap~vstel = p_vstel
        AND vbep~edatu = p_edatu
        AND ( vbep~mbdat <= p_edatu OR
              vbep~tddat <= p_edatu ).
    regards,
    Anji

  • Is there any point in a G4 render farm?

    Hi all.
    I've been getting quite interested in the idea of setting up a very small render farm lately, as I have the option of getting some old G4s for free.
    I currently have a 2.0GHz DP G5 as my main Mac running FCP2 and Shake, and was already grabbing a G4 466 Digital Audio to set up as a file/print and backup server running Tiger Server, a PostScript RIP and Retrospect. I have the possibility of getting at least one more G4 466 DA and maybe a couple of G4 400 AGPs with Gigabit Ethernet cards. As the main use would be farming out DV-to-MPEG-2 compression via Compressor and Shake renders (though not large Shake stuff to begin with, as I'm very new to this), is there any point in looking into this, or am I going to need lots of G4s to make a real improvement in render times?
    Also, is RAM a major consideration? For example, if I got 4 G4s with say 512MB of RAM each, would it be better running all four, or using three but taking the fourth's RAM to give more RAM to the three-machine setup?
    Any advice would be appreciated
    Cheers
    Steve

    I vote no. The G4 isn't the problem so much as 400MHz is. The RAM shouldn't be a problem.
    But you could test if you set up Compressor on one G4. Compress a job. Compress the same job with the same settings on the G5.
    If the G5 is 4 times (or even 3, since time is spent sending the render data over the network between the G4s' slow system buses) as fast as the single G4, a render farm will not help you.
    Using the G4 and G5 together in the same cluster does not work because Compressor looks at all processors and divides the job into twice as many segments. In this case, the G5 finishes its two segments years before the G4s and then sits around waiting. You could try splitting the G5 into 4 instances, but then you have 8 slow processors attacking your job. Not very efficient.
    Good luck though. You should test the one G4 just so you know. Report back if you do.

  • How do I verify that I didn't lose any points during acquisition?

    Hello,
    I need to make a continuous data acquisition using an NI PXIe-5162 card, which is why I wonder if there is a property node in LabVIEW, or a program, that shows whether I lost any points during the acquisition.
    I really need the answer! Thank you.

    Hajar8,
    When you start an acquisition, all the samples are stored into onboard memory in a circular buffer.  You then use "niScope Fetch" to request the data from memory, and the samples are sent via DMA to the host and to your application.  Since the buffer is a circular buffer, if you don't fetch the records fast enough (or the sample rate is too fast), then it's possible for the acquisition to fill up the entire onboard memory and then begin overwriting the old samples.  No error is thrown when this occurs when using the PXIe-5162.  Instead, when you attempt to fetch data that has been overwritten, the "niScope Fetch" VI will return the following error:
    "Error -107411863 occurred at niScope Fetch....
    Possible reason(s):
    The requested data has been overwritten in memory so it is no longer available for fetching."
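    The overwrite behaviour described above can be modelled with a toy ring buffer. This is illustrative Python only, not the niScope API; the class, its method names, and the sample values are all invented for the sketch.

```python
class RingBuffer:
    """Toy model of an acquisition card's circular onboard memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.total_written = 0  # samples acquired since the start

    def acquire(self, samples):
        """Write samples, wrapping around and overwriting the oldest ones."""
        for s in samples:
            self.data[self.total_written % self.capacity] = s
            self.total_written += 1

    def fetch(self, index):
        """Return sample number index; fail if it has been overwritten."""
        if index < self.total_written - self.capacity:
            raise RuntimeError(
                "The requested data has been overwritten in memory "
                "so it is no longer available for fetching.")
        return self.data[index % self.capacity]

buf = RingBuffer(capacity=4)
buf.acquire([10, 11, 12, 13, 14, 15])  # 6 samples into a 4-sample buffer
print(buf.fetch(5))   # 15 -- the newest sample is still in memory
try:
    buf.fetch(0)      # the oldest sample was overwritten by sample 4
except RuntimeError as e:
    print("fetch failed:", e)
```

    Fetching fast enough (or acquiring slowly enough) keeps every requested index within the last capacity samples, which is exactly the condition the real card needs to avoid this error.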
    I hope this helps.
    -Nathan
    Product Support Engineer
    National Instruments

  • Any ideas on a release date?

    Hi all, I would like to use this but I'm not going to hose my computer with a beta. Any ideas on a release date for this? Has Apple said anything at all?

    Yeah, but it would be about time for a 3.0.3 release.
      Windows XP Pro   Safari 3.0.2

  • One of the folders on my external hard drive has transformed into a unix executable file and I can no longer access my files. Is there any way to save the data?

    One of the folders on my external hard drive has transformed into a unix executable file and I can no longer access my files. Is there any way to save the data?

    Wow, I have seen files do that, but not a whole folder as I recall!
    Could be many things, we should start with this...
    "Try Disk Utility
    1. Insert the Mac OS X Install disc, then restart the computer while holding the C key.
    2. When your computer finishes starting up from the disc, choose Disk Utility from the Installer menu. (In Mac OS X 10.4 or later, you must select your language first.)
    Important: Do not click Continue in the first screen of the Installer. If you do, you must restart from the disc again to access Disk Utility.
    3. Click the First Aid tab.
    4. Select your Mac OS X volume.
    5. Click Repair. Disk Utility checks and repairs the disk."
    http://docs.info.apple.com/article.html?artnum=106214
    Then try a Safe Boot, (holding Shift key down at bootup), run Disk Utility in Applications>Utilities, then highlight your drive, click on Repair Permissions, reboot when it completes.
    (Safe Boot may stay on the gray startup screen for a long time; let it go, it's trying to repair the hard drive.)

  • If the metadata LUN is totally destroyed, is there any way to rebuild the data from the video LUN?

    If the metadata LUN is totally destroyed, is there any way to rebuild the data from the video LUN?

    What happened to your metadata? Are you sure the metadata is gone?
    In theory, file carving could be attempted on the data LUNs, but you would need a tool that understands how Xsan lays out data across LUNs in a stripe group and hope your fragmentation isn't too bad. I doubt such a tool exists, but if one does it almost certainly is not publicly available. You should probably contact a data recovery service with experience recovering data from Xsan. Do that before you do anything else if this data is really important.

  • Is there any way to retrieve the data from my USB key

    Hello,
    I have a USB key I have been using on a Win 7 desktop computer and a MacBook Air to put data into a spreadsheet.
    I interchange between the computer and my MBA to put in the data without any problem.
    Today I am not able to open the USB memory key in either the computer or the MacBook Air.
    Does that mean I have a corrupted USB memory key and the data is lost forever?
    I have just put the USB memory key in my computer and it says File System CDFS.
    I put it in the MacBook Air and it would not open either!
    Is there any way to access my data? I have no backup...

    If the USB drive won't mount on either computer, it's probable that the drive has failed, in which case it's unlikely that you can recover any data from it. You can do a web search for something like "flash drive file recovery" and you'll find a number of utilities that purport to do file recovery, but all the utilities I'm familiar with require that the drive at least be mountable. There may be services that could attempt recovery, but they tend to be expensive.
    The moral of the story is: never have only a single copy of a critical document, and if you must, never, never have your single copy only on a flash drive (or floppy, for those "old timers" who still use floppies).
    Regards.

  • Is there any way to print the data inside the Notes field of the MIR6 report

    Hello Gurus.
    We need to include the data inside the Notes field in the MIR6 - INVOICE OVERVIEW - report.
    Is there any way to also print the data (comments) inside the Notes field in the report?
    We found that the only way is to open the Notes and print them, but that takes time. Any ideas?
    Rgds.
    MCM.

    There's nothing built-in that does that. If you only have text fields and they don't have any formatting or other property that would prevent it (e.g., Date, character limit), you can run a simple script to populate each field with the field name, and then print. A more complicated approach would be a script that adds text annotations near/over each field that shows the field name. This would just be for documentation purposes, but it's possible. Post again if you'd like help with the first. You'll probably have to pay someone for the second approach if you don't want to do it yourself.

Maybe you are looking for

  • External 20" Display on the MacBook

    Hi! I'm thinking about buying an external display for my MB, sadly a Cinema Display is not within budget. I was looking at a Samsung 205BW with a reslution of 1680x1050, but somewhere I read it doesn't work on the MacBook connected through DVI. Is an

  • How to control the permission for reports in share folder?

    Hi Experts, In OBIEE 11.1.1.6.0. I have created two folders in share folder,one is sales folder which contains some sales reports,and the other is dashboard folder which contains some dashboard pages that have these sales reports. So I want to new us

  • Get Information About Open Windows

    Hi everyone!!!! I'm trying to make some kind of bot, the primary Idea is to automatize tasks. To do this, it would be really usefull to have information about the windows that are open, for example, if I open the notepad, It has a title, width, heigh

  • SM  Wily Introscope Version 8 doc question

    We also installed Wily Introscope Version 8, however there was a part mention as below The Solution Manager Diagnostics provides an application that performs the setup of the Introscope bytecode agent for Java automatically. This section explains the

  • Remotely connecting to oracle linux on vmware from putty

    Hi, I have installed Oracle Linux on Vmware workstation. The Host is Windows 8. From the Host, I am able to connect via putty. But from another laptop, I am unable to connect via putty. Both, the laptop and the Host PC are on the same wifi. Can anyon