Disaster.

I have a big problem with my iPod. I was using it in my friend's car and plugged it into the USB port on the car radio to play music. Before this, it was completely fine. Once I plugged it in, it went haywire: it jumped from song to song and would not stay on one for more than thirty seconds. I tried to press play, but none of the songs would play. The battery is almost dead; perhaps that has something to do with it, although plugging it into the car port should have charged it. The next morning (today) I reset the settings on the iPod about ten times. That did nothing to improve its condition. Now I am docking it to my computer, and it is not being recognized by iTunes, or by my computer at all. I changed where the dock was plugged in, moving the connection from the keyboard to a port directly on the computer. Still nothing. It is docked, but not charging: the battery indicator gradually turns green, but the moment I remove it from the dock it immediately returns to dangerously red. This is all I have tried, other than making sure my iTunes software is up to date. There is not much more I can do now. I intended to restore it to factory settings, but I can't, because it won't sync and isn't even recognized by iTunes or my computer. I am in serious need of help; any assistance would be very welcome.

1. There's no easy way to show the full image on your TV. As your friend correctly said, it's a feature of the TV, and probably of 99% of all TVs. The part of the image we actually see is called the "TV Safe Area".
iDVD lets us view the (estimated) portion of the image that will be displayed on the TV. In iDVD, choose View > Show TV Safe Area.
The only way I know to fix this problem is to create a QuickTime movie that has a black background, with your movie layered on top of it, reduced in size. That requires QuickTime Pro. Essentially, you create a video sandwich, then you export it to a new movie that you import to iDVD.
It's a bit complicated. There may be instructions posted somewhere. This page at Dan's site describes how to build a sandwich with a still image. It's for an older version of QuickTime Pro but it might help:
http://danslagle.com/mac/iMovie/tips_tricks/6022.shtml
2. About the jaggies you see. There's a bug in iMovie HD that adds "jaggies" -- stair-stepping along sharp edges -- to still images when you do ALL of these:
1. Import a still image to iMovie HD with the Ken Burns Effect checkbox turned OFF.
2. Export the project to iDVD.
3. When iMovie HD asks, you grant permission to render before sending the project to iDVD. If you do, iMovie renders any un-rendered stills poorly, adding the jaggies.
To avoid iMovie adding the jaggies, do ANY of these:
• Turn on the Ken Burns checkbox before importing the images. (Best solution.) iMovie won't ask to render them later, at least not those clips.
• When asked, refuse permission to render the images after you share to iDVD. (This may cause OTHER clips to not be rendered that should be rendered.)
• Drag the iMovie HD project file into the iDVD project window. (This may cause OTHER clips to not be rendered that should be rendered.)
Once the jaggies have been added to a clip, the clip cannot be fixed. It's necessary to re-import the photo.
Karl

Similar Messages

  • A Tale of a Filevault Disaster and Recovery...

    Recently, one of my customers had a major disaster with their MacBook running 10.5.8:
    It all started when they decided they needed to free up some disk space on the internal drive, and thus they began to trash a large number of files to accomplish this.
    Apparently in doing so, they must have trashed some key files, because they managed to permanently lock themselves out of their FileVault-encrypted Home folder; the FileVault password no longer worked when they attempted to log back in after a restart, and after numerous unsuccessful login attempts, they received the message "your FileVault-protected home folder did not open and needs to be repaired. Click OK to repair the folder." Clicking OK did nothing, and at that point they requested my services.
    It also turns out they had no recent backup; the last backup was over 2 years old. They had misplaced the power cable for their Time Machine backup drive (a WD MyBook) and had never replaced it... (oy vey)
    When I got the machine, I used the installer cd to create a root user and look at the internal drive's status- 53G of 233G used. Over the course of several days, I took the following actions:
    1. Restarted and logged in as root, I attempted to open the user's sparsebundle with no success.
    2. Restarted to the installer CD and ran Disk Utility's Repair Disk, which returned no errors. Attempted to Repair Permissions but got no result; it appeared to hang.
    3. Figuring "Ok, I can't open the sparsebundle yet, so I'll at least start improving the overall health of the file system"- I started up the MB in Target Disk Mode connected to my G5 Quad running 10.5.7. From my G5, I used DU and Diskwarrior against the MB's single volume internal drive. DU returned no errors but Diskwarrior returned problems in red on a directory rebuild, so I repeated with Diskwarrior until all was green (3x).
    4. After this, tried again to open the user's sparsebundle with same negative result of the password not working.
    5. At this point, running out of solutions but anticipating eventual success (against all odds) I copied the sparsebundle to a spare drive as a backup.
    6. Did a Google search and eventually found an archived Apple discussions thread which seemed to indicate a possible solution:
    http://discussions.apple.com/thread.jspa?messageID=5881960
    To make a long story a bit shorter, what eventually totally saved my bacon was this info in a post by GuyEWhite in the archived thread:
    <<< So Guy, if you're out there- A MILLION Thanks!! >>>
    7. Type: sudo hdiutil attach /Users/username/username.sparseimage -nomount
    8. A bunch of funky things will happen, but it eventually should list a number of volumes, one with an HFS partition attached to it; note whether it says disk1s1 or disk2s2 or whatever.
    9. Type: sudo fsck_hfs -f /dev/rdisk2s2 (rdisk2s2 being where the HFS was from step 8, adding an "r" at the front)
    If fsck_hfs bails because of too many errors, just repeat until it says the FS is fine. That irks the bejeezus out of me as well, but has always worked for me so far.
    7. Still in TDM, and working on the backup copy, I used the above commands to first "see" the sparsebundle volume, and then to run fsck against it. Sure enough, my results were similar to the above: major errors returned the first 3 or 4 times I ran it, then eventually it said the FS was OK, and then VOILA! My first success- the sparsebundle volume actually mounted in Disk Utility and Diskwarrior, both of which were running in the background.
    8. The Filevault password now worked, and I ran Diskwarrior's "Rebuild Directory" multiple times against the mounted sparsebundle volume. Numerous errors were evident and Diskwarrior created a "Rescued Items" folder.
    Obviously, whatever the client had done did a major FUBAR to her home folder. Eventually the number of errors in red was reduced to a point where it would not reduce further. About 1G of data appeared to be in the Filevaulted Home folder. This didn't square with the supposed size of the sparsebundle, but I left that issue hanging for the moment.
    9. I repeated the command line procedure against the MB's internal drive- successfully mounting and decrypting the Filevaulted Home folder. Similar results were obtained where it took multiple passes of fsck to mount the disk, then a number of passes of Diskwarrior to repair the directory as much as possible.
    10. Knowing that I had rescued at least some of the client's files, and that I had a "repaired" backup of the sparsebundle, I decided at this point to wipe the MB's internal drive and do a complete re-install of the system and all software updates, knowing I would eventually be transferring her user files back over.
    11. Back to the discrepancy of visible files in the mounted sparsebundle:
    I decided to get out the howitzers and use Data Rescue.
    A "deleted files" scan found another 28G of actual user files!
    12. The task of organizing and transferring the user files back over to the MacBook was pretty labor-intensive but ultimately pretty successful for the most part; there were however a significant number of corrupted files which could not be salvaged. These mostly included user prefs, some Pages documents, and some images. Most all of her music and movies were still intact.
    At first I tried repairing some of the corrupt image files by using Graphic Converter to re-save images that Photoshop otherwise couldn't open. This proved to be incredibly tedious and instead, I simply imported all the images en masse into iPhoto. iPhoto very nicely refused import of all the corrupted images (something like a couple hundred out of multiple thousands).
    The end result was quite satisfactory: I got the client a new power cable for their MyBook and did a complete Time Machine backup to it. They were quite happy since I had recovered just about all their documents, music, movies, and photos...
    But without finding that post and method by GuyEWhite, the outcome probably would have been far different, so again- thanks Guy!
    <Edited by Host>

    This is the single most useful entry for FileVault problems I ever encountered! Thanks a lot!
    Just a few notes to add:
    (1) Use this to access your FileVault image:
    sudo hdiutil attach -nomount -readwrite -noautofsck -noverify /Users/(username)/(username).sparsebundle
    Reminder: "(username)" is your short username. This will prevent Snow Leopard from traing to repair while accessing the FileVault bundle.
    If your computer crashed your sparsebundle will be at "/Users/.(username)/(username).sparsebundle", not at "/Users/(username)/(username).sparsebundle", the difference is the "." in the folder name!
    (2) After repairing with fsck_hfs you should detach the sparse bundle again:
    hdiutil detach /dev/diskX
    where "X" can be anything from 0 to whatever, matching the output after the "hdiutil attach" command.
    Best,
    Apple*

  • Is anyone doing disaster recovery for a J2EE application?

    We generally use database log shipping to maintain a standby database for our ABAP instances.  We can successfully fail over our production application to our disaster recovery site with no real issues.  With the J2EE instances (EP, ESS/MSS, BI, etc), we have a few concerns:
    hostname cannot change without going through a system copy procedure, so we would have to keep the hostnames in DR the same (for example, ref: OSS note 757692 - changing the hostname is not supported)
    fully qualified domain name - from what I understand, there are potential issues with changing the fqdn, for example SSO certificates, BSPs, XI, etc.
    we can't keep both hostname and fqdn the same between DR and production, or we could never do a DR test.
    Has anyone implemented disaster recovery for any SAP J2EE application that has run into these concerns and addressed them?  Input would be greatly appreciated regarding how you addressed these issues, or how you architected your disaster recovery implementation.
    Regards,
    David Hull
    The Walt Disney Company

    I haven't done this personally, but I do have some experience with these issues in different HA environments.
    To your first point:  You can change the hostname; note 757692 tells you exactly how to do it.  However, like the note says, "Changing the name of a host server in a production system is not automatically supported by SAP."  When it says "supported by SAP" I think it means SAP the company, not SAP's software.  So I would contact SAP to see if this configuration would be covered under your service agreement.  Then you have to think about whether you want to do something that isn't "officially supported" by SAP.  Also, I'm sure you'll need some kind of additional licensing for the DR systems, as their hardware keys will be different.
    To your second point:  As for SSO certs (SAP Login Tickets), I think they should still work as long as the SID and client number of the issuing system remain the same.  I don't think they are hostname or fqdn dependent.  For BSPs, I would think they would still work as long as they use relative paths rather than absolute paths.  And for XI... I have no idea what kind of issues may arise; I'm not an XI guy.
    Again, I haven't done what you're describing myself.  This is just based on my HA experiences.
    Hope this helps a little,
    Glenn

  • Need Help In Returning to Mail; Thunderbird Creates Disaster!

    What I love about Mail is that I use multiple Macs and can keep all my .Mac and other email on the respective email servers, thus having access to all my email on all my machines wherever I might be. What I don't like about Mail is the weak spam filter.
    So, I tried Mozilla's Thunderbird, with disastrous results. Whereas Mail keeps all emails on your respective email servers by default, Thunderbird yanks all messages off the servers and puts them on your local computer, by default, and without asking your permission to do so.
    Does anyone know how I can import all that Thunderbird mail back to Mail? Mozilla's instructions on how to do so don't work.
    Thanks.

    My "judgemental and condesending attitude" is a consequence of your judgemental and outrageous attitude towards other people's work. It's you who started this thread with the wrong attitude. My attitude is a direct consequence of that.
    is not helpful on a forum like this
    Well, that's debatable, especially taking into consideration that how I behave depends entirely on how the participant I'm "talking" with behaves. I can see why my attitude might appear not being helpful to someone with your attitude, though.
    and unquestionably out of the ordinary
    This I agree with. Other participants might simply have ignored you, as I usually do in cases such as this. It happens, however, that your wrong comments could mislead other users into believing that Thunderbird could cause them some "disaster", which is clearly not the case, so I felt that I had to reply to you somehow.
    You are wrong in your comments and you clearly have
    no idea what you are talking about.
    LOL. Are you sure it's me who doesn't know what he/she is talking about? Would you dare enlighten me?
    your further comments are way out of line.
    Keeping people like me from making such comments is easy. Just don't make yourself comments which are also way out of line, such as "Thunderbird Creates Disaster!" when what happened in fact is just that you didn't set up the account the way you wanted it to work.
    Perhaps you might be better served to keep your "preachy"
    opinions to yourself.
    If you don't like your way out of line comments to be refuted in a similar way, then it's you who should keep your "preachy" opinions to yourself. It's you who started this thread and chose the tone that should be used in it, remember?

  • SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery

    We have a production SharePoint 2010 environment with one Web/App server and one SQL server.   
    We have a test SharePoint 2010 environment with one server (Sharepoint server and SQL server) and one AD (domain is different from prod environment).  
    Servers are Windows 2008 R2 and SQL 2008 R2.
    Versions are the same on prod and test servers.  
    We need to set up a test environment with the exact setup as production - we want to test disaster recovery. 
    We performed a backup of the farm from PROD and wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
    We tried to restore the whole farm from that backup in the test environment using Central Administration, but we got a message that the restore failed with errors.
    We chose the NEW CONFIGURATION option during the restore, and we set new database names... 
    Some of the errors are:
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified user or domain group was not found.
    Warning: Cannot restore object User Profile Service Application because it failed on backup.
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Verbose: Starting object: WSS_Content_IT_Portal.
    Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
    FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
    RESTORE DATABASE is terminating abnormally.
    FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    ArgumentException: The IIS Web Site you have selected is in use by SharePoint.  You must select another port or hostname.
    FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Aborted due to error in another component.
    Could you please help us to resolve these issues?  

    I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
    Anything I can add would be a repeat of what Jesper has wisely said, but I would very much encourage you to look at these two resources:
    DR & back-up book by John Ferringer for SharePoint 2010
    John's back-up PowerShell Script in the TechNet Gallery
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • SharePoint 2013 Search - Disaster Recovery Restore

    Hello,
    We are setting up a new SharePoint 2013 farm with a separate Disaster Recovery farm as a hot standby.  In a DR scenario, we want to restore all content and service app databases to the new farm, then fix any configuration issues that might arise due to changes in server names, etc...
    The issue we're running into is that the search service components are still pointing to the production servers even though they're in the new farm with completely different server names.  This is expected, so we're preparing a PowerShell script to remove and then re-create the search components as needed.  The problem is that all the commands used to apply the new search topology won't function because they can't access the administration component (very frustrating).  It appears we're in a chicken-and-egg scenario: we can't change the search topology because we don't have a working admin component, but we can't fix the admin component because we can't change the search topology.
    The scripts below are just some of the things we've tried to fix the issue:
    $sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
    $local = Get-SPEnterpriseSearchServiceInstance -Local;
    $topology = New-SPEnterpriseSearchTopology -SearchApplication $sa;
    New-SPEnterpriseSearchAdminComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchIndexComponent -SearchTopology $topology -SearchServiceInstance $local -IndexPartition 0 -RootDirectory "D:\SP_Index\Index";
    $topology.Activate();
    We get this message:
    Exception calling "Activate" with "0" argument(s): "The search service is not able to connect to the machine that 
    hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' 
    in search application 'Search Service Application' is in a good state and try again."
    At line:11 char:1
    + $topology.Activate();
    + ~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
        + FullyQualifiedErrorId : InvalidOperationException
    Also, same as above with
    $topology.BeginActivate()
    We get no errors but the new topology is never activated.  Attempting to call $topology.Activate() within the next few minutes will result in an error saying that "No modifications to the search topology can be made because previous changes are being rolled back due to an error during a previous activation".
    Next I found a few methods in the object model that looked like they might do some good:
    $sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
    $topology = Get-SPEnterpriseSearchTopology -SearchApplication $sa -Active;
    $admin = $topology.GetComponents() | ? { $_.Name -like "admin*" }
    $topology.RecoverAdminComponent($admin,"server1");
    This one really looked like it worked.  It took a few seconds to run and came back with no errors.  I can even get the active list of components and it shows that the Admin component is running on the right server:
    Name ServerName
    AdminComponent1 server1
    ContentProcessingComponent1
    QueryProcessingComponent1
    IndexComponent1
    QueryProcessingComponent3
    CrawlComponent0
    QueryProcessingComponent2
    IndexComponent2
    AnalyticsProcessingComponent1
    IndexComponent3
    However, I'm still unable to make further changes to the topology (getting the same error as above when calling $topology.Activate()), and the service application in central administration shows an error saying it can't connect to the admin component:
    The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.
    Lastly, I tried to move the admin component directly:
    $sa.AdminComponent.Move($instance, "d:\sp_index")
    But again I get an error:
    Exception calling "Move" with "2" argument(s): "Admin component was moved to another server."
    At line:1 char:1
    + $sa.AdminComponent.Move($instance, "d:\sp_index")
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
        + FullyQualifiedErrorId : OperationCanceledException
    I've checked all the most common issues - the service instance is online, the search host controller service is running on the machine, etc...  but I can't seem to get this database restored to a different farm.
    Any help would be appreciated!

    Thanks for the response Bhavik,
    I did ensure the instance was started:
    Get-SPEnterpriseSearchServiceInstance -Local
    TypeName : SharePoint Server Search
    Description : Index content and serve search queries
    Id : e9fd15e5-839a-40bf-9607-6e1779e4d22c
    Server : SPServer Name=ROYALS
    Service : SearchService Name=OSearch15
    Role : None
    Status : Online
    But after attempting to set the admin component I got the results below.
    Before setting the admin component:
    Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
    IndexLocation : E:\sp_index\Office Server\Applications
    Initialized : True
    ServerName : prodServer1
    Standalone :
    After setting the admin component:
    Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
    IndexLocation :
    Initialized : False
    ServerName :
    Standalone :
    It's shown this status for a few hours now so I don't believe it's still provisioning.  Also, the search service administration is still showing the same error:
    The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.

  • 10.5.6 disaster

    well, what a disaster
    The 10.5.6 update via Software Update failed. Since then I have been experiencing almost everything: permission problems, Mail and iPhoto crashes, applications that use CarbonLib not working anymore, Compressor saying it's not been properly installed (I didn't do anything to it); you name it, I've got it.
    Called Apple support and they talked me through some permission repairs which I had already done before (they didn't believe me). The Java permissions could not be fixed; they sort of kept changing each other around, and it looked like the problem was that a lot of the Java installation works through aliases. Then they told me that the only way to fix this would be a clean install from my Leopard disc and then an upgrade via the combo update to 10.5.6.
    So there I went and bought a DNS-323 and two 1TB drives, picked up a $170 fine for not being buckled up, and slaved through the night to back up my system drive so I could format it. In the morning I formatted the system drive and re-installed Leopard. That worked like a charm. Then I went through various updates and software installations, and when it got to 10.5.6 I ran into the same problem again. The update failed. Same message, with the update file downloaded fresh (I did not use any old files anywhere): the update cannot be installed. Everything until then had gone fine. Heh, not a problem; I just did the same thing again and headed for the standalone download of the 10.5.6 combo update. Except the download doesn't verify after it's been downloaded. Now, what am I to do? I checked all the forums (the ones I know) and checked all the possible problems. I even tried downloading while connected directly to my cable modem via Ethernet, after one forum entry suggested the download could be corrupted by the wireless router. Same result. Then of course I do wonder why all the other disc images I downloaded and installed on the way to reconstructing my system work, but only the 10.5.6 download from the Apple site doesn't. Isn't that strange? Well, at the end of the rope you've got to call Apple. They've never let me down so far. But now they did. They simply told me that my 90 days of support since buying Leopard are over, and that the Apple support guy who four days ago talked me into buying $700 of hardware and formatting my system drive should not have told me that.
    I've lost about a week's worth of time dealing with this so far. And I am not even in a position right now where I have the system up and running. I'll try to restore the system to a 10.5.5 status, again installing from scratch.
    I haven't upgraded any of the other Macs yet and won't do so until I feel it is safe to do so.
    I am running one G5 Dual 1.8, an MBP, two MBs, an iPhone and not a single Windows box. When I left the Windows world, it was because I was tired of testing software they weren't willing to test themselves before releasing it to their customers. If what I am experiencing right now is the new Apple, then the ascent of the Mac world will come to a grinding halt very soon. If it weren't for the 2 ft of thick ice on the lake out there, that's where the Macs would have gone last night.
    And all I did was accept a system upgrade proposed to me by Apple. I am deeply concerned about where Apple is heading. Looks like they picked up a bunch of MS habits on the way to the top.
    Anybody out there with similar problems? I am especially interested in how I'd be able to download a valid copy of the 10.5.6 update. A couple of forums and blogs mention that there were corrupt update files on the Apple site, but the last time I downloaded was earlier today, and I'd expect Apple to have fixed a broken download file by now. So the problem must be somewhere else. But reading all the Mac forums out there, it is pretty obvious that this is a known problem. Something went very wrong with this update.
    And no, I am not frustrated, I am severely p...d off. I don't fill the fridge by spending four days solving Apple-created problems.
    C'mon, Apple, get your act together and help us.
    any help appreciated

    Thanks for your reply and sorry for the delayed response - have been away for a few days over the holidays.
    I understand your point about backups and disk permissions. I do have a backup copy and will adopt the 'verify disk' step in future; so, for me, the loss shouldn't amount to any more than the time I have spent, as frustrating as that is. However, the point I'd like to make is that doing a disk permissions check is fine for someone like me [or others on this forum], since I work in a hi-tech industry and have worked in a Unix/Linux environment for the last ~20 years, so I understand about permissions etc., but I know people with Macs for whom the key advantage is that they largely 'manage themselves' seamlessly. So, if the verify step is simply a pushbutton check-then-repair step and omitting it can lead to such fatal errors, then why can't it be built into the update script as the first automatic step? What I mean is: my computer reached this unusable state simply by pressing 'OK' on a planned update, so to the average-Joe user it looks like a 'stop and catch fire' mode.
    For info: I have now used Disk Warrior, which is reporting a disk malfunction, but can recover the user data. It seems to be a remarkable coincidence though, because there were no system problems exhibited before the update.

  • Disaster Recovery set-up for SharePoint 2013

    Hi,
    We are migrating our SP2010 application to SP2013. It would be on-premise setup using virtual environment.
    To handle disaster recovery situation, it has been planned to have two identical farms (active, passive) hosted in two different  datacenters.
    I have prior knowledge of disaster recovery only at the content DB level.
    My question is: how do we make the two farms identical, and how do we keep the databases of both farms always in sync?
    Also, if a custom solution is pushed to one of the farms, how does it replicate to the other farm?
    Can someone please help me understand this DR situation. 
    Thanks,
    Rahul

    Metalogix Replicator will replicate content, but nothing below the Web Application level (you'd still have to configure the SAs, Central Admin settings, deploy solutions, etc.).
    While AlwaysOn is a good choice, do remember that async AO is not supported across all of SharePoint's databases (see http://technet.microsoft.com/en-us/library/jj841106.aspx). Log shipping is a good choice, especially with content databases, as they can be placed in a read-only mode on the DR farm for an active crawl to be completed against them.
    Trevor Seward
    Follow or contact me at...
    &nbsp&nbsp
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • iCloud disaster destroyed all my Pages work on the iPad!

    I have an iPad 2 with Pages that I have used for months to write business articles and research papers for publication. I have been using iCloud to sync it through a single user account.
    Last week, I was forced to upgrade my iMac to Lion because corrupted account permissions locked me out of all my home folders as well as my two different external drives and even my Time Capsule and Time Machine, something supposedly impossible. But it all inexplicably happened. It took over a week for assorted sympathetic AppleCare specialists and engineers to try to sort it out. The original problem was never diagnosed or solved. Instead, my hard drive was erased, OS X 10.6 reinstalled, an earlier backup restored, and then Lion loaded as new. 
    As per AppleCare instructions, iCloud in Lion was immediately activated on the iMac, but no document files were ever shared.
    Four days later, after synching my iPad to iCloud, when I next opened Pages I watched in horror as in a mere second more than 30 documents vanished en masse right before my eyes.
    Somehow all these files never got backed up in iTunes, even though I had made several recent saves to the hard drive as well as an iCloud backup!
    And as for the files, or even the mobile documents library folder, being saved in my recent Time Machine backups, they never were. The backups contain no iOS Pages text nor any deleted folder files to restore. 
    Despite all the Time Machine and iTunes backups, and despite all the free space on the iPad where I actually did all the work without an internet connection, as soon as I synced they were mysteriously deleted by iCloud. 
    Pages on the iPad was left empty and blank. Replacing the mobile documents folder on my iMac from a backup did at least manage to give Pages some file icons with names and dates, but they are ghost documents, mere skeleton files that can’t be opened. All my actual text content is now apparently nowhere; lost in Apple’s iCloud ether!
    This is an iPad which for months I used for Pages work without any internet connection, 3G or WiFi, and yet in an instant iCloud on its own disintegrated everything it contained!  Apple essentially reached down into my device, into my personal property that I own, and without warning, notice, or permission simply erased all my work; all my personal labor and effort that belonged to me. They essentially committed sabotage and theft.
    I have nothing saved anywhere, despite being told that iCloud was a trustworthy safety net, that Time Machine provided worry-free backup, and that saving through iTunes could restore iPad content. Garbage. I did everything right and still got screwed.
    Further…
    In just the past two weeks alone I have suffered: my second iPhone -- a month-old replacement for a previous defective iPhone itself only a month old -- experiencing a “software corruption” that caused it to continuously shut down if unplugged; my iMac and Time Capsule failing; and now this catastrophic iPad disaster.
    If this is not resolved, with my files returned to me, I simply will never use iCloud and I will never trust Apple again.
    I am disgusted and furious beyond words.
    JC
    Atlanta

    Hi JC,
    My iPad is in a similar state, although I arrived at this bizarro state by a different path.  I am also waiting for the Apple experts to get back to me.  I have an iPad 1 with iOS 5.1.1 and Pages 1.7.1.
    In the meantime, while I was waiting, I found a way to extract the TEXT from a large Pages document I did not want to lose; it was the result of many years of notes.  This might save you some time instead of sifting through the raw index.db. 
    The index.db file is a sqlite3 database.  If you are on a Mac you can open it from a terminal with the command:
    sqlite3 index.db
    I suggest you move the file out of the iTunes directory before you start working with it, just in case something unexpected happens; you don't want to lose the only copy of the data you are trying to recover. 
    1) Once the file is open via 'sqlite3 index.db' from a terminal command line, type ".tables" at the 'sqlite>' prompt and press Enter.  It should show five tables:
    cullingState
    dataStates
    document
    objects
    relationships
    2) The table with your data is named dataStates, but it has many records in it.  I found my data by looking for the "largest" record.  Let me explain.  There are two fields in the dataStates table:
    identifier
    state
    Your data is stored in the 'state' field and its data type is 'blob'. (I know that sounds stupid, but stick with me.)
    3) I found the largest record by typing  this query at the 'sqlite>' prompt:
    select identifier, length(state) from dataStates order by length(state) asc;
    My 'dataStates' table had 599 records, so many lines fly by, but the largest state field is at the bottom of the list with its identifier.  Mine was:
    3|173212
    That identified a dataStates record with identifier '3' and 173,212 bytes of data in the 'state' field. Most of the bytes were printable ASCII.  It WAS the text of my original Pages document.
    DISCLAIMER:  This method does not retrieve ANY formatting, pagination, tables, bullets or numbering, images or other document rendering information.  It is just the TEXT.
    4)  Type this SQL command at the 'sqlite>' prompt to pull out the blob data:
    select hex(state) from dataStates where identifier=X;
    In my case the identifier was 3 so MY command was:
    select hex(state) from dataStates where identifier=3;
    5) Now is when the fun begins.   The large block of gibberish is 'ASCII-encoded binary', brought to you courtesy of the hex function. 
    To store it in a file, use the sqlite3 '.output' command and then type the preceding SQL select, like the following:
    .output binascii.txt
    select hex(state) from dataStates where identifier=3;
    .output stdout
    This takes the large block of 'ASCII-encoded binary' and writes it to a file named 'binascii.txt' rather than displaying it in the terminal window.  After the select statement ends, the output is switched back to the terminal window (stdout).  The 'binascii.txt' file is located in the same folder/directory that was the current directory when you launched the sqlite3 command.
    6) I moved my 'binascii.txt' file to a Windows computer at this point and wrote a simple conversion routine to decode the 'ASCII-encoded binary' into something usable with a word processor. In 'ASCII-encoded binary', each 8-bit byte of data is encoded as two ASCII characters; this enables sending binary data via the internet, etc., similar to uuencode, but NOT the same.
    7) Here is a small Python program to convert the 'binascii.txt' file to the text from your Pages document on a Mac:
    import sys

    # Read the hex dump produced by the sqlite3 .output step; strip the trailing
    # newline so the last hex pair does not choke int().
    p = sys.stdin.read().strip()
    s = []
    # Every two hex characters are one byte of the original blob.
    for location in range(0, len(p), 2):
        value = int(p[location:location+2], 16)
        # Keep printable ASCII plus newlines; drop everything else.
        if 32 <= value <= 126 or value == 10:
            s.append(chr(value))
    sys.stdout.write(''.join(s))
    8) (NOTE: The last line has two single quotes (') not a double quote (")). I am assuming at this point that you know how to create a text file using a terminal window.   Add the python script to a text file, make sure you honor the indentations!  I named my python file 'convert.py'.  To use the python script, type the following command:
    cat binascii.txt | python convert.py > your.txt
    This method also assumes that the files 'binascii.txt', 'convert.py' and 'your.txt' are all in the same folder/directory.
    9) The resulting file, 'your.txt' can be opened by Word or Pages (you can name the file 'your.txt' to whatever you want).  In my case, I opened the file 'your.txt' with MS Word.  It had some rudimentary formatting in the form of paragraph breaks, single lines, some headings etc.  I did lose the data stored in tables, although I expect the text is still lurking somewhere in the index.db file.
    There were some ascii characters that were not part of my original document, but they were pretty easy to find and remove.  In my case the errant ascii characters were outside my document's large text block.   Also numbered and bulleted blocks run together.  Again the formatting information is still buried within the index.db file.
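    If you'd rather skip the hex round-trip entirely, the same idea can be scripted directly against the database with Python's sqlite3 module. This is only a rough sketch of the steps above; it assumes, as in my case, that the largest 'state' blob in dataStates holds the document text, and it recovers plain text only:
    import sqlite3
    import sys

    # Work on a copy of index.db, never on the only copy you have.
    db_path = sys.argv[1] if len(sys.argv) > 1 else "index.db"

    conn = sqlite3.connect(db_path)
    # Grab the record with the largest 'state' blob (steps 2-4 above in one query).
    row = conn.execute(
        "select identifier, state from dataStates "
        "order by length(state) desc limit 1").fetchone()
    conn.close()
    if row is None:
        sys.exit("The dataStates table is empty.")

    identifier, blob = row
    print("Largest record: identifier", identifier, "with", len(blob), "bytes",
          file=sys.stderr)

    # Same filter as convert.py: keep printable ASCII and newlines only.
    text = "".join(chr(b) for b in blob if 32 <= b <= 126 or b == 10)

    with open("your.txt", "w") as out:
        out.write(text)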
    The iOS Pages file format is more cryptic than the simple zip/XML format from the desktop OS X version of Pages.  
    Hope this helps a little, although you seem to be OK with the approach of sifting through the raw index.db file.  The Pages document I wanted to recover was 63 pages long, and manually sifting through all the crap in the index.db drove me on a quest to find the real text within the index.db file, which I found.
    Sorry for the length of this post,
    Cheers

  • Recovering lost data from a very old backup (disaster recovery)

    Hi all,
    I am trying to restore and recover data from an old DAT-72 cassette. All I know is the date when the backup was taken, that is back in November 2006. I do not know the DBID or anything else except for the date.
    To recover this, I bought an internal SCSI HP c7438a DAT-72 tape drive on eBay and installed it on a machine running Windows 2003 Server SP2. I made a fresh Oracle 11g Enterprise Edition installation. HP tape drivers have been installed and Windows sees the tape drive without problem. To act as a Media Manager, I have installed Oracle Secure Backup. Oracle Secure Backup sees the HP tape drive without problems as well.
    I have to admit my information about Oracle is not very in-depth. I read quite a lot of documents, but the more I read the more confused I become. The closest thing I can find to my situation is the following guide about disaster recovery:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96566/rcmrecov.htm#1007948
    I tried the suggestions in this document without success (details below).
    My questions are:
    1. Is it possible to retrieve data without knowing the DBID?
    2. If not, is it possible to figure out the DBID from the tape? I tried to use dd in cygwin, also booted with Knoppix/Debian and Ubuntu CDs to dump the contents of the tape with dd but all of them failed to see the tape device. If there is any way to dump the raw contents of the tape on Windows, I would also welcome input.
    3. Is there any way at all to recover this data from the tape given all the unknowns?
    Thanks very much in advance,
    C:\Program Files>rman target orcl
    Recovery Manager: Release 11.2.0.1.0 - Production on Sat Mar 19 15:01:28 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    target database Password:
    connected to target database: ORCL (not mounted)
    RMAN> SET DBID 676549873;
    executing command: SET DBID
    RMAN> STARTUP FORCE NOMOUNT; # rman starts instance with dummy parameter file
    Oracle instance started
    Total System Global Area 778387456 bytes
    Fixed Size 1374808 bytes
    Variable Size 268436904 bytes
    Database Buffers 503316480 bytes
    Redo Buffers 5259264 bytes
    RMAN> RUN
    2> {
    3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
    4> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYS
    DATE-1575';
    5> }
    using target database control file instead of recovery catalog
    allocated channel: t1
    channel t1: SID=63 device type=SBT_TAPE
    channel t1: Oracle Secure Backup
    Starting restore at 19-MAR-11
    channel t1: looking for AUTOBACKUP on day: 20061125
    channel t1: looking for AUTOBACKUP on day: 20061124
    channel t1: looking for AUTOBACKUP on day: 20061123
    channel t1: looking for AUTOBACKUP on day: 20061122
    channel t1: looking for AUTOBACKUP on day: 20061121
    channel t1: looking for AUTOBACKUP on day: 20061120
    channel t1: looking for AUTOBACKUP on day: 20061119
    channel t1: no AUTOBACKUP in 7 days found
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 03/19/2011 15:03:26
    RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
    RMAN>
    RMAN> RUN
    2> {
    3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt
    4> PARMS 'SBT_LIBRARY=C:\WINDOWS\SYSTEM32\ORASBT.DLL';
    5> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYS
    DATE-1575';
    6> }
    allocated channel: t1
    channel t1: SID=63 device type=SBT_TAPE
    channel t1: Oracle Secure Backup
    Starting restore at 19-MAR-11
    channel t1: looking for AUTOBACKUP on day: 20061125
    channel t1: looking for AUTOBACKUP on day: 20061124
    channel t1: looking for AUTOBACKUP on day: 20061123
    channel t1: looking for AUTOBACKUP on day: 20061122
    channel t1: looking for AUTOBACKUP on day: 20061121
    channel t1: looking for AUTOBACKUP on day: 20061120
    channel t1: looking for AUTOBACKUP on day: 20061119
    channel t1: no AUTOBACKUP in 7 days found
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 03/19/2011 15:04:56
    RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
    RMAN>
    -----------------------------------

    Hi 845725,
    If the backups were created with OSB, it might be that you can query the tape with obtool:
    http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm
    To list pieces you could use <lspiece> within obtool:
    http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm#BHBBIFFE
    If this works, you should be able to identify the controlfile autobackup if it has the standard naming < c-dbid-date-xx >, and then you know the DBID, or you may be able to restore a controlfile from a backup piece in the output list.
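    If lspiece does give you piece names back, reading the DBID off a name in that standard format is trivial. A tiny Python sketch, with a made-up piece name for illustration (the DBID shown is just the one from your RMAN session):
    import re

    # Hypothetical controlfile autobackup piece name in the standard
    # c-<DBID>-<YYYYMMDD>-<sequence> format mentioned above.
    piece_name = "c-676549873-20061124-00"

    match = re.match(r"c-(\d+)-(\d{8})-(\w+)$", piece_name)
    if match:
        dbid, backup_date, seq = match.groups()
        print("DBID:", dbid)                # -> 676549873
        print("Backup date:", backup_date)  # -> 20061124
    else:
        print("Name does not follow the c-dbid-date-xx convention.")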
    You might also have to install the 9i or 10g RDBMS software, as 11g was not released until 2007, after this backup was taken.
    Anyway, good luck.
    Regards,
    Tycho

  • Welcome to the SQL Server Disaster Recovery and Availability Forum

    (Edited 8/14/2009 to correct links - Paul)
    Hello everyone and welcome to the SQL Server Disaster Recovery and Availability forum. The goal of this Forum is to offer a gathering place for SQL Server users to discuss:
    Using backup and restore
    Using DBCC, including interpreting output from CHECKDB and related commands
    Diagnosing and recovering from hardware issues
    Planning/executing a disaster recovery and/or high-availability strategy, including choosing technologies to use
    The forum will have Microsoft experts in all these areas and so we should be able to answer any question. Hopefully everyone on the forum will contribute not only questions, but opinions and answers as well. I’m looking forward to seeing this becoming a vibrant forum.
    This post has information to help you understand what questions to post here, and where to post questions about other technologies as well as some tips to help you find answers to your questions more quickly and how to ask a good question. See you in the group!
    Paul Randal
    Lead Program Manager, SQL Storage Engine and SQL Express
    Be a good citizen of the Forum
    When an answer resolves your problem, please mark the thread as Answered. This makes it easier for others to find the solution to this problem when they search for it later. If you find a post particularly helpful, click the link indicating that it was helpful.
    What to post in this forum
    It seems obvious, but this forum is for discussion and questions around disaster recovery and availability using SQL Server. When you want to discuss something that is specific to those areas, this is the place to be. There are several other forums related to specific technologies you may be interested in, so if your question falls into one of these areas where there is a better batch of experts to answer your question, we’ll just move your post to that Forum so those experts can answer. Any alerts you set up will move with the post, so you’ll still get notification. Here are a few of the other forums that you might find interesting:
    SQL Server Setup & Upgrade – This is where to ask all your setup and upgrade related questions. (http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/threads)
    Database Mirroring – This is the best place to ask Database Mirroring how-to questions. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabasemirroring/threads)
    SQL Server Replication – If you’ve already decided to use Replication, check out this forum. (http://social.msdn.microsoft.com/Forums/en-US/sqlreplication/threads)
    SQL Server Database Engine – Great forum for general information about engine issues such as performance, FTS, etc. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/threads)
    How to find your answer faster
    There is a wealth of information already available to help you answer your questions. Finding an answer via a few quick searches is much quicker than posting a question and waiting for an answer. Here are some great places to start your research:
    SQL Server 2005 Books Online
    Search it online at http://msdn2.microsoft.com
    Download the full version of the BOL from here
    Microsoft Support Knowledge Base:
    Search it online at http://support.microsoft.com
    Search the SQL Storage Engine PM Team Blog:
    The blog is located at https://blogs.msdn.com/sqlserverstorageengine/default.aspx
    Search other SQL Forums and Web Sites:
    MSN Search: http://www.bing.com/
    Or use your favorite search engine
    How to ask a good question
    Make sure to give all the pertinent information that people will need to answer your question. Questions like “I got an IO error, any ideas?” or “What’s the best technology for me to use?” will likely go unanswered, or at best just result in a request for more information. Here are some ideas of what to include:
    For the “I got an IO error, any ideas?” scenario:
    The exact error message. (The SQL Errorlog and Windows Event Logs can be a rich source of information. See the section on error logs below.)
    What were you doing when you got the error message?
    When did this start happening?
    Any troubleshooting you’ve already done. (e.g. “I’ve already checked all the firmware and it’s up-to-date” or "I've run SQLIOStress and everything looks OK" or "I ran DBCC CHECKDB and the output is <blah>")
    Any unusual occurrences before the error occurred (e.g. someone tripped the power switch, a disk in a RAID5 array died)
    If relevant, the output from ‘DBCC CHECKDB (yourdbname) WITH ALL_ERRORMSGS, NO_INFOMSGS’
    The SQL Server version and service pack level
    For the “What’s the best technology for me to use?” scenario:
    What exactly are you trying to do? Enable local hardware redundancy? Geo-clustering? Instance-level failover? Minimize downtime during recovery from IO errors with a single-system?
    What are the SLAs (Service Level Agreements) you must meet? (e.g. an uptime percentage requirement, a minimum data-loss in the event of a disaster requirement, a maximum downtime in the event of a disaster requirement)
    What hardware restrictions do you have? (e.g. “I’m limited to a single system” or “I have several worldwide mirror sites but the size of the pipe between them is limited to X Mbps”)
    What kind of workload does your application have? (Or is it a mixture of applications consolidated on a single server, each with different SLAs?) How much transaction log volume is generated?
    What kind of regular maintenance does your workload demand that you perform (e.g. “the update pattern of my main table is such that fragmentation increases in the clustered index, slowing down the most common queries so there’s a need to perform some fragmentation removal regularly”)
    Finding the Logs
    You will often find more information about an error by looking in the Error and Event logs. There are two sets of logs that are interesting:
    SQL Error Log: default location: C:\Program Files\Microsoft SQL Server\MSSQL.#\MSSQL\LOG (Note: The # changes depending on the ID number for the installed Instance. This is 1 for the first installation of SQL Server, but if you have multiple instances, you will need to determine the ID number you're working with. See the BOL for more information about Instance ID numbers.)
    Windows Event Log: Go to the Event Viewer in the Administrative Tools section of the Start Menu. The System event log will show details of IO subsystem problems. The Application event log will show details of SQL Server problems.

    Hi, I have a question on SQL database high availability. I have tried database mirroring on SQL Server Standard Edition, where synchronous mode is the only option available, and it has been causing problems, like SQL timeout errors on my applications, ever since I put the mirroring in place. As asynchronous mirroring is only available in Enterprise Edition, are there any suggestions on this? Thanks ---vijay

  • Advice on best way to setup Disaster Recovery for SOA Suite 10.1.3.4

    Hi Everyone,
    I need some advice on the best way to setup Disaster Recovery for a SOA Suite 10.1.3.4 install deploying JSF/ADF OC4J applications.
    The way we are trying to do it at the moment is manually copy the "applications" and "applications-deployments" folders for the OC4J application on the production server, then compress and ship the files across to the DR application server nightly. (We don't require high availability).
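    To be concrete, the nightly step amounts to something like the sketch below; the OC4J home path, folder names and DR host name here are placeholders for this post rather than our actual values:
    import subprocess
    import tarfile
    from datetime import date

    # Placeholder locations; the real OC4J home differs per install.
    oc4j_home = "/u01/app/oracle/product/10.1.3.4/OracleAS/j2ee/home"
    folders = ["applications", "applications-deployments"]
    archive = f"/tmp/oc4j_dr_{date.today():%Y%m%d}.tar.gz"

    # Compress the two OC4J folders into a single nightly archive.
    with tarfile.open(archive, "w:gz") as tar:
        for name in folders:
            tar.add(f"{oc4j_home}/{name}", arcname=name)

    # Ship it to the DR application server (ssh keys assumed to be set up).
    subprocess.run(["scp", archive, "oracle@dr-appserver:/u01/dr_staging/"],
                   check=True)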
    In the event of a disaster we then extract the files and copy them to the OC4J instance (pre-created and configured) on the DR server. Unfortunately, to date we haven't been able to reliably set up a DR application (we seem to mostly get 404 errors etc.), even though the OC4J application has its connection pool resolved to the DR database and is showing as "up" in the ASConsole.
    My question is, is there a more "native" way to do what we are trying to do. We do not have Enterprise version of SOA Suite or 11g database so any advanced recovery features are not an option. The setup is also stand alone, i.e. we are not using clustering or RAC etc.
    Any ideas would be really helpful.
    Thanks,
    Leigh.
    PS we are also running the production apps server with Oracle Application server 10g 10.1.2.3.0 as the HTTP apache server (with Forms, Reports and Discoverer deployed) and the SOA Suite 10.1.3 applications use the 10.1.2 HTTP server via the HTTP to AJP bridge. So the 10.1.3 OC4J instance is configured to use AJP on port range 12501 - 12600.

    For enterprise solutions, AS Guard would work.
    http://download.oracle.com/docs/cd/B25221_04/core.1013/b15977/disasrecov.htm#sthref303
    However, since advanced recovery options are not available (as you said), what you are doing should not be too bad.
    AMN

  • What is the quickest way to upgrade a disaster recovery system to NW04s in MSCS?

    Hi,
    I have an issue for which I need assistance from SAP/Windows 2003 experts.
    Hope you can share your opinion on an SAP upgrade issue we're grappling with. 
    Objective: Upgrade our SAP Bank Analyzer disaster recovery environment from NW04 to NW04s SR2, to be the same as production.  Both production and disaster recovery (DR) run under a two-node WIN2003 Enterprise MSCS.  The SAP CI usually runs on node A and Oracle 10g runs on node B.
    The Oracle software, parameters,  database files and  the SAP folder \usr\sap are  already matching. These files and folders are  on EMC mirrored disks, so this will  be automatically synchronized from production status.
    Therefore  I believe it is only the SAP software components on the local hard disks - binaries, DLLs, registry, DCOM, environment variable etc, that need to be upgraded on the two DR machines.
    For your information,  the current SAP 6.40 instance,  Oracle 10g instance, MSCS failover capability are all working in the DR environment as we just did an exercise a month ago to prove this.
    <u><b> Disaster Recovery Environment :</b></u>
    On shared drives that are synchronized between prod and DR  : SAP folder \usr\sap including the kernel binaries, Oracle parameter files, database datafiles, redologs, sapbackup etc.  
    <u><b>Issue :</b></u> What is the <b>quickest</b>  and <b>reliable</b> method to achieve the objective? Other than the SAP kernel binaries and profiles, I'm not certain what other related components are updated/replaced that are stored locally on each of the clustered machine.  
    There are three different approaches we can think of :
    1) Repeat the whole NW04s upgrade process in the DR environment. This takes several days. The exposure of not having a disaster recovery environment for several days is a minus.
    or
    2) Uninstall the SAP CI and MSCS definitions. Synchronise the DR shared disks from production to get the upgraded production NW04s database contents and SAP folder \usr\sap.
    Use SAPINST to re-install either the traditional CI or the ASCS option 'System Copy', and then re-create the MSCS definitions for the SAP CI using SAPINST again.
    The disadvantage is that I can't trial this in any of our other environments. Also, we want to run with the traditional CI instead of ASCS until early next year for other reasons, so with this method we would have to revert ASCS back to the traditional CI by reversing the steps in OSS Note 101190, "Splitting CI after upgrade to 700".
    or
    3) Somehow identify all the SAP components that are stored locally on each machine (e.g. environment variables, registry settings, DCOM definitions, DLLs in various folders), then copy them from the production to the DR machines or recreate them. I'm not confident about identifying the local SAP components, and I have not been able to find any good OSS Notes or forum threads on this topic. The most helpful OSS Note I can find is 112266, points 13 and 19. (A rough file-comparison sketch is shown after this post.)
    Thanks for any input.
    Benny Kwan
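
    Regarding approach 3, one hedged way to at least compare the file-level pieces (binaries, DLLs, profiles) between the production and DR nodes is to checksum both trees and list the differences. The Python sketch below uses placeholder paths; registry, DCOM and environment-variable settings would still have to be compared separately.
    import hashlib
    from pathlib import Path

    def checksum_tree(root: Path) -> dict[str, str]:
        """Map each file's path (relative to root) to its SHA-256 digest."""
        digests = {}
        for path in root.rglob("*"):
            if path.is_file():
                digests[str(path.relative_to(root))] = hashlib.sha256(
                    path.read_bytes()
                ).hexdigest()
        return digests

    def diff_trees(prod_root: Path, dr_root: Path) -> None:
        """Print every relative path whose content differs or is missing on one side."""
        prod, dr = checksum_tree(prod_root), checksum_tree(dr_root)
        for rel in sorted(set(prod) | set(dr)):
            if prod.get(rel) != dr.get(rel):
                print("DIFFERS or MISSING:", rel)

    if __name__ == "__main__":
        # Placeholder UNC paths to the locally installed SAP folders on each node.
        diff_trees(Path(r"\\PRODNODEA\saploc\SYS\exe"),
                   Path(r"\\DRNODEA\saploc\SYS\exe"))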

    See:
    * https://support.mozilla.com/kb/Form+autocomplete
    * Firefox > Preferences > Privacy > History: "Remember search and form history"
    * Autofill Forms: https://addons.mozilla.org/firefox/addon/autofill-forms/

  • iOS 7 Disaster for Music Collectors

    I just installed iOS 7 on my iPhone and began playing around. I’m sure most of the reaction to it in the coming days will focus on the look and feel. But the first thing I noticed is that the Music app is an absolute disaster for people who like music. If your music collection consists of a dozen Greatest Hits albums you bought in college, you’ll probably be fine. Anyone else should stay as far away as possible.
    Here’s the first problem: The “Artist” and “Album” views display large thumbnails of album art, which means only four entries are visible per screen. OK, there’s a hint of a fifth if you look closely.
    Like I said: Fine if you have a dozen or so albums in your collection. I have about 1,500 artists and 7,200 albums and over 100,000 songs. Scrolling through them just got much, much more difficult. Thanks, Apple!
    The second problem is worse. When you tap on an artist, the app takes you to all that artist’s albums (good) … but each album is expanded to show the songs it contains:
    So in order to scroll through an artist’s albums, you have to scroll past every song in every album. Again, not a problem for tiny music collections. But if you have several albums by a given artist, it quickly becomes annoying. It took me seven thumb-swipes to scroll through the 21 albums by Radiohead in my collection. The 500+ Pearl Jam albums? COMPLETELY UN-browsable.
    Speaking of which: It took a whopping 22 seconds from tapping on “Pearl Jam” to getting a list of Pearl Jam albums. That’s 22 seconds of the iPhone just sitting there, seemingly unresponsive. And that’s on an iPhone 5. Now, I have more than 500 Pearl Jam albums; that’s obviously not typical. But there’s a noticeable, if slight, lag when tapping on an artist with 15 or 20 albums. And, of course, the artists whose work you have the most of are the very artists you’re going to select most frequently, so those lags aren’t going to be an occasional thing. Not to mention I spent the better part of a YEAR organizing my music for iTunes, and with a click of a button you change it all up.
    All of this adds up to a Music app that is absolutely horrible for people who like music. Just a terrible user experience.
    What’s really stunning about this is that a lot of people came to the Apple world via the iPod and iTunes. I’m one of them. And over the last few years, Apple has steadily been making the user experience more and more miserable for people who like music. iTunes is a bloated, glitchy, terrible trainwreck of a program that -- when it works at all -- seems to make it harder to do what you want with your music with each new update. Mobile device hard drive space has stagnated at a level too small to house large collections. And the Music app in iOS 7 is the clearest indication yet that Apple just does not give a **** about people who have more than a couple hundred songs in their library. Which is bizarre, given that selling music is a pretty big part of Apple’s business.
    It seems increasingly obvious that what will end up driving me away is the escalating awfulness of Apple’s music apps on both OS X and iOS.
    UPDATE: I’ve tried several third-party music apps looking for a replacement for Apple’s built-in app. Thanks to everyone who suggested alternatives in the comments and on Twitter … none of which were quite right. But I may have finally found something that will work: Audyssey.
    Audyssey is marketed primarily as providing “professional audio technologies to optimize your music for your headphones” -- you can tell the app what headphones you’re using, and it will optimize audio playback to fit them. I haven’t spent any time playing around with that feature, so I can’t comment on its efficacy. But as a way of browsing and playing my iPhone’s music library, it’s far better than Apple’s app. Browsing by artist displays 10 artists per screen, more than twice as many as you see in the Apple app:
    Browsing by Album shows 8 albums per screen, again more than twice as many as you get in the Apple app. Selecting an Artist containing 20-25 albums results in no noticeable lag. (There is about a 2-3 second lag when I select Bruce Springsteen, which is mildly irritating, but far better than the 22 seconds Apple’s own app requires to bring up a list of albums.) Most importantly: Once you select an artist, you get a list of albums by that artist -- and only the albums, not every song each album contains. This allows you to easily browse through artists for whom you may have large collections. Want to see what songs are on an album? Just tap the album name, and you’ll go to a new screen. Perfect. Simple. Exactly the way things should work.
    Audyssey gives you immediate access to all of the songs on your iPhone (including iTunes Match files that are stored in the cloud but not locally), with all the basic controls -- play/pause/repeat/shuffle/forward/back/etc. Album art & basic controls appear on your phone’s lock screen when appropriate. You have full access to your iTunes playlists. If there’s a way to add songs to playlists or create playlists within the app, I haven’t found it. That doesn’t bother me at all -- I rarely if ever create playlists on my mobile devices.
    The one flaw that I’ve encountered is that if a song is in the cloud but not stored on the device itself, Audyssey can stream the song -- but can’t do so in the background, so if you close the app while such a song is playing, it will pause. That’s annoying, but won’t affect you at all unless you use iTunes Match. And if you’re planning on listening to several songs that are stored in the iTunes Match cloud (a whole album or playlist, for example) you can always use the built-in Apple Music app to download the songs first, then play them in Audyssey. Or, of course, you could just play them in the Apple app -- Audyssey is fine as a music player, but then so is the Apple app. Where Audyssey really shines is in browsing your library. I’ve only been playing with it for a few minutes, so it may turn out to be buggy (its responsiveness even when browsing a pretty large library is a good sign on that front) or to have some drawback I haven’t yet encountered, but for now it will be my default music player.
    Another problem in the new iOS 7 music app: Once you select an artist, that artist’s albums are not sorted alphabetically. They’re sorted, as best as I can tell, by release date. So if you have, say, 20 Rolling Stones albums and want to listen to Some Girls, not only do you have to scroll through every song contained on every album that is listed before Some Girls before you get to the album you want, but you also can’t speed the process along by swiping as quickly as you can until you get to albums beginning with “S.”
    Instead, you have to remember that Some Girls came out in 1978, so it’s between It’s Only Rock n Roll (1974) and Emotional Rescue (1980). Unless of course you have something from Black and Blue (1976), which you probably don’t -- but, really, who can remember? So, yeah, Some Girls will be listed after It’s Only Rock n Roll, and 3-4 albums after Sticky Fingers (1971). Right where you’d expect it -- assuming, of course, that you happen to remember the chronology of album releases by every artist in your entire music library.
    Unless, that is, you don’t have release date included for every song and every album in your library (and who does?) in which case the album you’re looking for is … who the **** knows, really?
    Oh, and the whole chronological order thing only works if all your albums have release-date metadata (unlikely) and if the release date metadata reflects the album’s original release date. For example: Let It Bleed was originally released in 1969, but on my phone the Music app lists it after Flashpoint (originally released in 1991) because my copy of Let It Bleed is apparently a re-issue from 2005. And Flashpoint is listed after Stripped, originally released in 1995, for similar reasons. And so on. As a result, the albums are listed in no real order whatsoever. You’re just going to have to scroll through the whole mess song by song.
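    To make the ordering problem concrete, here is a tiny Python sketch (with made-up year tags) showing how the same albums come out when sorted by the year on each copy versus by title:
    # Year values are made up to mirror the re-issue problem described above.
    albums = [
        {"title": "Let It Bleed", "year": 2005},   # tagged with a 2005 re-issue date
        {"title": "Sticky Fingers", "year": 1971},
        {"title": "Some Girls", "year": 1978},
        {"title": "Flashpoint", "year": 1991},
    ]

    by_year = [a["title"] for a in sorted(albums, key=lambda a: a["year"])]
    by_title = [a["title"] for a in sorted(albums, key=lambda a: a["title"])]

    print("Sorted by tag year:", by_year)   # Let It Bleed lands last
    print("Sorted by title:  ", by_title)   # predictable, browsable order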
    Don’t get me started on the live shows that I spent months and months and months arranging so they appear in chronological order AFTER the studio albums… now they are just thrown in between studio albums, and most of them are 25+ songs, so you scroll for literally 10 minutes or more trying to find an album… oh well, I’m at my destination and I did not listen to any songs. Thanks, Apple.
    In short: This is a terrible app. Just awful.

    After upgrading since around 2002, and becoming paranoid after losing 65 songs, I went to great lengths to make sure that I had back-ups of all music, sometimes in two or three places.
    After a harddrive crash (which is NEVER a matter of IF, but ALWAYS a matter of WHEN), I recovered all music and imported it into iTunes 10.X.X. Then I bought an iPhone 5 and needed to upgrade to iTunes 11.X.X.
    What a freaking disaster. Most of my 1,486 songs were duplicated 4-8 times.
    The reason I write all this is to say that MediaMonkey took what would otherwise have been many days of work deduplicating my songs and got it down to about 2 days.
    If you have not checked out MediaMonkey, it might be worth a look, because it seems to provide a nice front-end for iTunes collections, especially those containing hundreds of thousands of tunes. It also supports DLNA in case you have upper-end audio equipment, and 10 common codecs are included for an additional 10 bucks when you buy the 30-dollar program.
    Just my 2 cents.
    BTW - I agree, that ever since iTunes 4.X.X, it seems that Apple designers are driven by making it painful for serious users to easily and effectively use iTunes. I am ready to move completely away from Apple and utilize Amazon Cloud which my music-minded son-in-law has been using almost since its inception - and he is pretty much an Apple fanboy - but one with hundreds of thousands of songs...
    Apple get your **** together - or we will vote with our feet (and dollars).
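
    This is not MediaMonkey's actual algorithm, but for the simple "same file copied 4-8 times" case, a few lines of Python can flag exact duplicates by content hash; the library path below is a placeholder.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    MUSIC_DIR = Path("~/Music/iTunes/iTunes Media").expanduser()  # placeholder

    def find_duplicates(root: Path) -> dict[str, list[Path]]:
        """Group MP3 files by content hash and keep only groups with extra copies."""
        groups: dict[str, list[Path]] = defaultdict(list)
        for path in root.rglob("*.mp3"):
            groups[hashlib.md5(path.read_bytes()).hexdigest()].append(path)
        return {h: paths for h, paths in groups.items() if len(paths) > 1}

    if __name__ == "__main__":
        for digest, paths in find_duplicates(MUSIC_DIR).items():
            print(f"{len(paths)} identical copies:", *paths, sep="\n  ")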

  • How do I recover the music I have purchased after a Windows disaster?

    How do I recover the music I have purchased after a Windows disaster? Windows on this computer stopped working correctly. I was told by HP support that the registry was corrupted and that I had to re-install Windows and programs from CDs that came with the computer. I did this and Windows started working correctly again. The only trouble is that all of the music I bought is gone. How do I recover the music I have purchased from iTunes after a Windows disaster and after re-installing Windows?
    Please tell me that you know about all of my song purchases and that there is a way I can download all of these to this computer again without having to pay for them again.
    I have learned my lesson about Windows. In the short run I plan to buy a backup drive that I can plug into this computer that I can restore from if and when something like this happens to Windows again. In the long run I plan to buy an Apple computer the next time I buy a computer.

    You could try contacting iTunes Music Store Customer Service, and you may be able to persuade them to sanction a second free download; however, they are not under any obligation to do so. The policy on lost songs is that you have to pay to download them again:
    "Once a Product is purchased and you receive the Product, it is your responsibility not to lose, destroy, or damage the Product, and Apple shall be without liability to you in the event of any loss, destruction, or damage."
    You can email them from any of the links on this page: iTMS Customer Service

  • GE 70 OND Win 8.1 upgrade disaster.

    GE 70 OND gaming notebook.
    I hated the Win 8 that it came with and was using third-party workarounds to give me a workable machine, but then foolishly decided to try 8.1.
    First I needed to download Windows updates. I normally have these turned off as I've had trouble with "updates" before. This was no exception. During the installation of one batch of Win 8 updates the machine crashed and could not be recovered. System reset would not work fully. Machine bricked!!
    Bought a win 8.1 complete install disk and tried installing it after formatting the SSD. Finally got it going, and loaded drivers from the disk supplied with the machine.
    Most things sort of worked - enough to recover the files I had been working on. At this point I felt it was mainly drivers causing the trouble, so decided to format the disks again and do a complete clean install.
    Big mistake....
    Now I can't even get through the Win 8.1 file setup. At between 10 and 20% through unpacking files it crashes with a 0x80070570 error. I have also had a BAD_POOL_ERROR.
    It won't let me load Win 7 either.
    Now it just goes to the BIOS screen when I turn it on.
    Any suggestions ???
    It's under warranty, but if I can fix it without sending it away, I'll be happy.
    Peter

    UPDATE:
    Still waiting for the install disks from MSI. I don't expect to see them after all this time.
    Bought an OEM copy of Win 8.0, but it still won't install. It still stops at around 18%.
    I've now spent $300 trying to get out of this upgrade disaster, and I think I'll just have to call it quits.
    The plan is to buy a new notebook **removed different brand promotion**
    I'll leave the MSI in the cupboard in case I ever do find a cure.

Maybe you are looking for

  • Itunes erased all the album and artist info when reloading library!!!

    My problem is a big one. I've got 8600+ songs (800+ albums) on my hard drive. These are all from physical CDs; I ripped them in wav, then converted them to mp3. When I did this, I kept both the wav AND mp3 files in each respective album folder; about

  • Connecting apple tv2 to a projector, but need to use optical from apple tv to connect to something, to get sound.. What do I buy

    I've got an Apple TV and a new HD projector. I've now got a huge lovely picture on the wall, but I've zero sound.... The projector has no output for sound and is connected by HDMI... which leaves me with one option. Using the optical out from Apple TV

  • Unable to calculate DEPRECIATION

    Hi All, I am Doing CAPEX Planning for one of the client.. In Cape I am Calculating The Depreciation based on simple formula Depreciation=StdCapex/Life.. My LOgic is.. *XDIM_MEMBERSET RptCurrency=LC *XDIM_MEMBERSET CAPACCOUNT=StdCapex,Life,Depreciatio

  • Flash Builder 4-Adobe Application Manager

    I installed a trial version of Flash builder, but everytime when I want to open/run it it gives me this error: "Adobe Application Manager is needed to run this trial, Download the Adobe Application Manager from www.dobe.com"...I searched for this to

  • Table field buzei

    Is there any table i can get the details BUZEI - Number of Line Item Within Accounting Document quickly given the Accounting Document Number...... Thanks Keshini