Downgrading the kernel without breaking my system

Hello,
A few months ago, I was using an old (2011) version of the kernel because of regressions in WiFi support and in sound quality. But when /lib became a symlink and the glibc update came ( http://www.archlinux.org/news/the-lib-d … a-symlink/ ), my system broke, and I had to upgrade to a more recent kernel using a LiveCD.
So, is there a way to downgrade safely to an older version of the kernel without breaking my system?
Thanks in advance.

/lib is a symlink... what happens if you try to install the older kernel?
Alternatively, re-build the package of an older kernel. (Though I doubt re-building is necessary, I offer it as a possibility.)
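For what it's worth, the usual Arch route is to reinstall the older package straight from the local pacman cache and then pin it so the next -Syu does not pull the new one back in. A minimal sketch, assuming the old package is still in /var/cache/pacman/pkg (the file name and version below are examples only); whether a 2011-era package still installs cleanly on a /lib-as-symlink system is exactly the open question here:

    # reinstall an older kernel package kept in the local cache
    pacman -U /var/cache/pacman/pkg/linux-3.0.7-1-x86_64.pkg.tar.xz

    # then keep pacman -Syu from upgrading it again, in /etc/pacman.conf:
    # IgnorePkg = linux

If the old package conflicts with the current glibc/filesystem layout, rebuilding it against the current system (as suggested above) is the safer path.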

Similar Messages

  • [SOLVED] Downgrading the Kernel

    I just did a system upgrade which gave me the 2.6.32-ARCH Kernel. Problem is, the RT2860 wifi driver doesn't seem to play nicely with this kernel. I now have no network connection.
    From what I can see, there doesn't seem to be a patch available for this, which is why I want to revert to my old kernel. I had a looksy at the ArchWiki page on downgrading packages, but my problem lies with... what packages do I need to downgrade?
    Do I just 'pacman -Rsn kernel26 kernel26-firmware kernel26-headers' and then install those three packages from my pacman cache? How do I go about fixing modules and whatnot? OSS (the sound system) complained about requiring the kernel headers package. Do I also need to downgrade this?
    Last edited by tehkane (2010-01-03 04:09:45)
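    For reference, a downgrade from the local package cache can be done in one transaction, which also avoids the separate pacman -Rsn step (pacman -U simply replaces the installed versions). A sketch only; the file names and version are hypothetical, use whatever is actually in /var/cache/pacman/pkg:

        pacman -U /var/cache/pacman/pkg/kernel26-2.6.31.6-1-x86_64.pkg.tar.gz \
                  /var/cache/pacman/pkg/kernel26-firmware-2.6.31.6-1-x86_64.pkg.tar.gz \
                  /var/cache/pacman/pkg/kernel26-headers-2.6.31.6-1-x86_64.pkg.tar.gz

    Downgrading all three together keeps the headers in step with the kernel, which is what OSS was complaining about.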

    There is a thread on this issue http://bbs.archlinux.org/viewtopic.php?id=86447
    The driver works fine: 2.6.32 renamed the interface from ra0 to wlan0 - if you make the change in your network manager you won't need to downgrade the kernel.
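    If you take the rename route instead of downgrading, a quick sanity check looks roughly like this (a sketch; dhcpcd is assumed as the DHCP client, adjust to whatever your network manager uses):

        ip link                 # the rt2860 interface should now appear as wlan0 instead of ra0
        ip link set wlan0 up
        dhcpcd wlan0            # or change ra0 -> wlan0 in your network manager's profile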

  • Does compiling the kernel make the system run significantly faster?

    It used to be that when you compiled your own kernel (at least ten years ago), you would get a significant speed boost. However, with the speed of machines nowadays, I wonder how much difference it makes, if any. Does compiling the kernel give a significant increase in daily performance or otherwise? If so, what portion of the system does it make faster than before? In short, is it worth it? I appreciate any comments.

    lilsirecho wrote:
    Broch:
    The title of this thread doesn't limit responses to just redoing the kernel since it says "system run significantly faster".
    The kernel is not the system... the system runs from the HDD and RAM.
    To speed up that process, run everything possible in RAM.
    I think loading 500 packages from scratch to the desktop in 45 seconds is fast. And running after that in RAM is super fast system speed, "significantly faster".
    The speed I quote is possible now. With UDMA capability in the kernel, at least halving the install time is probable.
    And that is for over 500 packages into KDE, no less!
    Using flash on IDE removes the seek latency from the system and therefore must be faster.
    Changing the kernel will indeed speed up the system... but only if it includes UDMA for the flash drives already available but not configured with the present kernels.
    Using flash drives for IDE cache repos will allow faster boot times and reduce the load on the system. You don't normally run many, many programs at once, so put them in a cache with a script to permit fast loading.
    That is the "system running significantly faster", and all in RAM, the fastest storage in the computer.
    No.
    The question was "Does compiling the kernel make the system run significantly faster?": compiling the kernel, not whatever else.
    I did not suspect that this would require explanation, but if you really want the noise, let's keep just "faster?", and then the answer is: yes/no, whatever that means. Or you are joking.
    Anyway, it seems that results may vary.
    Last edited by broch (2008-02-06 04:32:01)

  • Kernel Upgrade Breaks Network Connection

    My server uses:
    [15:48] (tunix@kulupler ~)$ pacman -Qs kernel
    local/kernel26 2.6.19.1-1
        The Linux Kernel and modules
    local/kernel-headers 2.6.19.1-1
        Kernel headers sanitized for use in userspace
    for the kernel. The latest kernel upgrade (2.6.19.2-1) breaks the network connection about 5 minutes after boot. When the network connection fails, the kernel says:
    Jan 12 15:10:44 kulupler ADDRCONF(NETDEV_UP): eth0: link is not ready
    The connection never comes back, neither after a network service restart nor after ifconfig eth0 down/up, etc. The only solution is to reboot, but after ~5 minutes it fails again.
    So, as a solution, I just downgraded the kernel to 2.6.19.1-1 again.
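    If you stay on 2.6.19.1-1, it is worth holding the package so a routine pacman -Syu does not reintroduce the regression; a sketch of the relevant line in /etc/pacman.conf:

        # /etc/pacman.conf
        IgnorePkg = kernel26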


  • [solved...sortof]kernel upgrade breaks system

    Hi, I am unable to log in to Arch in any way after doing a system upgrade.
    The kernel was upgraded along with a couple of other things, and I now can't start X, nor get to a tty.
    My target was graphical at first, then I used an install disc to disable graphical and enable multi-user.target.
    multi-user.target presents me with a blank black screen.
    graphical.target presents me with a blank blue screen (the blue from LXDM) which flashes on some boots.
    I'm using the xf86-video-ati driver. Will try and downgrade the kernel now.
    Last edited by jrussell (2012-12-21 20:36:22)

    OK, the LTS kernel works.
    Here is my pacman log:
    [2012-12-15 11:32] Running 'pacman -Syu'
    [2012-12-15 11:32] synchronizing package lists
    [2012-12-15 11:33] starting full system upgrade
    [2012-12-15 11:52] upgraded chromium (23.0.1271.95-1 -> 23.0.1271.97-1)
    [2012-12-15 11:52] upgraded cpupower (3.6-1 -> 3.7-2)
    [2012-12-15 11:52] upgraded gnupg (2.0.19-2 -> 2.0.19-3)
    [2012-12-15 11:52] upgraded htop (1.0.2-1 -> 1.0.2-2)
    [2012-12-15 11:52] ==> The "block" hook has replaced several hooks:
    [2012-12-15 11:52] fw, sata, pata, scsi, virtio, mmc, usb
    [2012-12-15 11:52] Replace any and all of these in /etc/mkinitcpio.conf with a single
    [2012-12-15 11:52] instance of the "block" hook
    [2012-12-15 11:52] upgraded mkinitcpio (0.11.2-1 -> 0.12.0-2)
    [2012-12-15 11:53] >>> Updating module dependencies. Please wait ...
    [2012-12-15 11:53] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2012-12-15 11:53] ==> Building image from preset: 'default'
    [2012-12-15 11:53] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
    [2012-12-15 11:53] ==> Starting build: 3.6.10-1-ARCH
    [2012-12-15 11:53] -> Running build hook: [base]
    [2012-12-15 11:53] -> Running build hook: [udev]
    [2012-12-15 11:53] -> Running build hook: [autodetect]
    [2012-12-15 11:53] -> Running build hook: [modconf]
    [2012-12-15 11:53] -> Running build hook: [block]
    [2012-12-15 11:53] -> Running build hook: [filesystems]
    [2012-12-15 11:53] -> Running build hook: [usbinput]
    [2012-12-15 11:53] -> Running build hook: [fsck]
    [2012-12-15 11:53] ==> Generating module dependencies
    [2012-12-15 11:53] ==> Creating gzip initcpio image: /boot/initramfs-linux.img
    [2012-12-15 11:53] ==> Image generation successful
    [2012-12-15 11:53] ==> Building image from preset: 'fallback'
    [2012-12-15 11:53] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
    [2012-12-15 11:53] ==> Starting build: 3.6.10-1-ARCH
    [2012-12-15 11:53] -> Running build hook: [base]
    [2012-12-15 11:53] -> Running build hook: [udev]
    [2012-12-15 11:53] -> Running build hook: [modconf]
    [2012-12-15 11:53] -> Running build hook: [block]
    [2012-12-15 11:53] -> Running build hook: [filesystems]
    [2012-12-15 11:53] -> Running build hook: [usbinput]
    [2012-12-15 11:53] -> Running build hook: [fsck]
    [2012-12-15 11:53] ==> Generating module dependencies
    [2012-12-15 11:53] ==> Creating gzip initcpio image: /boot/initramfs-linux-fallback.img
    [2012-12-15 11:53] ==> Image generation successful
    [2012-12-15 11:53] upgraded linux (3.6.9-1 -> 3.6.10-1)
    [2012-12-15 11:53] upgraded pcmciautils (018-4 -> 018-5)
    [2012-12-15 11:53] upgraded perl (5.16.2-1 -> 5.16.2-2)
    [2012-12-15 11:53] upgraded python2 (2.7.3-2 -> 2.7.3-3)
    [2012-12-15 11:53] upgraded vim-runtime (7.3.712-1 -> 7.3.754-1)
    [2012-12-15 11:53] upgraded vim (7.3.712-1 -> 7.3.754-1)
    [2012-12-15 11:53] Exited with code 0
    [2012-12-15 12:17] Running 'pacman -S linux-lts'
    [2012-12-15 12:17] Exited with code 130
    [2012-12-15 12:18] Running 'pacman -S linux-lts'
    [2012-12-15 12:18] Exited with code 130
    [2012-12-15 12:18] Running 'pacman -S linux-lts'
    [2012-12-15 12:18] Exited with code 130
    [2012-12-15 12:19] Running 'pacman -S linux-lts'
    [2012-12-15 12:25] >>> Updating module dependencies. Please wait ...
    [2012-12-15 12:25] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2012-12-15 12:25] ==> Building image from preset: 'default'
    [2012-12-15 12:25] -> -k /boot/vmlinuz-linux-lts -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-lts.img
    [2012-12-15 12:25] ==> Starting build: 3.0.54-1-lts
    [2012-12-15 12:25] -> Running build hook: [base]
    [2012-12-15 12:25] -> Running build hook: [udev]
    [2012-12-15 12:25] -> Running build hook: [autodetect]
    [2012-12-15 12:25] -> Running build hook: [modconf]
    [2012-12-15 12:25] -> Running build hook: [block]
    [2012-12-15 12:25] -> Running build hook: [filesystems]
    [2012-12-15 12:25] -> Running build hook: [usbinput]
    [2012-12-15 12:25] -> Running build hook: [fsck]
    [2012-12-15 12:25] ==> Generating module dependencies
    [2012-12-15 12:25] ==> Creating gzip initcpio image: /boot/initramfs-linux-lts.img
    [2012-12-15 12:25] ==> Image generation successful
    [2012-12-15 12:25] ==> Building image from preset: 'fallback'
    [2012-12-15 12:25] -> -k /boot/vmlinuz-linux-lts -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-lts-fallback.img -S autodetect
    [2012-12-15 12:25] ==> Starting build: 3.0.54-1-lts
    [2012-12-15 12:25] -> Running build hook: [base]
    [2012-12-15 12:25] -> Running build hook: [udev]
    [2012-12-15 12:25] -> Running build hook: [modconf]
    [2012-12-15 12:25] -> Running build hook: [block]
    [2012-12-15 12:25] -> Running build hook: [filesystems]
    [2012-12-15 12:25] -> Running build hook: [usbinput]
    [2012-12-15 12:25] -> Running build hook: [fsck]
    [2012-12-15 12:25] ==> Generating module dependencies
    [2012-12-15 12:25] ==> Creating gzip initcpio image: /boot/initramfs-linux-lts-fallback.img
    [2012-12-15 12:25] ==> Image generation successful
    [2012-12-15 12:25] installed linux-lts (3.0.54-1)
    [2012-12-15 12:25] Exited with code 0
    *EDIT
    downgrading kernel:
    [2012-12-15 12:34] Running 'pacman -U /var/cache/pacman/pkg/linux-3.6.9-1-x86_64.pkg.tar.xz'
    [2012-12-15 12:34] >>> Updating module dependencies. Please wait ...
    [2012-12-15 12:34] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2012-12-15 12:34] ==> Building image from preset: 'default'
    [2012-12-15 12:34] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
    [2012-12-15 12:34] ==> Starting build: 3.6.9-1-ARCH
    [2012-12-15 12:34] -> Running build hook: [base]
    [2012-12-15 12:34] -> Running build hook: [udev]
    [2012-12-15 12:34] -> Running build hook: [autodetect]
    [2012-12-15 12:34] -> Running build hook: [modconf]
    [2012-12-15 12:34] -> Running build hook: [block]
    [2012-12-15 12:34] -> Running build hook: [filesystems]
    [2012-12-15 12:34] -> Running build hook: [usbinput]
    [2012-12-15 12:34] -> Running build hook: [fsck]
    [2012-12-15 12:34] ==> Generating module dependencies
    [2012-12-15 12:34] ==> Creating gzip initcpio image: /boot/initramfs-linux.img
    [2012-12-15 12:34] ==> Image generation successful
    [2012-12-15 12:34] ==> Building image from preset: 'fallback'
    [2012-12-15 12:34] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
    [2012-12-15 12:34] ==> Starting build: 3.6.9-1-ARCH
    [2012-12-15 12:34] -> Running build hook: [base]
    [2012-12-15 12:34] -> Running build hook: [udev]
    [2012-12-15 12:34] -> Running build hook: [modconf]
    [2012-12-15 12:34] -> Running build hook: [block]
    [2012-12-15 12:34] -> Running build hook: [filesystems]
    [2012-12-15 12:34] -> Running build hook: [usbinput]
    [2012-12-15 12:34] -> Running build hook: [fsck]
    [2012-12-15 12:34] ==> Generating module dependencies
    [2012-12-15 12:34] ==> Creating gzip initcpio image: /boot/initramfs-linux-fallback.img
    [2012-12-15 12:34] ==> Image generation successful
    [2012-12-15 12:34] installed linux (3.6.9-1)
    [2012-12-15 12:34] Exited with code 0
    Kernel 3.6.9-1 works, 3.6.10-1 does not. I don't know which other logs I should post.
    Last edited by jrussell (2012-12-15 10:38:32)
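    One thing the log does not show is a boot loader entry for the LTS kernel; without one you cannot actually pick it at boot. A sketch for syslinux, assuming /boot/syslinux/syslinux.cfg and root on /dev/sda2 (both placeholders; GRUB users would add an equivalent menuentry instead):

        LABEL archlts
            MENU LABEL Arch Linux LTS
            LINUX ../vmlinuz-linux-lts
            APPEND root=/dev/sda2 rw
            INITRD ../initramfs-linux-lts.img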

  • UEFI kernel Stub without copying or renaming the kernel

    Hi,
    I am discovering UEFI to prepare my new PC setup, which will only run ArchLinux, like my other personal & professional PCs.
    As the Linux kernel offers a UEFI stub, I am considering avoiding an "external" boot loader: I don't really see the point in my use case (no dual boot).
    Reading the Wiki pages carefully, it seems that it is required to either copy the kernel (and possibly the initramfs images)
    mv /boot/vmlinuz /boot/efi/EFI/archlinux/vmlinuz.efi
    or at least rename it, either manually or via a systemd/mkinitcpio script.
    I may have misunderstood, as I find the information and instructions quite confusing on the Wiki (but I need to understand better before suggesting any improvement).
    I don't much like the idea of copying the kernel; I'd like the system configuration to be modified as little as possible, as that's the easiest way to make it last and benefit from ArchLinux's evolution in the coming years (and that's what can be expected from a Linux distribution, by the way: avoiding plenty of administration scripts for basic stuff).
    Hence, a few questions:
    - is it required to have a file named with a .efi extension?
    - will it work with a gzip-compressed vmlinuz image (which is ArchLinux's default)? I am a bit surprised that it could work
    - a symlink would do the copy/rename trick... but we're stuck with FAT32, which does not offer symlinks... or did I miss something?
    My impression so far is that the Linux UEFI stub (or ArchLinux) is not really ready yet. If you confirm that, I will give up and use what should be useless now: a boot loader.
    Thanks
    Last edited by iTanguy (2014-09-08 19:34:08)
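    For the record, registering the EFISTUB kernel directly from Linux is done with efibootmgr. A minimal sketch, assuming the ESP is the first partition of /dev/sda and is mounted at /boot, with root on /dev/sda2 (devices, paths and the label are examples only); no renaming or .efi extension is needed for this, although flaky firmware is exactly where it tends to fall over:

        efibootmgr --create --disk /dev/sda --part 1 \
            --label "Arch Linux (EFISTUB)" --loader /vmlinuz-linux \
            --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'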

    Thank you all, it really helps !
    I am back from the weekend (I was away from my new PC, so I could hardly answer before) and could do a few tests this evening (before I had to leave the screen for a live TV program that could not be missed by my roommate).
    So
    - still did not manage to make the Linux EFISTUB boot directly, either using efibootmgr or from the EFI shell v1 included on the archlinux install image
    - gave gummiboot a try, which indeed worked: it booted from disk into Arch at least once \o/ (then it was time for the live TV program...)
    By the way, it means that booting the EFISTUB indirectly works, as gummiboot apparently requires an EFI-stub Linux image.
    teateawhy wrote:There are good reasons to make use of a boot loader, even without dual boot. For example a boot loader allows you to edit the kernel parameters easily without changing the EFI firmware entry.
    Agreed, though I don't modify the boot options often (much less often than kernel updates arrive on ArchLinux).
    I am not against the idea of a boot manager, I have been using lilo & grub for years.
    And maybe I will also install linux-lts as a fallback kernel as well, so that could be another reason.
    But I still want to give the direct-to-EFISTUB way a try.
    You may see it as curiosity. But if it works (for a simple use case, with limitations), I will at last consider UEFI a real improvement over BIOS (for my simple use cases).
    teateawhy wrote:
    - will it work with a gzip-compressed image vmlinuz (which is ArchLinux's default) ? I am abit surprised that it could work
    EDIT: Yes that works and here is how:
    "vmlinuz is not merely a compressed image. It also has gzip decompressor code built into it"
    Thanks, this is very interesting. I like to know how and why things work, or sometimes don't. (In fact, I am stupid, I should have guessed: how else could it work?)
    teateawhy wrote:
    My impression so far is that Linux UEF stub (or ArchLinux) is not really ready yet. If you confirm that, I will give up and use what should be useless know, a boot loader.
    It is ready now. But using the stub is not as convenient as booting with a boot loader.
    Hmm, re-reading myself, my words are far more aggressive and definitive than what I meant, especially considering that I had only spent 2 hours reading and trying to boot. But today, I am still disappointed not to have managed yet to boot the EFI stub directly (after more reading, these questions and answers, and 2 hours of tests tonight).
    Hopefully, the problem is "only" my poor-quality EFI firmware? (I have an Intel NUC D54250WYK, by the way)
    Tests to be continued tomorrow (no live TV program scheduled).
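    Since gummiboot is what ended up working, for completeness this is roughly what its loader entry looks like (a sketch, assuming the ESP is mounted at /boot and root is on /dev/sda2, both placeholders). It still boots through the kernel's EFI stub; gummiboot just hands over the command line:

        # /boot/loader/entries/arch.conf
        title   Arch Linux
        linux   /vmlinuz-linux
        initrd  /initramfs-linux.img
        options root=/dev/sda2 rw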

  • Downgrading cups without breaking things

    Hi,
    I think I'd like to downgrade CUPS from 1.6 to 1.5. Since I installed the latest version, I can no longer see any network printers. I found out that this is because printer browsing using the 'cups' protocol is no longer supported as of version 1.6. Apparently they've decided to remove some features in this update rather than add them. An interesting decision, to say the least. Anyway, that's a different discussion altogether.
    The issue I'm having is that the CUPS server here at work runs version 1.3.7, and no avahi etc., so if I want to avoid a lot of hassle configuring all the printers here, I need this cups protocol support.
    So I downgraded CUPS back to 1.5, and things seemed to be working fine again... until I noticed that opening the 'print' dialog in evince would crash the application:
    evince: symbol lookup error: /usr/lib/gtk-3.0/3.0.0/printbackends/libprintbackend-cups.so: undefined symbol: ippSetOperation
    ... so it seems that this is not a viable option.
    So then I gave up and upgraded to 1.6 again. I figured I could just add the printers that I use regularly and leave it at that. But here's the thing I really don't get: when I want to add these printers, I have to provide PPD files. I never had to do this before, and I don't understand why this is suddenly necessary. I definitely don't have time to search for and test PPD files for all the printers here. So in the meantime I've downgraded back to 1.5, I guess I'll use adobe reader for printing, for the time being...
    I'm really kind of baffled by this, first of all because I don't understand their decision to completely remove support for this feature. Perhaps there is some way to recompile cups with this feature enabled? Second of all, I don't understand why I need PPD files now when I didn't before. Is there any way I can avoid this? I can live without printer browsing, as long as I don't have to search for PPD files (in fact I'd already caved and started looking for them, and for some of these printers the results didn't look very promising).
    So to summarise I'm looking for a way to either:
    - add network printers in CUPS 1.6 without having to supply a PPD file - it should 'just work', like it did before!
    - reenable 'cups' protocol support in CUPS 1.6 somehow (if necessary by recompiling it)
    - downgrade to CUPS 1.5 without breaking evince (and probably a bunch of other applications)
    If anybody has any idea how to achieve any of these three things, please do share. Thanks!
    Sander
    EDIT: in retrospect I guess "Applications & Desktop Environments" would have been a more appropriate forum for this post. Feel free to move this topic if necessary. Apologies!
    Last edited by Sander (2012-11-05 17:17:14)
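    One workaround worth noting for the "add printers without a PPD" case: if the queues already exist on the work CUPS 1.3 server, a local queue can simply forward jobs to it over IPP and leave the driver/PPD handling on the server side. A sketch with hypothetical host and queue names:

        # create a local raw queue that forwards jobs to the existing server-side queue
        lpadmin -p workprinter -E -v ipp://printserver.example.com:631/printers/workprinter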

    Sander wrote:But here's the thing I really don't get: when I want to add these printers, I have to provide PPD files. I never had to do this before, and I don't understand why this is suddenly necessary.
    Before that, your CUPS was speaking to another CUPS server, which most likely had the correct PPD files to speak with the printer. Now you're trying to speak directly to the printer (over network). Hence you need the PPD file.
    Sander wrote:I'm really kind of baffled by this, first of all because I don't understand their decision to completely remove support for this feature. Perhaps there is some way to recompile cups with this feature enabled?
    Because Apple bought CUPS and they're not interested in maintaining something they don't use. They're interested in Avahi/Bonjour/mDNS only, so they removed the CUPS discovery/browsing protocol. And they removed it for good, so there's no way to just add a flag to recompile it in. You'd need to restore some deleted files from the Git repository and upgrade them to the current version, which is really not trivial at all.
    Sander wrote:So to summarise I'm looking for a way to either:
    - add network printers in CUPS 1.6 without having to supply a PPD file - it should 'just work', like it did before!
    Can you upgrade your print server to 1.6?
    Sander wrote:- downgrade to CUPS 1.5 without breaking evince (and probably a bunch of other applications)
    The problem is not Evince itself, it's Gtk. See this thread for details. IMHO the best way to fix this with the current version is to recompile gtk3 (and gtk2 if you're using a Gtk2 PDF viewer such as zathura) on your CUPS 1.5 machine.
    If you're using yaourt, this is quite simple to do:
    $ yaourt -G gtk3
    $ cd gtk3
    $ makepkg
    $ sudo pacman -U gtk3-*.pkg.tar.xz
    Yes, it's not really nice to have to do this. But it works.

  • How do I move a .fm to another folder without breaking the .book cross-references?

    Simplified situation:
    I currently have mybook.book containing:
       chapter1.fm
       chap2.fm
       appendixA.fm
       apdxB.fm
    I'd like to reorg the files such that mybook.book contains:
       chapters\chapter1.fm
       chapters\chap2.fm
       appendices\appendixA.fm
       appendices\apdxB.fm
    If I move the files into folders with Windows Explorer, it will of course break the cross-references. I thought I could simply use right-click Rename on the file entries within the book in FM to move them, but it won't allow the / or \ characters. How can I accomplish this reorg without breaking the cross-references?
    Thanks,
    Dave

    TWDaveWalker wrote:
    oh well, looks like I'll have to develop a procedure for the staff.  Does anyone have any non-mif suggestions for a smooth procedure?
    Not sure what the problem here is. You need neither a 3rd party plugin nor any MIF editing. Simply do it this way:
    Create the required folders (chapters/appendices) within your current book folder
    Open the book
    Open the chapter files and do a "Save as" into the \chapter subfolder
    Open the appendix files and do a "Save as" into the \appendices subfolder
    Delete the book file entries and re-fill the book with the newly saved files
    Every "Save as..." keeps xrefs intact, including the new path in the document's references. This reqires some steps, yes. But I think it's still better than doing edits to MIF files by someone who is not experienced doing this.
    Edit: Sorry, it's not a easy as I thought. I forgot that this is done in several steps, and previously saved files don't recognize path changes of files saved afterwards. So this is no solution.
    Bernd
    Message was edited by: Be.eM
    Reason: wrong approach, typing faster than thinking

  • How do I use a path as an argument without breaking the -classpath option?

    How do I use a path as an argument without breaking the -classpath option?
    I have the following Korn Shell Script:
    #!/bin/ksh
        CSVFILE=/EAIStorageNumbers/v1/0001/weekly05222006.csv
        OUTPUTFILE=/EAIStorageNumbers/v1/0001/08000000.txt
        PAGEMBR=0800F341
        cd /EAIStorageNumbers/v1/bin/
        java -classpath com.dtn.refinedfuels.EAIStorageNumbers $CSVFILE $OUTPUTFILE $PAGEMBR
    When I run the shell script, I get the following error:
    Exception in thread "main" java.lang.NoClassDefFoundError: /EAIStorageNumbers/v1/0001/weekly05222006/csv
    Thus, the -classpath option sees the first argument as a classpath. When I remove the -classpath option, I get a different error, which is another issue:
    Exception in thread "main" java.lang.NoClassDefFoundError: au/com/bytecode/opencsv/CSVReader
    at com.dtn.refinedfuels.EAIStorageNumbers.main(Unknown Source)
    Thoughts or suggestions on how I can get the -classpath option to work with the defined Korn Shell Script variables would be greatly appreciated.

    I think you're misunderstanding the classpath argument. This tells java where to look to find your classes. The argument you're supplying looks like the class that you want java to run. Run something like this: java -classpath $MY_CLASS_DIR my.package.MyClass arg1 arg2 arg3.
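    In other words, the class name is sitting where the classpath should be, so java treats $CSVFILE as the class to run. A corrected sketch of the script's java line (the classpath directory and the opencsv jar location are assumptions; point them at wherever the classes and jar actually live):

        # classpath first (class directory plus the opencsv jar), then the main class, then its arguments
        java -classpath /EAIStorageNumbers/v1/bin:/EAIStorageNumbers/v1/lib/opencsv.jar \
             com.dtn.refinedfuels.EAIStorageNumbers "$CSVFILE" "$OUTPUTFILE" "$PAGEMBR"

    Adding the opencsv jar to the classpath is also what resolves the second NoClassDefFoundError for au/com/bytecode/opencsv/CSVReader.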

  • I have my iPhone 4s backed up on my mac but it seems it was encrypted with a password which i do not remember is there any other option to retrieve to the backup without restoring the device as a new one. Also I do not have access to a windows system.

    I have my iPhone 4s backed up on my Mac, but it seems it was encrypted with a password which I do not remember. Is there any other option to retrieve the backup without restoring the device as a new one? Also, I do not have access to a Windows system.

    Sorry, no. If you don't know the encryption password, then you can't use that backup.

  • Hi my boyfriend and I split up and we agreed I would keep the iPad. Since breaking up we are no longer on speaking terms but the iPad is still signed In to his Apple ID. Does anyone know how to reset the iPad without his password?

    Hi, my boyfriend and I split up and we agreed I would keep the iPad. Since breaking up we are no longer on speaking terms, but the iPad is still signed in to his Apple ID. Does anyone know how to reset the iPad without his password?

    No, there is no way to reset the iPad without the Apple ID and password.
    See here: http://support.apple.com/kb/ts4515

  • How do I collect and organize files into a new folder without breaking the links to my media?

    I'm new to premiere pro, and learned that I probably should have figured out where to organize all of my files from the beginning, but in the post production phase, I have a bunch of audio and image files sitting all over my desktop, and I'm afraid that if I move them, I'll break the links to all of my media. Files I edited in CS6 Photoshop and Audition are of special concern (considering they get automatically routed to my desktop).
    Does anyone know how I can collect all of the files and media that were imported into my project and archive them into a folder without breaking the project's links to the original media? Perhaps there's a way to copy all of the files and move them into that folder, so that when I move the originals, I'll still maintain all of the links in the copied version?
    I tried using the project manager tool, but received an "unknown error message" each time I ran it. 
    Any organizational insight would be much appreciated!     

    Read Bill Hunt on ONE project setup idea http://forums.adobe.com/thread/919388?tstart=0
    Also read Metadata contained in folder http://forums.adobe.com/thread/1015001?tstart=0
    -and http://helpx.adobe.com/premiere-pro/using/transferring-importing-files.html

  • Is there a way to rename a template without breaking existing pages using the 'old' template?

    Hi guys,
    I am using CQ 5.4 and I want to refactor existing code of a project. Part of that involves renaming existing templates. Renaming one locally, I saw that existing pages break because they can't 'find' the old template. Is there a way to rename a template without breaking those pages?
    Thanks,
    Alex
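    Renaming the template node does not update the cq:template (and possibly sling:resourceType) property stored on each existing page's jcr:content node, which is why those pages can no longer 'find' the template. A rough sketch of repointing a single page via the Sling POST servlet (host, credentials and paths are placeholders; in practice you would script this over a query for all affected pages):

        curl -u admin:admin \
             -F "cq:template=/apps/myproject/templates/newname" \
             http://localhost:4502/content/mysite/en/somepage/jcr:content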


  • If the delta is broken in my R3 production system

    Hello All,
    How do I initialise the delta if the delta is broken in my R3 production system? The delta is fetching records into more than one ODS. Once it is set up, how do I check that not a single record has been missed?
    Is there any procedure for taking care of the existing records in BW production so that we get all the records from R3 to BW?
    Thanks,

    Hello Sandeep,
    What do you mean by this: "Is there any procedure for taking care of the existing records in BW production so that we get all the records from R3 to BW"?
    Regards,
    Jeannie

  • Remove old kernels without the use of apt-get

    Greetings all, I am new to Linux. I am trying to find a way to purge old kernels without using apt-get, as /boot is entirely full and apt-get has no functionality at this point (I have tried 'apt-get -f' as well). I'll likely resort to trying one of the scripts in the bikeshed package to detect and remove unused kernels, but was wondering if there is another way to accomplish this? Thanks!
    This topic first appeared in the Spiceworks Community
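    A common way out when /boot is so full that apt-get itself fails is to free a little space by hand, let apt finish configuring itself, and then purge the old image packages properly. A sketch only; the version strings are hypothetical examples, and the running kernel (uname -r) must never be removed:

        uname -r                                    # note the running kernel; keep it and the newest one
        ls /boot                                    # see which old vmlinuz-*/initrd.img-* files remain
        sudo rm /boot/initrd.img-3.2.0-23-generic   # example name only; frees enough space for apt to run
        sudo apt-get -f install                     # finish the interrupted configuration
        sudo apt-get purge linux-image-3.2.0-23-generic   # now remove the old kernel package cleanly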
