[Solved] Using 'make oldconfig'

Hi, I'm running my own kernel (mainline). When rebuilding, I use this sequence:
sudo /usr/bin/modprobed-db recall
make localmodconfig
make oldconfig
make nconfig
Now I've come to the conclusion that things aren't the way I thought they were supposed to be.
When, for instance, ISDN support was removed from the previous kernel, shouldn't it be left out by 'make oldconfig'?
Because every time I edit the config with nconfig, these options are still present, and I remove them again.
Now, I'm sure this isn't how it's supposed to work. Am I doing something wrong?
Thanks qinohe
Last edited by qinohe (2014-07-23 12:05:04)

lolilolicon wrote:
Does this `modprobe` a list of modules? It's probably unnecessary, as you can:
make LSMOD=<insert path to file> localmodconfig
where file contains a list of modules, or is an executable that outputs such a list.
Ah, that's a nice one, gonna use it. Yes, it's Modprobed-db, a tool that keeps track of all modules ever used on the system. I could uninstall it; I know all the modules and can add one to the list myself if necessary.
Depends on what your ".config" file looks like at the time you run `make oldconfig`.
Probably not the edited config, but the new one.
Do you have the file ".config" in the directory when you run `make localmodconfig`? If so, did you remember to update the file from which this file is copied? Alternatively, you can bypass creating ".config" before `make localmodconfig`, in which case /proc/config.gz will be used as a base.
A new one is in 'src' but not in the 'build' dir. Should I copy the old config to it?
I need to create a nice workflow for this; gonna try a few sequences and check which is best for me to use, thanks.
edit: adding this line beforehand solved it all:
zcat /proc/config.gz > ~/builds/abs/core/linux/src/${_srcname}/.config
Thanks again lolilolicon
Last edited by qinohe (2014-07-23 12:05:55)
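Putting it together, the rebuild sequence with the fix folded in looks roughly like this (a sketch, not the official PKGBUILD; the paths follow the build directory used above, and reading /proc/config.gz assumes the running kernel was built with CONFIG_IKCONFIG_PROC):

```
cd ~/builds/abs/core/linux/src/${_srcname}
zcat /proc/config.gz > .config   # seed from the running kernel's config
make localmodconfig              # drop modules that aren't currently loaded
make oldconfig                   # prompts only for symbols new in this release
make nconfig                     # optional manual tweaks
```

Because .config is seeded from the previous kernel before localmodconfig and oldconfig run, options removed earlier stay removed instead of reappearing with defaults.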

Similar Messages

  • What are the different errors that can be solved using the RSRV tcode?

    Hi,
    What are the different errors that can be solved using the RSRV tcode?
    I want to know all the errors that can be solved using the RSRV tcode.
    If anybody has a good document about RSRV, please send it to me at
    [email protected]
    Thanks in advance,
    ravi.

    Hi,
    Refer the below links for more details about RSRV TCODE.
    /community [original link is broken]
    http://help.sap.com/saphelp_nw04/helpdata/en/92/1d733b73a8f706e10000000a11402f/frameset.htm
    it's for bw objects consistency analysis and repair.
    from transaction code RSRV doc itself :
    Transaction RSRV: BW Data and Metadata Test and Repair Environment.
    Transaction RSRV checks the consistency of data stored in BW. It mostly examines the foreign key relationships between individual tables in the enhanced star schema of the BW system.
    The transaction interface was re-designed for SAP Portals release 3.0A. A brief guide about how to use the transaction follows.
    Starting the Transaction
    You can reach the test and repair environment
    By entering the transaction code RSRV
    From InfoObject maintenance (Transaction RSD1)
    By clicking on the "Analyze" button in the initial screen.
    After selecting a characteristic in the maintenance screen via the "Processing -> Analyze InfoObject" menu option.
    The Initial Screen
    When using the test and repair environment for the first time, the message "Values were not found for all setting parameters" draws your attention to the fact that there aren't any saved settings for your user.
    After confirming the dialog box, you reach the initial screen of the transaction. This is divided into two parts:
    1. On the left-hand side, you can see a tree structure with the pool of available tests.
    2. The right-hand side is empty at first. The tests you have selected will be put here. A selection of tests is called a Test Package here.
    Combined and Elementary Tests
    An Elementary Test is a test that cannot be divided into smaller tests and that can therefore only be executed as a whole.
    A Combined Test, by contrast, determines which elementary tests are to be executed after the parameters have been entered. You can remove individual elementary tests from the test package before carrying out the actual test run, in order to reduce run time, for example.
    Combining a Test Package and Executing it.
    Firstly, select one or more tests with drag & drop or by double-clicking. Each selected test appears as a closed folder in the view of your test package. (An exception is elementary tests without parameters: these do not appear as a folder.) You can also drag a whole folder of tests from the test pool across to the right-hand screen area; all tests that are located in the hierarchical structure under this folder are then added to the test package. You can also display a short description of the test, if required. Do this by right-clicking on a test and choosing "Description" from the context menu.
    Afterwards, you must supply the tests with parameters. Tests without parameters cannot be given parameters; you are notified of this when selecting them. You can enter parameters by double-clicking on a test (or a test package) or by expanding the folder of the test.
    A dialog box appears in which you must enter the required parameters. Input help is often available. After entering the parameters, a folder with the name "Parameter" is added under the test. This contains the parameter values. The test name can change in some circumstances, enabling you to see at first sight for which parameter values the test is to be executed. It is possible to select the same test several times and give it different parameters, which may even be preferable in some situations. When you have supplied the combined test with parameters, a folder with the name "Elementary Tests" is added under it. It contains the elementary tests from which the combined test is built. You can remove individual elementary tests from the test package using drag & drop.
    After supplying all tests with parameters, you can start the test run by clicking on the "Execution" button. After execution, the test icons change from a gray rhombus to a red, yellow or green one, depending on whether the test had errors, warnings or was error-free.
    Test Results
    The test results are written to the application log. Depending on the settings, the system jumps automatically to this display, or you can reach it by clicking on the "Display" button. The results are saved in the database, and can therefore be compared later with additional test runs.
    In the left-hand side of the window, you can see an overview of the most recent test runs. Double-clicking on a folder displays all messages under these nodes as a flat (non-hierarchical) list in the right-hand screen area. Long texts or detail data may be available for individual messages, which can be displayed with a mouse click.
    Repairs
    Some tests can repair inconsistencies and errors. Automatic correction is generally not possible: if entries are missing from the SID table for a characteristic, and the lost SIDs are still being used in a dimension table (and the corresponding dimension keys are still being used in the fact table) of an InfoCube, you can only remove the inconsistency by reloading the transaction data of the InfoCube. Also note that you must make repairs in the correct sequence. You must always read the documentation for the test and have a good idea about how the error occurred, before making the repairs.
    After executing the test run, go from the application log back to the initial screen to make these repairs. Click on the "Fix Errors" button to start an error run. Since the dataset could have changed between the test and the repair run, the required tests are executed again before the actual repair. The results can be found in the application log once again.
    Test Packages
    The test package is deleted if you do not save it before leaving the test environment. Choose "Test Packages -> Save Test Package" in the option menu. You can do the following via options in the "Test Package" menu:
    Load packages
    Load for processing - the package is then locked against changes by others.
    Delete and
    Schedule execution at a later date or at regular intervals in background processing
    Settings
    In the "Settings" menu option, you can make settings (adjust the size of the screen areas, for example) and save them. The settings are automatically read when starting the test environment. Since the test environment is still under development, support packages are being delivered with additional settings options. A message notifies the user at the start if there aren't any values for the setting options.
    Jobs Menu Option
    You can access the job overview via the Jobs -> Job Overview menu. Use this when you want to check the status of a test package you have scheduled.
    Application Log Menu Option
    You can display old logs from previous test runs in the dialog box, as well as scheduled ones. The option of deleting test logs can also be found here.
    New Selection Button
    The currently selected test package is deleted when you press this button.
    Filter Button
    After a test run, click on this button to remove all elementary tests without errors or warnings from the test package.
    Executing Test Packages in Process Chains
    You can add the ABAP program RSRV_JOB_RUNNER to a process chain in the process chain maintenance transaction, RSPC. To do this, use drag & drop to select the process type "ABAP Program" under "General Services" in the process type view. When maintaining process variants you will be asked for the program name and a program variant. Enter RSRV_JOB_RUNNER for the program name. Choose a program variant name and click on "Change". In the next screen you are able to either change or display an already existing variant, or create a new variant. When creating a new variant you will be asked for the following: the package name (input help is available for this), the detail level at which the RSRV log is to be integrated into the process chain log, and a message type severity at which process chain processing should be terminated.
    The RSRV process log in the process chain is built as follows:
    First is a summary specifying whether errors, warnings, or no errors occurred for each elementary test.
    A log view of the RSRV test package at the specified detail level follows.
    Example: If you choose the value '3' for the detail level, only messages up to and including detail level 3 will be included in the log processes for the process chain. Messages occurring at a lower layer of the test package test are not displayed in this log. Please note that, unlike the application log, the process log does not propagate errors from deep detail levels to low detail levels. For example, if a single detail level 4 error occurs, the summary will show that the relevant test delivered an error. However, this error will not be listed in the second part of the log.
    A complete log is always written independently of the RSRV process log in the process chain. You can view this in the menu option "Application Log->Display Log->From Batch".
    Please note that there is currently no transport object for test packages and that consequently these cannot be transported. Process chains that execute RSRV test packages must therefore be manually postprocessed after a transport to a different system: The relevant test packages must be created.
    Hope This Helps,
    This is already there in SDN.
    Regards,
    rik

  • After "make oldconfig" missing many drivers

    I'm building kernel packages this way: I take the PKGBUILD from 2.6.14 and change it to version 2.6.15RC5. I copy the config from the old kernel into the folder for the new package. Then I start the build. When the new sources are unpacked, I interrupt it and cd to src/linux2614. There I remove all the config and .config files.
    Then I copy the old config file there and run "make oldconfig". Only a few changes to make. But after that the size of the config drops from 49 KB to 25 KB, and when I do a diff I see lots of drivers missing and many changed from "y" or "m" to "n".
    Am I doing something wrong, and which way do our kernel maintainers use?
    AndyRTR

    tomk wrote:
    In the official PKGBUILD, change this line:
    yes "" | make config
    to
    make oldconfig || return 1
    When you run makepkg now, it will go automatically into oldconfig, and continue the build when you're finished.
    Enabling make oldconfig in the PKGBUILD file doesn't seem to give me a chance to move my old .config file to the source directory. I achieved the same by pressing ctrl-z to pause makepkg and then copying my .config file to the appropriate directory. Surely there is an easier way to configure with an old .config and makepkg. This whole process seems a lot more complicated than using the old /usr/src method. There it is just a question of
    make oldconfig
    make && make modules_install
    Couldn't be easier.
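    The /usr/src workflow the poster refers to, spelled out (paths are examples; run inside the unpacked kernel tree):

```
cd /usr/src/linux             # example path to the kernel source tree
cp /path/to/old/.config .config   # seed with the previous kernel's config
make oldconfig                # answer prompts for new options only
make && make modules_install
```

    Note that oldconfig only prompts for symbols that are new in this release; options that still exist keep their old values, while options whose dependencies are no longer met are dropped, which is one reason the resulting .config can shrink.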

  • Install kernel using "make install"

    There has been some discussion on the forums lately about the proper way to compile a custom kernel for Arch. I've been doing some research into options for doing this. Also, the current wiki entry says not to use make install because it runs lilo. Below I'll outline a different way to do it. If others think it is of value, I'll put it into the wiki.
    add the following script into /sbin/installkernel
    #!/bin/sh
    # Arguments:
    # $1 - kernel version
    # $2 - kernel image file
    # $3 - kernel map file
    # $4 - default install path (blank if root directory)
    # define input variables
    # assume default install path is /boot
    # install_path=$4
    kver=$1
    kimg=$2
    kmap=$3
    vmlinuz=/boot/vmlinuz-$kver
    sysmap=/boot/System.map-$kver
    kconfig=/boot/kconfig-$kver
    # check for previous versions
    if [ -f $vmlinuz ]; then cp $vmlinuz $vmlinuz.old; fi
    if [ -f $sysmap ]; then cp $sysmap $sysmap.old; fi
    if [ -f $kconfig ]; then cp $kconfig $kconfig.old; fi
    # install the image and map files
    cat $kimg > $vmlinuz
    cp $kmap $sysmap
    cp $(dirname "$kmap")/.config $kconfig
    # Check for grub, then lilo
    grub_menu=/boot/grub/menu.lst
    grub_update_conf=/etc/grub-update.conf
    if [ -f $grub_menu ] && [ -f $grub_update_conf ]; then
    source $grub_update_conf
    # don't update it if it already exists
    if [ 0 -eq $(grep -sc "vmlinuz-$kver" $grub_menu) ]; then
    cp $grub_menu $grub_menu.bck
    # of course we'll want the newly compiled one to be the default
    default=$(grep -sc "^title" $grub_menu)
    sed -i "s/^default *[0-9]/default $default/" $grub_menu
    echo >> $grub_menu
    echo "# Autogenerated by kernel $kver" >> $grub_menu
    echo "title linux $kver" >> $grub_menu
    echo "root $grub_root_loc" >> $grub_menu
    echo "kernel $grub_boot_loc/vmlinuz-$kver $kopts" >> $grub_menu
    echo >> $grub_menu
    fi
    elif [ -x /sbin/lilo ]; then
    /sbin/lilo -v
    fi
    Add the following to /etc/grub-update.conf if you use grub. Edit the options for your system.
    # grub boot location
    grub_boot_loc="(hd0,1)"
    # grub root location
    grub_root_loc="(hd0,3)"
    # kernel startup options
    kopts="root=/dev/discs/disc0/part4 ro vga=838"
    1.) extract kernel source in directory that is not /usr/src.
    2.) su - root
    3.) mv /path/to/linux-2.6.x to /usr/src/linux-2.6.x-[custom] where custom is whatever you want to use as an extension, for example I use "-wfd".
    4.) Edit the Makefile in 2.6.x-[custom]. Change EXTRAVERSION= to EXTRAVERSION=[custom]
    5.) Optionally copy /boot/kconfig26 to /usr/src/linux-2.6.x-[custom]/.config
    6.) make xconfig
    7.) make bzImage
    8.) make modules && make modules_install
    9.) make install
    This will not alter any of the infrastructure for the current working kernel (unless you already have a 2.6.x-[custom] kernel installed). That way, if you screwed up the new kernel, you can boot into the previous one. Make install will call the /sbin/installkernel script if it exists, and not the /path/to/source/linux-2.6.x-[custom]/arch/i386/boot/install.sh script. That script is what updates lilo even if you're using grub. Also, if you created /etc/grub-update.conf, the grub menu will be updated with an entry for your new kernel.
    If you use grub, it is best, IMHO, to uninstall lilo. That way lilo doesn't accidentally write over the boot sector if you put the installkernel script in the wrong place.
    This process was "loosely" inspired by tools in some Debian packages.
    Please let me know what you think.
    -wd

    #!/bin/sh
    # Arguments:
    # $1 - kernel version
    # $2 - kernel image file
    # $3 - kernel map file
    # $4 - default install path (blank if root directory)
    # define input variables
    # assume default install path is /boot
    if [ -z "$4" ]; then
    install_path=/boot
    else
    install_path=$4
    fi
    kver=$1
    kimg=$2
    kmap=$3
    vmlinuz=$install_path/vmlinuz-$kver
    sysmap=$install_path/System.map-$kver
    kconfig=$install_path/kconfig-$kver
    # check for previous versions
    if [ -f $vmlinuz ]; then cp $vmlinuz $vmlinuz.old; fi
    if [ -f $sysmap ]; then cp $sysmap $sysmap.old; fi
    if [ -f $kconfig ]; then cp $kconfig $kconfig.old; fi
    # install the image and map files
    cat $kimg > $vmlinuz
    cp $kmap $sysmap
    cp $(dirname "$kmap")/.config $kconfig
    # Check for grub, then lilo
    # In the Grub menu you can use two menu entries, one pointing to
    # kernel-newest and one to kernel-previous.
    if [ -d $install_path/grub ]; then
    mv $install_path/kernel-newest $install_path/kernel-previous
    ln -s $vmlinuz $install_path/kernel-newest
    elif [ -x /sbin/lilo ]; then
    /sbin/lilo -v
    fi
    I didn't test it, but it should work. The move will fail if there is no link kernel-newest, but that doesn't really matter.

  • [SOLVED] Is it necessary to use `make $MAKEFLAGS' with a PKGBUILD?

    I'm sorry if this question has been asked many times before, but I'm curious whether it is necessary to run `make $MAKEFLAGS' when compiling a PKGBUILD. The C{,XX}FLAGS specified in my rc.conf seem to be used when I compile packages, but is it necessary to write `make $MAKEFLAGS' rather than just `make' in order to use the flags I've placed in the config file?
    Last edited by deltaecho (2008-12-16 05:36:09)

    Snowman wrote:No, it's not necessary. BTW, if you use custom C{,XX}FLAGS, you should put them in /etc/makepkg.conf, not rc.conf.
      That's what I meant 
    Thanks!
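    For reference, makepkg reads /etc/makepkg.conf itself and exports these variables into the build environment, so a plain `make` in build() already picks up MAKEFLAGS; nothing extra is needed. An excerpt (the values below are only examples):

```
# /etc/makepkg.conf (excerpt; values are examples, tune them for your machine)
CFLAGS="-march=x86-64 -O2 -pipe"
CXXFLAGS="${CFLAGS}"
MAKEFLAGS="-j4"
```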

  • [SOLVED]using dd to make image backup of logical volume?

    I recently got my OCZ RevoDrive working under Arch Linux by creating logical volumes on it, which worked out great. I also got Openbox configured to my liking, but now I want to make an image backup of the RevoDrive's logical volumes into image files to restore later. I was reading the disk cloning wiki about the dd command and just wanted to double-check that the dd commands below would work okay.
    Once I boot into the Arch live USB and run the following commands, it should create an image file on the hard drive mounted under /mnt/backupdata? Sorry for the straightforward question, I just want to make sure I have the right command before I back up data I don't have a copy of.
    dd if=/dev/mapper/VolGroupArray-lvroot of=/mnt/backupdata/test.img bs=16m
    dd if=/dev/mapper/VolGroupArray-lvhome of=/mnt/backupdata/test.img bs=16m
    Also, does anyone have any feedback on what block size to use for backing up to a standard hard drive? Is 16m okay?
    Last edited by itzmeluigi (2013-08-05 06:01:54)

    Oops heh, I would usually rename them to test1.img and test2.img; I was just using it as an example. Once I create the images, would the commands below be the proper way to restore them?
    dd if=/mnt/backupdata/test1.img of=/dev/mapper/VolGroupArray-lvroot bs=16m
    dd if=/mnt/backupdata/test2.img of=/dev/mapper/VolGroupArray-lvhome bs=16m
    Thanks
    Edit: All the commands ended up working perfectly once I changed the lower-case m to bs=16M. After that, the image files were created and worked just fine.
    Last edited by itzmeluigi (2013-08-05 06:01:30)
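    The suffix lesson above is easy to verify with ordinary files: GNU dd wants an upper-case multiplier suffix (bs=16M), while bs=16m is rejected. A round-trip demo (for the real backup, substitute if=/dev/mapper/VolGroupArray-lvroot and of=/mnt/backupdata/root.img, run from the live USB with the LV unmounted):

```shell
#!/bin/sh
# Round-trip demo with temp files standing in for the LV and the image file.
src=$(mktemp) img=$(mktemp)
printf 'demo data' > "$src"
dd if="$src" of="$img" bs=16M 2>/dev/null   # "backup": note the upper-case M
cmp -s "$src" "$img" && echo "round-trip OK"
rm -f "$src" "$img"
```

    Restoring is the same command with if= and of= swapped, exactly as in the post above.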

  • [SOLVED] Using mod_rewrite to make some http pages to https

    I would like to have this url:
    http://myhost.com/index.php?option=com_ … &Itemid=23
    to become the https one.
    This far I have tried this rule:
    RewriteEngine On
    RewriteCond %{REQUEST_URI}  ".*com_weblinks.*"
    RewriteRule ^(.*) https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
    Something is wrong with my voodoo here, but I don't know what...

    Go via https if the URL is: www.example.com/test?foobar
    # Activate the rewrite engine
    RewriteEngine On
    # You only want to switch to https if the query string CONTAINS foobar (after a ? in the URL)
    RewriteCond %{QUERY_STRING} .*foobar.*
    # The client requested /test, so you can match on /test, then EXTERNALLY REDIRECT him back to yourself, but using https this time; the query string automatically gets appended
    RewriteRule ^/test$ https://www.example.com/test [R,L]
    If you also want to change the query string, you have to use a second RewriteRule with [QSA] (QueryStringAppend).
    Regards
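    Applied to the original URL, the likely culprit is that in Apache %{REQUEST_URI} does not include the query string, so a RewriteCond on REQUEST_URI never sees com_weblinks. A sketch matching the query string instead (hostname and pattern as in the question):

```
RewriteEngine On
# The query string is not part of %{REQUEST_URI}, so test it separately
RewriteCond %{QUERY_STRING} com_weblinks
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
```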

  • [SOLVED] Using gdm, how to make commands run when x starts?

    Hello World!
    I am running arch on a Thinkpad T400, and would love to be able to scroll using the middle button and the trackpoint. I found this article: http://www.thinkwiki.org/wiki/How_to_co … TrackPoint
    and it says to insert some commands into .xsessionrc. The thing is, I'm not sure where .xsessionrc is, or whether Arch even uses .xsessionrc (it said it would depend on the distribution). I tried putting the commands in .xinitrc, but apparently that only runs with the command "startx", and I'm using a display manager (gdm). In this thread:
    https://bbs.archlinux.org/viewtopic.php?id=85714
    the person was told to put the script in .kdm, since he was using that display manager. Is there a similar place for gdm where these commands will run at startup?
    If anyone is curious, here are the commands I need to run:
    xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation" 1
    xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Button" 2
    xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Timeout" 200
    xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Axes" 6 7 4 5
    Any thoughts, hints, suggestions, or solutions would be appreciated.
    Last edited by TheGuyWithTheFace (2013-07-16 23:22:39)
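    On Arch, one commonly suggested place for this is ~/.xprofile, which is sourced at login by several display managers, GDM included; a sketch (untested here, device name as reported by `xinput list` on the poster's machine):

```
# ~/.xprofile -- sourced by GDM at login on Arch
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation" 1
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Button" 2
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Timeout" 200
xinput set-prop "TPPS/2 IBM TrackPoint" "Evdev Wheel Emulation Axes" 6 7 4 5
```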

    You should copy the global file to the location cookies indicated, and then modify the local copy.
    https://wiki.archlinux.org/index.php/Aw … ation_file
    Arch wiki wrote:Whenever compiled, awesome will attempt to use whatever custom settings are contained in ~/.config/awesome/rc.lua. This file is not created by default, so we must copy the template file first:
    It's a good idea to leave that template config file alone (the one in /etc/xdg/awesome), so that when you break your local config (and you will, trust me) you have the template file as a backup.
    You might want to read the wiki page for awesome, and also consult the awesome wiki.
    Last edited by 2ManyDogs (2013-07-16 23:39:56)

  • PL-SQL Solve: Using INSTR and SUBSTR

    I am trying to work on this and cannot get a solution. Please help.
    You have to use INSTR and SUBSTR to solve it.
    Question:
    You have the following acceptable values:
    Numeric: 0-34
    80-100
    or non-numeric: X S U D- D D+
    I have to use the INSTR and SUBSTR functions to test that the value is valid (as above) before TO_NUMBER is called:
    SELECT TO_NUMBER('?? ') //HERE ?? and a space (for 100 etc.) stand for the values as above
    FROM DUAL
    WHERE ....INSTR(......)<=;
    (Hence, if the number is valid, the number comes back, or it says no rows.)
    And if it is non-numeric, that should also be tested.
    I am completely unsure about it but tried this
    SELECT TO_NUMBER('34 ')
    FROM DUAL
    WHERE INSTR('0123456789',1,1)<=9 (looking at first number ?)
    AND
    INSTR('0123456789',2,2)<=9
    AND
    INSTR('0123456789',3,3)=0;
    Please help

    We have the following values that we can use:
    Numeric: 0-34 and 80-100 only
    or non-numeric: X S U D- D D+
    We have to use the INSTR and SUBSTR functions to test that the value is valid.
    (For now I am only trying to create a query which can later be put into a procedure.)
    SELECT TO_NUMBER('12 ') //e.g. HERE 12 and a space stand for the values as above
    FROM DUAL
    The WHERE clause looks at all three character positions to make sure the values are correct (the given numbers or non-numeric values only).
    (Hence, if the number is valid, the number comes back (meaning true),
    or it says no rows.)
    If the value is non-numeric, test it so non-numeric values are also allowed.
    I am completely unsure about it but tried this
    SELECT TO_NUMBER('34 ')
    FROM DUAL
    WHERE INSTR('0123456789',1,1)<=9 (looking at first number ?)
    AND
    INSTR('0123456789',2,2)<=9
    AND
    INSTR('0123456789',3,3)=0;
    Something like this has to be done... substr(instr(...), x, x) I think might help.
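    One stumbling block in the attempts above: Oracle's INSTR takes the arguments as INSTR(string, substring [, position [, occurrence]]), so the posted calls have the first two arguments reversed. A sketch of the digit check for a 3-character padded value (the numeric path only; the range test 0-34/80-100 and the non-numeric letters would still need to be layered on top):

```sql
-- INSTR returns 0 when the character is not found, so
-- INSTR('0123456789', ch) > 0 tests "ch is a digit".
-- A trailing space is allowed by appending ' ' to the search set.
SELECT TO_NUMBER(TRIM(val))
FROM (SELECT '34 ' AS val FROM DUAL)
WHERE INSTR('0123456789',  SUBSTR(val, 1, 1)) > 0
  AND INSTR('0123456789 ', SUBSTR(val, 2, 1)) > 0
  AND INSTR('0123456789 ', SUBSTR(val, 3, 1)) > 0;
```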

  • [SOLVED] Makepkg makes qmake or make act weird and causes trouble

    I am trying to build an Arch package from a Qt app. The app is split into some libraries that are compiled with the main application. Installation instructions for the whole app are defined in src.pro (see the hierarchy below), which are then written into the Makefiles that are generated on the fly during the build process. When I run 'qmake && make && make install' manually, everything goes fine. But when I try to build the package using a PKGBUILD, for some reason the Makefile generated from src.pro is missing the installation instructions for the libraries (lib1, lib2, etc.).
    I don't really have any idea of what is the problem. I tried to study the output of the make process and it seems like that when running makepkg, qmake first runs on each subdirectory to generate the Makefiles and only then begins the actual build process. Manually running make first builds one directory, then the second one and so on and only generates a Makefile when entering a directory. Since the libraries marked for installation do not exist in the beginning of the build process, they are not put into the Makefile at all, I think.
    My first idea for a solution was to move the installation instructions into the libraries' own .pro files but that didn't seem to change anything. The build() process in the makepkg only runs qmake, make and make install - what I also do manually.
    The directory structure of the project is like this:
    project/
        project.pro
        src/
            src.pro
            lib1/
                lib1.pro
            lib2/
                lib2.pro
    Last edited by Verge (2011-10-07 19:07:33)

    After many hours I finally came up with a solution that was too easy... I only needed an "install.pri" file in which I added two lines:
    target.path = /path/
    INSTALLS += target
    Next I just included that file in each lib's .pro file and now the makepkg process works. "target" is something of a magical variable for qmake.
    I consider this solved.
    Last edited by Verge (2011-10-07 19:08:22)
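    The wiring might look like this (assuming install.pri sits in src/, next to src.pro; the relative path depends on where you actually put it):

```
# src/lib1/lib1.pro -- pull in the shared install rules
include(../install.pri)
```

    With target.path set and INSTALLS += target in install.pri, qmake emits the install rule into each generated Makefile at qmake time, before anything is built, which is why this survives makepkg's up-front qmake pass.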

  • [Solved] Using Testdisk to recover disappeared files [exFAT]

    Hi,
    I have a 3TB external HDD, a WD MyBook, consisting of a single exFAT partition. It was intended for files that we rarely need; exFAT was chosen so all computers in our Linux/Win/Mac household could use it directly. I bought the drive 8 months ago and pretty much filled it up at the beginning with software & movies (nothing mission-critical, but nevertheless stuff I wanted to keep); that is, apart from this initial bulk write it has only seen very occasional use (like once a month).
    Today I reattached the drive for the first time in over a month, to the Arch machine it’s used with 90% of the time. It mounted just fine and I successfully retrieved a file that I needed, but I noted that a folder appeared empty.
    ls: reading directory .: Input/output error
    Now, this folder contained 2 TB! The disk usage read as if the data was still present, i.e., the disk appeared to be almost full.
    I attach the drive to my Mac and run both a quick and full SMART test (using Western Digital Drive Utility), and verify the partition table (Mac OS Disk Utility), all three are passed – no errors; but again the folder appears empty.
    So I figure maybe Microsoft can handle this exFAT stuff and attach it to a Windows machine (told you it’s a multi-platform household ). It immediately “encounters a problem” with the disk and offers to repair it. I accept and it quickly supposedly succeeds – but still the files are missing; the only difference: the “phantom” disk usage of the lost files is gone, the disk is now reported to be fairly empty.
    So what do I do now? I’ve looked at the file recovery wiki entry, and if no other solution emerges, I’ll try my luck with PhotoRec. However, I feel like I’m in a somewhat different situation: As far as I can tell, the HDD is in good physical condition, has no file system problems, I didn’t accidentally delete the files or format the disk, etc. I have a feeling the data isn’t damaged, but simply not visible.
    More than anything I’m wondering: Why the heck did my data disappear? Any suggestions other than PhotoRec? Also, should I now distrust the drive, even get rid of it? It’s less than a year old.
    Thanks for any help, it’s much appreciated.
    [Update]
    Okay, so my working hypothesis is that the Mac OS exFAT driver is at fault and corrupted the partition table (this apparently happens a lot). I’ve mounted the drive read-only and am using testdisk – which has already found two data partitions. I know nothing about testdisk, but maybe it can recover the correct table…? I’ll update as I go along.
    [Update 2]
    Yes! Testdisk is able to list some if not all of the lost files! I’m following the TestDisk step-by-step guide to try to recover the files. (Feel free to chime in with advice; I’ve updated the title accordingly.)
    [Update 3]
    Looks like Testdisk is recovering the files just fine! I’ll mark this as solved. Lessons learned: Testdisk is great; exFAT sucks. (… And it’s 2015 and there is still no file system with solid out-of-the-box cross-platform support.)
    Last edited by tjanson (2015-01-18 17:28:02)

    Dear Customer,
    Welcome and Thank you for posting your query on HP Support Forum
    It looks like you are not able to boot into the Operating System of your Notebook
    Please Contact HP if your HP Notebook is under warranty, HP would replace the Hard Disk Drive and provide you the
    Recovery media to restore factory operating system after replacement (if you've not yet created Recovery Discs/USB Media)
    According to the message you have posted, you are getting a "Short DST Failure"; in that case it's time to replace the Hard Disk Drive with a new one
    Back up all your personal data to an external drive if possible. Otherwise you could connect the failed HDD via a SATA-to-USB adapter to another PC, or to the same PC after replacement of the HDD and re-installation of the OS, and try copying/recovering the files.
    I strongly recommend you contact HP Support over the phone immediately, without any delay, to get your Notebook diagnosed and serviced by an authorized HP Certified Engineer
    You can Check your warranty Here to verify the status and Click Here to order a new Hard Drive
    Hope this helps, for any further queries reply to the post and feel free to join us again
    **Click the KUDOS star on left to say Thanks**
    Make it easier for other people to find solutions by marking a Reply 'Accept as Solution' if it solves your problem.
    Thank You,
    K N R K
    Although I am an HP employee, I am speaking for myself and not for HP

  • [SOLVED] Use the nouveaufb as framebuffer and the nv driver for Xorg

    Hi! I'm using Arch on a JAMMA arcade cabinet (one of those old arcade cabinets), so I need some strange video resolutions to suit my needs.
    The best option is 640x240 @ 60Hz vertical and 15kHz horizontal. Arcade screens must have a horizontal frequency of 15kHz or I'll fry my screen! Luckily I use a card called J-PAC that cuts off the video signal if it is not 15kHz.
    With Xorg there's no problem if i use the nv driver, using this Modeline do the trick:
    # 640x240x60.00 @ 15.240kHz Perfect screen mode for Xorg
    Modeline "640x240x60.00"  12.192000  640 656 720 800  240 244 248 254  -HSync +VSync
    I've found this modeline using a little program called umc: http://aur.archlinux.org/packages.php?ID=27550
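    For completeness, here is roughly how that modeline slots into xorg.conf; the Identifier names are placeholders I made up, and the rest should match your existing config:

```
Section "Monitor"
    Identifier "ArcadeMonitor"
    # 640x240x60.00 @ 15.240kHz, the modeline generated by umc
    Modeline "640x240x60.00"  12.192000  640 656 720 800  240 244 248 254  -HSync +VSync
EndSection

Section "Screen"
    Identifier "ArcadeScreen"
    Monitor    "ArcadeMonitor"
    SubSection "Display"
        Modes "640x240x60.00"
    EndSubSection
EndSection
```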
    Xorg, however, crashes randomly when I use the same modeline with the nouveau driver, sometimes at startup, other times during the Xorg session. I use some "no EDID" options that let me use strange video resolutions, but I don't know whether this bothers the nouveau driver or not; it surely doesn't bother the dear old nv driver.
    BUT WAIT!!!!
    The nv driver can be loaded only if the nouveau driver is not loaded!
    So if I use the nv driver for Xorg I lose nouveaufb, and nouveaufb is the only one that works for me, because I can't force a non-standard video resolution like 640x240@60Hz on vesa or uvesafb! At least this is what I've learned from the docs; tell me if I'm wrong: http://git.kernel.org/?p=linux/kernel/g … vesafb.txt
    For now the only thing that makes the framebuffer work at 640x240 is this: video=DVI-I-1:640x240@60eMm, but this kernel parameter works only with KMS and nouveaufb.
    So my question is: can I use the nouveau driver for the framebuffer only, and the nv driver for Xorg?
    Or is there another approach?
    I remind you that I need a resolution of 640x240 both on the framebuffer and in Xorg; I'm using an old GeForce4 Ti4600, which is supported by nouveau (chip NV20).
    Thanks for your patience and your time. 
    Bye for now.
    Last edited by marcs (2011-06-28 12:05:13)

    Hi
    It seems (well, for now everything is stable) that I have a more stable situation loading the nouveau module directly from the initramfs.
    I'm guessing this is the solution; it's the only thing I've changed, and now fbsplash also works at startup. (yay!)
    To document the solution, you can do the same like this:
    1. Edit the /etc/mkinitcpio.conf file with nano, vim or whatever.
    2. Add the nouveau module to the MODULES="" line, like this: MODULES="nouveau"
    3. Add the modprobe.conf file to the FILES="" line, like this: FILES="/etc/modprobe.d/modprobe.conf"
    4. OPTIONAL: If you are using fbsplash like me, you must also add the fbsplash script to the HOOKS, like this: HOOKS="base udev fbsplash ..."; look here: https://wiki.archlinux.org/index.php/Fbsplash
    5. OPTIONAL: On an arcade cabinet the best text font is the tiniest possible, like an 8x8. I'm using cp865-8x8, which is much better than Agafari-16 (that one is good for high resolutions). To use this console font as soon as possible, you can also add consolefont to the HOOKS, like this: HOOKS="...[stuff before] consolefont"; look here: https://wiki.archlinux.org/index.php/Fo … fault_font.
    6. Then you must rebuild the initramfs images by running: mkinitcpio -p kernel26 (or whatever kernel you want to use as a model; I'm using kernel26-fbcondecor).
    To use fbcondecor, you can find the kernel26-fbcondecor package from AUR: http://aur.archlinux.org/packages.php?ID=15603 (please support this package)
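    Putting steps 1-6 together, the relevant lines of /etc/mkinitcpio.conf end up looking something like this (the exact HOOKS order is illustrative; keep whatever other hooks your system already needs):

```
# /etc/mkinitcpio.conf -- relevant lines only
MODULES="nouveau"
FILES="/etc/modprobe.d/modprobe.conf"
HOOKS="base udev fbsplash autodetect pata scsi sata filesystems consolefont"
```

Then rebuild the image with mkinitcpio -p kernel26 as in step 6.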
    Xorg seems very stable for now. I'm using the nouveaufb framebuffer with fbsplash on an old arcade cabinet, and that's some fine eye candy man!
    PROBLEM SOLVED!
    BUT WAIT!!!
    I still have to know what the difference is when loading nouveau from the initramfs... I don't know whether the problem is related to nouveau, KMS, fbcondecor, fbsplash, or the kernel; I'm not good enough to tell.
    Can any kernel guru tell me what the basic difference is between loading a module from the initramfs and letting the kernel load it, apart from the fact that the initramfs loads it a bit earlier? Are there also differences between memory areas?
    I'll send a bug report with steps to reproduce, but I guess my configuration is a bit out of the mainstream...
    Thanks to anyone who has taken a look at this post.

  • [SOLVED] Using a NETGEAR WN111v2 USB network adapter with Arch Linux

    Hello!
    I just recently bought the adapter mentioned in the subject, and hoped to get it working with my lovely Arch Linux OS. (I had read somewhere online that it should work one way or another: via ndiswrapper or a kernel driver.) However, I've yet to get it working. I tried the tips I found:
    Here:http://ubuntuforums.org/showthread.php? … ht=WN111v2
    which links to here: http://ubuntuforums.org/showthread.php?t=885520
    Basically, it tells me that I should use ndiswrapper with the arusb_xp drivers provided by NETGEAR. So I place the three files arusb_xp.inf, arusb_xp.sys and arusb_xp.cab in a folder, and run:
    sudo ndiswrapper -i arusb_xp.inf
    ndiswrapper -l
    arusb_xp : driver installed
    device (0846:9001) present
    sudo ndiswrapper -m
    sudo modprobe ndiswrapper
    This should install my drivers, add an alias in modprobe.d/ndiswrapper saying "alias wlan0 ndiswrapper", and load ndiswrapper itself... right?
    But after running iwconfig I can still see only eth0 and lo, no wireless interfaces at all. I checked lsmod but couldn't find any conflicting drivers loaded.
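    For anyone debugging the same thing, a generic checklist I'd run after the modprobe (none of this is specific to the WN111v2):

```shell
# Is the ndiswrapper module actually loaded?
grep -q ndiswrapper /proc/modules && echo "ndiswrapper loaded" || echo "ndiswrapper NOT loaded"
# Which network interfaces does the kernel know about? (wlan0 should appear here)
ls /sys/class/net
```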
    Anybody got an idea why it worked for the people with Ubuntu and not me? Any and all help greatly appreciated!
    Cheers
    EDIT: For some magical reason, the drivers that came with my adapter did NOT work, while the drivers from the second link, with the same names, DID. I have not inspected how they differ, but luckily, they work. Yay!
    Last edited by mariusmeyer (2009-05-07 08:52:21)

    I have the same computer and the same OS, and I want to do the same thing. Have you figured out if this works yet?
    I was told that the new AirPort Extreme cards won't work in older computers, and that I'd have to find an older AirPort card on eBay because they don't make them anymore.
    Message was edited by: xacharias

  • *PROBLEM SOLVED: Using "java" command in JSDK1.4.1

    I had a problem before when using the java command with JSDK 1.4.1.
    What I just realised/found is that there are different situations
    for using the "java" command. I will give you some tips in
    case you are running into the same problem, or maybe to help
    another person:
    RUNNING FILES (.java) THAT ARE NOT PART OF A PACKAGE
    1)Let's say you have a "test.java" file in "C:\APP\SUB\"
    2)You can compile it from inside or outside the folder that
    holds "test.java" (i.e. SUB)
    3)After it is compiled, you can ONLY run the "test.class" file
    from within the folder that holds the CLASS file, i.e. "SUB"
    You can not be in the "APP" folder and type:
    "java SUB.test" NOR "java SUB/test"
    RUNNING FILES (.java) THAT ARE PART OF A PACKAGE
    1)Let's say you have a "test.java" file in "C:\APP\SUB\COM\"
    2)Now let's say that the file includes the "package SUB.COM;" line
    in the code, which makes it part of a package.
    COMPILING
    3)If the "test.java" file does NOT use another file in the same
    folder, you can compile it from wherever you want, either from
    inside or outside the folder holding the file
    4)If the "test.java" file DOES use another file in the same folder,
    then you have to "javac" it from a folder outside the
    folder that is holding the "test.java" file
    EXAMPLE:
    If I am in folder "APP" I can do this: "javac SUB/COM/test.java"
    Or if I am in "SUB" I can do: "javac COM/test.java"
    BUT if I'm in "COM" I can NOT do: "javac test.java"
    RUNNING
    5)Make sure that you have produced a "test.class" file when compiling
    6)Recall that the "test.java" file includes the line "package SUB.COM;",
    which means that it is part of this package and that it is stored
    inside "COM"
    7)You can ONLY run the "test.class" file from the FIRST folder above
    the package that the "test.class" file is part of,
    i.e. from inside the "APP" folder. You CAN NOT run the file from
    anywhere else but the first folder (APP) above the package (SUB.COM)
    8)EXAMPLE:
    To run "test.class" I must be in: "C:\APP"
    I can NOT be in "C:\" nor "C:\APP\SUB" nor "C:\APP\SUB\COM"
    Then type in the command window: "java SUB.COM.test" and..
    *** ESO ES TODO AMIGOS!! **** (THAT'S ALL FOLKS)
    I hope this helps other people. Some factors like environment
    variables may change what I just stated, though.
    On my computer running "Windows 2000" my variables are:
    1)JAVA_HOME: C:\j2sdk1.4.1
    2)path: <other paths>;C:\j2sdk1.4.1\bin;
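    To make the directory rules above concrete, here is a hedged Unix-shell sketch of the same layout (APP/SUB/COM mirrors the post's C:\APP\SUB\COM; the javac/java commands are left as comments because they need a JDK on the PATH):

```shell
# Recreate the layout from the post: the packaged class lives in APP/SUB/COM
mkdir -p APP/SUB/COM
cat > APP/SUB/COM/test.java <<'EOF'
package SUB.COM;
public class test {
    public static void main(String[] args) {
        System.out.println("hello from SUB.COM.test");
    }
}
EOF
# Compile (works from APP because there are no same-folder dependencies):
#   cd APP && javac SUB/COM/test.java
# Run ONLY from APP, the first folder above the package root:
#   cd APP && java SUB.COM.test
```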
    THANK YOU TO ALL THE PEOPLE WHO WERE TRYING TO HELP ME SINCE YESTERDAY
    my email: [email protected]

    Yeah, he did get a lot of it wrong but oh well, at least he's putting forth some effort. If you need any help alex, just send me a note. Everyone should also look into using Jakarta Ant for even your simple applications, it makes all these little directory issues go away real fast. If you need a generic build.xml file that has nice features, again, just send me a note.
    -Spinoza

  • [SOLVED] Using 32-Bit OpenGL Applications on a 64-Bit System...

    I enabled multilib so I can install some 32-bit programs I like to use, and I'm having an issue with OpenGL.  I decided to test it with ZSNES, and here's what I get in the terminal when I switch it to a video mode that uses OpenGL after it crashes...
    libGL error: failed to load driver: i965
    libGL error: Try again with LIBGL_DEBUG=verbose for more details.
    zsh: segmentation fault (core dumped) zsnes
    I just built this computer and it's the first time I've used a 64-bit system, so I'm not sure what I need to do to solve this.  I imagine that I need to install 32-bit drivers, but I don't know what I need to install exactly and I don't know if there's any specific way I need to go about it.
    Last edited by rzrscm (2013-04-30 09:56:01)

    David Batson wrote:Have you installed libva-intel-driver that provides i965 ?
    Yes, it's installed...Not sure if I have to configure anything for it.  My video card is the integrated GMA x4500 if that helps with solving the problem.
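    In case it helps anyone landing here later: with multilib enabled, a 32-bit program needs the 32-bit DRI driver alongside the 64-bit one. A hedged sketch (the package name is from the multilib repo of that era and may have been renamed or merged since; check pacman -Ss lib32-mesa for the current split):

```
pacman -S lib32-intel-dri          # 32-bit i965 DRI driver (assumed package name)
LIBGL_DEBUG=verbose zsnes          # rerun to see which driver path libGL tries to load
```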

Maybe you are looking for

  • How do I authorize an iTunes account to re-download music on my new iMac?

    I have had to get a new mac and i would like to re-download my purchased music onto it. Please could someone tell me how to do this? Thanks.

  • Delivery date is not coming properly in PO Printout

    Hi, I have a problem in PO Printout that Delivery date is not coming properly… In my PO, there are 5 line items.  If the Delivery date of all line items are same like 01.04.2009, then Delivery date should be shown 01.04.2009 in PO printout.. IT IS

  • Trying to underline the rectangle of a Text Field

    I am trying to underline the rectangle of a Text Field with a Line Annotation, but I am a little confused. The first doubt is about the "Square" vs, the "Rectangle" annotations. What is the difference between them? Has Adobe changed those recently? I

  • Question on multi-node R12 install

    I assume that for a Multi-node install of R12, we have to run rapidwiz on nodeA and choose "Install Oracle Application release 12.1.1" , choose the appropriate services for this node ex: "Batch Processing" then once the install is done, copy the conf

  • Adobe Premiere Elements 9 - Black Border

    My videos are boxed around with black. Like the video is in the center and around the video is just blackness. I don't know how to make it fullscreen. Can you help? I have Adobe Premiere Elements 9 and can't seem to get the correct resolution. I am a