Variable scope within an interface and some questions

I am creating a DataStorage interface with a member variable (can I call it a member variable when it is in an interface?) called dataObject. From what I remember, and from what the compiler is telling me, the access modifiers "private" and "protected" cannot be used on either methods or member variables in an interface.
Now I have another class, DataInput, that implements DataStorage, and I want to make dataObject private and add accessor methods to get to it. But I also remember reading that during inheritance you cannot make the access of a method (or a variable?) narrower. Meaning, if a method has default access in the parent class, the subclass cannot override that method and suddenly give it private access.
Now a couple of questions with this... Does that rule apply only to subclassing (that is, with extends), or does it also apply to implementing interfaces? Second question: I am pretty sure that rule applies to methods being overridden, but does it also apply to variables in an interface? Because if it does, I can't re-declare the variable as private in the class that implements the interface... and then there is no way to protect that variable.
By the way, I did re-declare the variable in the subclass (implementing class) and declared it as private, and it works... but I'm still trying to figure out the general rule.
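Here is a minimal sketch of both rules (store() is just a made-up example method; only DataStorage, DataInput, and dataObject are names from my actual code):

interface DataStorage {
    Object dataObject = new Object(); // implicitly public static final

    void store(); // implicitly public abstract
}

class DataInput implements DataStorage {
    // Does not compile if uncommented: "attempting to assign weaker access privileges".
    // The no-narrowing rule applies when implementing an interface just as it does
    // when overriding a method inherited via extends.
    // private void store() { }

    public void store() { } // OK: same (public) access as the interface method

    // Legal: fields are never overridden. This declares a brand-new field that
    // merely shadows DataStorage.dataObject inside DataInput.
    private Object dataObject = new Object();
}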

Well, if interface variables are implicitly public static final, how come I am able to re-declare the variable name within the class that implements the interface?
If, for example, interface Car has a variable color, how come in class Honda implements Car I can redeclare the variable color and give it public access? I thought final meant final, no redeclaring... but does that not hold for static variables?
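A minimal sketch of the Car example, showing what the compiler actually accepts (I'm assuming color is a String purely for illustration):

interface Car {
    String color = "red"; // implicitly public static final: the constant Car.color
}

class Honda implements Car {
    // This never touches the final constant Car.color. It declares a new field,
    // Honda.color, whose name simply hides (shadows) the inherited constant.
    public String color = "blue";
}

class ShadowDemo {
    public static void main(String[] args) {
        Honda h = new Honda();
        System.out.println(h.color);   // "blue" - the shadowing instance field
        System.out.println(Car.color); // "red"  - the final constant, unchanged
        // Car.color = "green";        // would not compile: cannot assign a value to a final variable
    }
}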

Similar Messages

  • For the new iPhone 5s - is there no longer an option to edit iMessages without deleting the whole stream?  There are some things I need to keep within a stream and some that aren't necessary, and it looks like there is no editing within a conversation?

    Now I also know about pulling the message to the left to see the time stamp.  Good to know, although I preferred it being right where I could see it without having to do a swipe or keystroke. 

  • Isight megapixels and some questions

    First off, I want to know how many megapixels the iSight camera has in the MacBook 2.4 GHz version (I think they are all the same, but I wanted to be specific). I might have missed this while reading up on it, but answers would be appreciated. Also:
    -do you consider the camera good?
    -(kinda silly and dumb question but I like to be sure) can AIM users see me using it in iChat?

    Hello optar
    (1) I believe that new MacBooks (other than the MacBook Air) have a 1.3MP camera. To be CERTAIN, or to test whether a pre-owned MacBook has the 1.3MP camera or the older 0.3MP version, use the test explained on this page.
    (2) I consider either version of iSight a fine webcam. Either works well for video chat, snapshots for your system ID use, and even small, email-sized pictures. When you want prints or large, high-quality snapshots or movie clips, you will see much better images from your digital still camera or camcorder.
    (3) When you get your Mac, searching its iChat > Help for "Chatting with AIM buddies" (without the quotation marks) will assure you that iChat can connect with AIM. However, this search of the Mac OS X v10.5 Leopard > iChat and iChat Sharing Forum included the following post that tells of significant audio problems with AIM at present time:
      http://discussions.apple.com/thread.jspa?messageID=6393396&#6393396
    The "Help" built into Macs really is helpful. It should be your first stop for most questions. Using your iChat > Help rather than only using the web help pages has the advantage of showing many related topics such as "About video chatting with AIM buddies. Don't be confused by the statement in this subject that says: "AIM does not support audio chatting..." That phrase means that a Windows PC using the AIM messenger software for Windows cannot do an audio only chat like iChat users can when using iChat between Macs. However, AIM does offer audio as part of its video connection, so, barring the problems noted above, you can both see and hear each other when you use iChat to video chat with a Windows PC user.
    If you want to learn more about iChat, see iChat Support. Start with the "Mac 101: iChat" link. Other Mac 101 topics may also be of interest.
    Note that the "Discussions forums" link in iChat Support is for the older versions of iChat. Unless you are considering a Mac that uses Mac OS X 10.4 or earlier, you will want the iChat and iChat Sharing Forum. This forum covers the new iChat version that works with Leopard (Mac OS X v10.5).
    EZ Jim
    PowerBook 1.67 GHz w/Mac OS X (10.4.11) G5 DP 1.8 w/Mac OS X (10.5.2)  External iSight

  • Fresh Install of iTunes and some Questions

    I just did a fresh install, set up home sharing, and did an iCloud match so my whole library is in the cloud.  I have my library file stored locally but my music files on my NAS.  Here are my questions:
    1.  I want to install iTunes on my wife's computer, set up iTunes Match for her, and set up sharing.  She will have a "shell" setup where nothing is stored, as she is sharing with me and accessing the cloud.  Do I need to set up that computer under my user ID even though she has her own?  I figure that's the only way to get her to the iTunes Match that I have?
    2.  How do I do this all?  Meaning, I will have a local library file for her but I need to point her laptop to the NAS with the files on it.  If I change her iTunes media folder to that location on the NAS and check keep organized, will it screw up the files I have there?  I noticed when I did that on my PC it moved some files around and I had to reassociate lost files to songs.  Took hours.  Would iTunes have done that automatically for me somehow?  Should I not check keep organized for her as she will not be adding music anyhow?  What is the harm either way?
    3.  How do I see my pictures on my ATV unit?  I thought I used to be able to but don't see an "app" on the ATV to do so, just photo stream which I do not want to do.  I thought I used to be able to access the pictures stored on my NAS?  Am I dreaming this or did it go away?  My NAS is an iTunes server and I swear I used to be able to access them from my ATV?    All I see on my ATV under my library is music, podcasts and movies. 
    4.  I have my NAS sharing (home sharing) and I can see it on my laptop with iTunes installed.  When I click on it I see only music, no movies or anything else; why is that, and is it something to do with home sharing?  But, like I said, I can see these movies on ATV under my library.  I can also see my playlists on the share on my laptop but nothing is in them.  I think that may be because they were set up under another library and now that I have a new library file, they don't carry over?  I don't see any of said playlists in the standard iTunes interface under playlists, just under the shared library, however like I said, no files in them.  Also, the share comes and goes, as in sometimes I see it and sometimes I don't, however I am always able to access the NAS from iTunes.  Strange, maybe network issues?
    5.  Along those same lines, sometimes the home share just up and disappears from ATV.  Maybe a network issue?
    6.  Lastly, I just did my first backup/sync to my new laptop with the fresh install of iTunes on it.  It gave me some sort of message that I can only sync to one computer and did I want to overwrite or whatever?  I said no and it did nothing.  After that, I did a back-up, however it seemed to go fast, and I also did a sync.  The sync was three steps, it used to be like six, and I did not notice any of my new apps being copied to it.  Something is wrong here but not sure what to do, ideas?
    Thanks in advance for any and all help.

    #4 - make sure you are sharing out all the items in your iTunes library (music, movies, etc.)
    you should be able to do that under preferences
    you may also want to check if it is pointing to the correct library
    #1 - home sharing should work on the LAN without the need for signing in; the shared libraries should appear under "Shared" in the left sidebar of iTunes
    I think this may resolve the issues you have on #2 as well
    #2 - iTunes doesn't seem like it's built for that, but if you have any experience, I'd like to hear it.  This seems to be why home sharing is usually the best route to go, with just one instance of iTunes managing the library
    but as long as the NAS or any media drives are shared out, connecting to it should not be a problem.  It's the writing to the iTunes library that's the issue.  I haven't used Match so can't help there.
    depending on your OS, you can either ctrl-click or option-click on the iTunes icon,
    which will also give you the option of selecting a library location upon launching iTunes
    you can navigate to the location of the library file and point it to that.
    technically, you could do that, but I believe the actual library has to be stored locally
    as long as you still have all the media, you can just rebuild the library
    but for locally shared items, as in, all inside the home, I would leave iCloud out of the equation
    just use iTunes sharing and it should be fine

  • Howdy, and some questions

    I have been building my own system with Linux From Scratch for five years, but after two weeks of struggling with the latest Gnome I decided I'm gettin' too old for this stuff.  Arch looks like the perfect next step - plenty of control over what gets installed, but no more headaches with source builds.
    So anyway, I have a few newbie questions...
    Grub was giving me grief (say that three times fast) when it tried to load the root system on /dev/hda6 because it could not find /dev/hda6 - well, yeah, that's because the Arch kernel does some weird translation of IDE drives into pseudo-SCSI, so in the /dev tree it's called /dev/sda6. I changed menu.lst to load the root from /dev/sda6 and it still wouldn't cooperate... I get some error about kinit: Cannot open root device.
    Bear with me, I'm getting to the question eventually... anyhow, I finally gave up and copied my LFS kernel over to /boot and restored the menu.lst from the LFS partition.  My question is, is there any reason not to build and use my own kernel or will this give pacman fits when it looks for updates?
    Also, I notice bash is not loading the initial path from /etc/login.defs.  It seems to have hard-coded a path that includes /usr/gnu/bin and the like.  How can I get it to read /etc/login.defs?  For now I'm working around the problem by hard-coding paths in /etc/bashrc.
    Finally... what happened to my backspace key?  I'm loading a standard us keyboard, and I don't think I had to do anything special in LFS to make the backspace work, but now I just get funky control characters when I press backspace.
    I'm sure I will have more questions when I start installing other packages (had to chroot to my LFS partition to get into firefox and log on here) but so far I'm happy and I have a three-day weekend ahead of me to tweak Arch within an inch of its life.
    Peter B. Steiger
    Cheyenne, WY

    Howdy,
    I think there's some bug in the installer on the latest ISO, thus the problems (I wonder how that actually slipped through, it's quite a grave bug after all). sdX notation comes from the in-kernel move to a newer subsystem, called libata (common "architecture" for both IDE and SATA drives). You probably would be alright if you booted using kernel26-fallback.img and then ran "mkinitcpio -k 2.6.22 -g /boot/kernel26.img".
    You can safely run your own kernel. That's the beauty of Arch - you can generally do whatever you want to do with it, you just get some nice additions like a package manager, package repositories and a nice community.
    Are you referring to $PATH, which is set in /etc/profile and /etc/profile.d, or some other thing?
    Never got any problems with a backspace in the console in Arch. Perhaps some $TERM issues.
    Enjoy your stay in the Arch world.

  • Cisco VUSB - Feature Suggestions (and some Questions)

    Hello,
    I am using Cisco VUSB with my EA4500 (N900) Wireless Router for my (Samsung ML-1915) USB printer.  It appears that I've had no issues with install and configuration with this on both my Windows 7 Gaming PC and my OS 10.8 MacBook Air.  But I've noticed a few things I would like to recommend as features to add.  Some of these are also questions, that I couldn't find the option/feature for... if so, please let me know where it is (if available).
    - (OS X specific) Allow the application to run as a Menu Bar Extra (similar to DropBox and others).  No need to take up space in my Dock when it's always supposed to be running... having it always active and easily accessible from the Menu Bar would be perfect.
    - If I start a print job from within an application, will it automatically try to connect to the printer via VUSB?  Or do I always need to open the VUSB app and manually connect to the printer before starting the print job?  Could this become automated?
    - I noticed if one computer is connected via VUSB, that the other computer can't connect.  Is there a way to remotely kick out the other computer if not active?  Or, is there a way to setup a 'timeout' if the VUSB connection has been idle for a set time (eg. 5 minutes)?
    Otherwise it looks like this is working great.  Using Bonjour on Windows for Printing to my old Apple Airport was a complete pain and never ended up fully working.  Really happy to find a solution that works with my existing printer, and is easy to setup on both Windows and Mac.

    Hi,
    I agree that with our Macs, a shortcut on the menu bar would be helpful for us users to access this feature.
    Regarding VUSB not being automatic, I believe that the "print job taking turns" idea is part of the product design.
    You might need to get a print server if you want no software installed on your PC and no interruptions on your network.

  • LR 4.4 (and 5.0?) catalog: a problem and some questions

    Introductory Remark
    After several years of reluctance, this March I changed to LR due to its retouching capabilities. Unfortunately – beyond enjoying some really nice features of LR – I keep struggling with several problems, many of which have been covered in this forum. In this thread I describe a problem with a particular LR 4.4 catalog and ask some general questions.
    A few days ago I upgraded to 5.0. Unfortunately it turned out to produce even slower ’speed’ than 4.4 (discussed – among other places – here: http://forums.adobe.com/message/5454410#5454410), so I rather fell back to the latter, instead of testing the behavior of the 5.0 catalog. Anyway, as far as I understand this upgrade does not include significant new catalog functions, so my problem and questions below may be valid for 5.0, too. Nevertheless, the incompatibility of the new and previous catalogs suggests rewriting of the catalog-related parts of the code. I do not know the resulting potential improvements and/or new bugs in 5.0.
    For your information, my PC (running under Windows 7) has a 64-bit Intel Core i7-3770K processor, 16GB RAM, 240 GB SSD, as well as fast and large-capacity HDDs. My monitor has a resolution of 1920x1200.
    1. Problem with the catalog
    To tell you the truth, I do not understand the potential necessity for using the “File / Optimize Catalog” function. In my view LR should keep the catalog optimized without manual intervention.
    Nevertheless, when being faced with the ill-famed slowness of LR, I run this module. In addition, I always switch on the “Catalog Settings / General / Back up catalog” function. The actually set frequency of backing up depends on the circumstances – e.g. the number of RAW (in my case: NEF) files, the size of the catalog file (*.lrcat), and the space available on my SSD. In case of need I delete the oldest backup file to make space for the new one.
    Recently I processed 1500 photos, occupying 21 GB. The "Catalog Settings / Metadata / Automatically write changes into XMP" function was switched on. Unfortunately I had to fiddle with the images quite a lot, so after processing roughly half of them the catalog file reached the size of 24 GB. Until this stage there had been no sign of any failure – catalog optimizations had run smoothly and backups had been created regularly, as scheduled.
    Once, however, towards the end of generating the next backup, LR sent an error message saying that it had not been able to create the backup file, due to lack of enough space on the SSD. I myself found still 40 GB of empty space, so I re-launched the backup process. The result was the same, but this time I saw a mysterious new (journal?) file with a size of 40 GB… When my third attempt also failed, I had to decide what to do.
    Since I needed at least the XMP files with the results of my retouching operations, I simply wanted to save these side-cars into the directory of my original input NEF files on a HDD. Before making this step, I intended to check whether all modifications and adjustments had been stored in the XMP files.
    Unfortunately I was not aware of the realistic size of side-cars, associated with a certain volume of usage of the Spot Removal, Grad Filter, and Adjustment Brush functions. But as the time of the last modification of the XMP files (belonging to the recently retouched pictures) seemed perfect, I believed that all my actions had been saved. Although the "Automatically write changes into XMP" seemed to be working, in order to be on the safe side I selected all photos and ran the “Metadata / Save Metadata to File” function of the Library module. After this I copied the XMP files, deleted the corrupted catalog, created a new catalog, and imported the same NEF files together with the side-cars.
    When checking the photos, I was shocked: Only the first few hundred XMP files retained all my modifications. Roughly 3 weeks of work was completely lost… From that time on I regularly check the XMP files.
    Question 1: Have you collected any similar experience?
    2. The catalog-related part of my workflow
    Unless I am missing an important piece of knowledge, LR catalogs store a lot of data that I do not need in the long run. Having the history of recent retouching activities is useful for me only for a short while, so archiving every little step for a long time, with a huge amount of accumulated data, would be impossible (and useless) on my SSD. In terms of processing, what counts for me are the resulting XMP files, so in the long run I keep only them and get rid of the catalog.
    Out of the 240 GB of my SSD 110 GB is available for LR. Whenever I have new photos to retouch, I make the following steps:
    create a ‘temporary’ catalog on my SSD
    import the new pictures from my HDD into this temporary catalog
    select all imported pictures in the temporary catalog
    use the “File / Export as Catalog” function in order to copy the original NEF files onto the SSD and make them used by the ‘real’ (not temporary) new catalog
    use the “File / Open Catalog” function to re-launch LR with the new catalog
    switch on the "Automatically write changes into XMP" function of the new catalog
    delete the ‘temporary’ catalog to save space on the SSD
    retouch the pictures (while keeping an eye on the proper creation and updating of the XMP files)
    generate the required output (TIF or JPG) files
    copy the XMP and the output files into the original directory of the input NEF files on the HDD
    copy the whole catalog for interim archiving onto the HDD
    delete the catalog from the SSD
    upon making sure that the XMP files are all fine, delete the archived catalog from the HDD, too
    Question 2: If we put aside the issue of keeping the catalog for purposes other than saving each and every retouching step (which I address below), is there any simpler workflow to produce only the XMP files and save space on the SSD? For example, is it possible to create a new catalog on the SSD, copy the input NEF files into its directory, and re-launch LR ‘automatically’, all in one step?
    Question 3: If this is not the case, is there any third-party application that would ease the execution of the relevant parts of this workflow before and/or after the actual retouching of the pictures?
    Question 4: Is it possible to set general parameters for new catalogs? In my experience most settings of the new catalogs (at least the ones that are important for me) are copied from the recently used catalog, except the use of the "Catalog Settings / Metadata / Automatically write changes into XMP" function. This means that I always have to go there to switch it on… Not even a question is raised by LR whether I want to change anything in comparison with the settings of the recently used catalog…
    3. Catalog functions missing from my workflow
    Unfortunately the above described abandoning of catalogs has at least two serious drawbacks:
    I miss the classification features (rating, keywords, collections, etc.). Anyway, these functions would be really meaningful for me only if they covered all my existing photos, which would require going back to 41k images to classify them. In addition, keeping all the pictures in one catalog would result in an extremely large catalog file, almost surely guaranteeing regular failures. Beyond that, due to the speed problem, tolerable conditions could be established only by keeping the original NEF files on the SSD, which is out of the question. Generating several ‘partial’ catalogs could somewhat circumvent this trap, but it would require presorting the photos (e.g. by capture time or subject), and by doing this I would lose the essence of having a single catalog covering all my photos.
    Question 5: Is it the right assumption that storing only some parts (e.g. the classification-related data) of catalog files is impossible? My understanding is that either I keep the whole catalog file (with the outdated historical data of all my ‘ancient’ actions) or abandon it.
    Question 6: If such ‘cherry-picking’ is facilitated after all: can you suggest any pragmatic description of the potential (competing) ways of categorizing images efficiently, comparing their pros and cons?
    I also lose the virtual copies. Anyway, I am confused regarding the actual storage of the retouching-related data of virtual copies. On some websites one can find relatively old posts stating that the XMP file contains all information about modifying/adjusting both the original photo and its virtual copy/copies. However, when fiddling with a virtual copy I cannot see any change in the size of the associated XMP file. In addition, when I copy the original NEF file and its XMP file, rename them, and import these derivative files, only the retouched original image comes up – I cannot see any virtual copy. This suggests that the XMP file does not contain information on the virtual copy/copies…
    For this reason, whenever multiple versions seem to be reasonable, I create renamed version(s) of the same NEF+XMP files, import them, and make some changes in their settings. I know, this is far from a sophisticated solution…
    Question 7: Where and how are the settings of virtual copies stored?
    Question 8: Is it possible to generate separate XMP files for both the originally retouched image and its virtual copy/copies and to make them recognized by LR when importing them into a new catalog?

    A part of my problems may be caused by selecting LR for a challenging private project, where image retouching activities result in bigger than average volume of adjustment data. Consequently, the catalog file becomes huge and vulnerable.
    While I understand that something has gone wrong for you, causing Lightroom to be slow and unstable, I think you are combining many unrelated ideas into a single concept and winding up with a mistaken idea. Just because your project is challenging does not mean Lightroom is unsuitable. A bigger than average volume of adjustment data will make the catalog larger (I don't know about "huge"), but I doubt bigger by itself will make the catalog "vulnerable".
    The causes of instability and crashes may have NOTHING to do with catalog size. Of course, the cause MAY have everything to do with catalog size. I just don't think you are coming to the right conclusion, as in my experience size of catalog and stability issues are unrelated.
    2. I may be wrong, but in my experience the size of the RAW file may significantly blow up the amount of retouching-related data.
    Your experience is your experience, and my experience is different. I want to state clearly that you can have pretty big RAW files that have different content and not require significant amounts of retouching. It's not the size of the RAW that determines the amount of touchup, it is the content and the eye of the user. Furthermore, item 2 was related to image size, and now you have changed the meaning of number 2 from image size to the amount of retouching required. So, what is your point? Lots of retouching blows up the amount of retouching data that needs to be stored? Yeah, I agree.
    When creating the catalog for the 1500 NEF files (21 GB), the starting size of the catalog file was around 1 GB. This must have included all classification-related information (the meaningful part of which was practically nothing, since I had not used rating, classification, or collections). By the time of the crash half of the files had been processed, so the actual retouching-related data (that should have been converted properly into the XMP files) might be only around 500 MB. Consequently, probably 22.5 GB out of the 24 GB of the catalog file contained historical information.
    I don't know exactly what you do to touch up your photos, and I can't imagine how you came up with the idea that the size should be around 500 MB. But again, to you this problem is entirely caused by the size of the catalog, and I don't think it is. Now, having said that, some of your problem with slowness may indeed be related to the amount of touch-up that you are doing. Lightroom is known to slow down if you do lots of spot removal and lots of brushing, and then you may be better off doing this type of touch-up in Photoshop. Again, just to be 100% clear, the problem is not "size of catalog"; the problem is you are doing so many adjustments on a single photo. You could have a catalog that is just as large (i.e. that has lots more photos with few adjustments), and I would expect it to run a lot faster than what you are experiencing.
    So to sum up, you seem to be implying that slowness and catalog instability are the same issue, and I don't buy it. You seem to be implying that slowness and instability are both caused by the size of the catalog, and I don't buy that either.
    Re-reading your original post, you are putting the backups on the SSD, the same disk as the working catalog? This is a very poor practice, you need to put your backups on a different physical disk. That alone might help your space issues on the SSD.

  • Tree component bug (?) and some questions

    Hi! I have some problems and questions about tree component.
    Problems:
    1. I have an expanded tree with ~300 items. Each item label is displayed on 2-3 lines. After QUICK tree scrolling using the mouse wheel (I make 3-5 scrolls), most items display only the last line and one empty line :(
    Bug of tree renderer? Is it fixable?
    Questions:
    1. Can I have font color X for tree item 1 and font color Y for tree item 2?
    2. I have a tree with ~300 items. Expand/collapse operations take 5 to 10 seconds on a Core2Duo. Is it possible to speed up these operations?
    Code:

    Hello.
    About problem 1.
    I have faced this problem several times and can't understand the cause. Maybe it's a bug.
    Questions.
    1. Of course you can. Write an itemRenderer for this.
    2. Tree has effects for the expand and collapse events; you can reduce their durations.

  • Downloaded Trial - An issues and some questions

    I just downloaded the trial and have some issues and questions I hope someone here can help with.
    My problem:
    I am unable to use the Media encoder to create any output files. I get the error, "The source and output color space are not compatible or a conversion does not exist". This happens when I attempt compositing video clips. If I use a single video clip I don't have this issue.
    Using the Export - Movie menu option works (for the same composite), but the amount of time it takes (an hour and 20 minutes for a 4-minute movie) is too much, and I can't see myself going too far or doing too much work in this fashion. I believe my hardware specs are well above the required specs. Is this normal?
    Questions:
    1. I can't import MPEG video. Is this by design? If so, why?
    3. I can't export to WMV. Is this by design? If so, why?
    What is the true minimum hardware spec. for someone intending to edit/create HDV? In other words, what is someone out there using that works. I'm really not looking to have to spend a ton of money on a new machine :).

    John,
    Thank you for the info. So I take it I can use mpeg as my source in the purchased product?
    Do you know why I am not able to use the media encoder options to generate any output? What does the error message mean?
    Jeron,
    I'm not sure what your reply is in response to. (since I didn't ask about HDV) :)
    I would like to hear from anyone about the hardware they're using for HDV work.
    Are the minimum specs for HDV cited on the documentation really a viable editing experience?

  • Minor ThinkPad issues and some questions

    What I'm using
    IBM ThinkPad R40
    Pentium-M 1.4Ghz (Banias)
    Mobility Radeon 7500 with 32MB RAM
    512MB RAM
    Hitachi DVD-ROM/CD-burner combo
    PCMCIA Gigabit Ethernet card using a Realtek r8139 chip (as the onboard one is broken)
    Hitachi 40GB 4200rpm HDD
    I don't use wireless.
    How's Arch installed? (details)
    I've used the "base" install CD of the recently released 2007.05 "Duke". I basically took all the default suggestions in regards to partitions (as I'm testing an install process that works best on this ThinkPad).
    For the power saving bits:
    I installed acpid, powernowd, and powersave.
    As well, from the "unsupported" AUR: kpowersave (0.7.x one) and kima.
    For the kernel, it's the default one but recompiled with resume as one of the HOOKS. Other than that, no other changes were made.
    In /etc/rc.conf,
    for the MODULES section, I've added: speedstep_centrino and ibm_acpi
    for the DAEMONS section, I've added: acpid, powernowd, and powersaved
    In /etc/rc.local,
    I added: echo enable,0xffff >/proc/acpi/ibm/hotkey
    In /etc/modprobe.conf,
    I added: options ibm_acpi experimental=1
    For some reason the swap partition was never activated, so this is necessary: mkswap /dev/sda2
    (Is that a bug in the 2007.05 release?)
    For Suspend-to-Disk to work, I had to add the following in /boot/grub/menu.lst
    On the line that starts with kernel, I added: resume=/dev/sda2
    I got everything I needed to work.
    (Everything that mattered to me)
    This includes the ThinkPad Fn+keys (F3, F4, and F12), and the two grey keys for Forward and Back in KDE and Firefox. (ThinkPad owners will know these as the F19 and F20 keys through xmodmap.) Suspend-to-RAM and Suspend-to-Disk work. Dynamic CPU throttling works. Manually setting the CPU speed works as well (via kpowersave). Through the kima KDE applet, I can view CPU speed, uptime and temp on the taskbar.
    What's the problem? (and questions)
    But not everything is perfect...
    (1) When I do Suspend-to-Disk OR when I shutdown the system, the hard disk does NOT shutdown properly (like it was in Windows or my old install of Arch Linux). Instead of a normal cricket sound of the HDD head parking, I get the same sound as if I pushed the power button (no battery installed) while its on. A high-pitched shutdown.
    This results in Arch booting up occasionally with a message about HDD not properly shutting down and needing to force a check. This is obviously bad from a data integrity and HDD perspective.
    Has anyone else experienced this? How did you resolve this issue?
    (2) Once I finished testing Suspend-to-RAM, I checked dmesg. I get some lines referring to a problem with the USB ports:
    hub 1-0:1.0: over-current change on Port 3
    (happens on Port 1 and 4 as well).
    Anybody have an idea what it's complaining about? Should I ignore it?
    (I don't have any USB devices plugged in)
    Other questions?
    (3) I previously installed Arch Linux 0.72 with hibernate-script and the kernel26beyond kernel without the two issues noted previously. Since the "Beyond" kernel is no longer being maintained, is it OK to use the recently released default kernel with hibernate-script? Will it work?
    (4) If I use the hibernate-script approach to Suspend-to-RAM and Suspend-to-Disk, what applet or app does one use in KDE to allow you to set the desired speed or power scheme? (I like how kpowersave does things, but that reads off powersave)
    As you can see, I almost have my Thinkpad working the way I want it with Arch Linux. There's just a few niggles that need to be resolved. I hope you can help. Thanks in advance.

    Well, I tried both approaches: Kernel with Suspend2 and the regular Kernel (2.6.21.3-1)
    Both suffer the same HDD issue, but under slightly different scenarios.
    They both will get the problem when you suspend to disk (call it STD from now on). But if you enable "RediSafe-like" function on the Suspend2 side, the HDD problem doesn't occur. It shuts down perfectly. However, when I resume back from STD, I lose all IBM button functionality. When I go to shutdown, the HDD problem appears.
    With the regular Kernel, the HDD shutdown issue happens regardless if I STD or shutdown the system.
    Either way, I'll disable STD use for now, at least until Arch's regular kernel and the Suspend2 one move to the 2.6.22 kernel. So it IS a kernel-related issue.
    Here's how I did the Fn keys for my ThinkPad R40 (Pentium-M 1.4Ghz with Mobility Radeon 7500)
    In /etc/acpi/events, I created the following files with their respective contents using Suspend2 for STD.
    Please note: I don't have wireless on this notebook, that's why you won't see a Fn-F5 one.
    Fn-F3  (Turn off and on display backlight. or Blank screen)
    event=ibm/hotkey HKEY 00000080 00001003
    action=/etc/acpi/events/backlight.sh
    backlight.sh (You will need radeontool from the AUR. Be sure to make this executable. ie: chmod +x backlight.sh)
    #!/bin/bash
    if [ -e /tmp/backlightoff ]
    then
            radeontool light on
            rm /tmp/backlightoff
    else
            radeontool light off
            touch /tmp/backlightoff
    fi
    Press Fn-F3 to blank screen. Press Fn-F3 again to get out of it.
    Fn-F4 (Suspend to RAM)
    event=ibm/hotkey HKEY 00000080 00001004
    action=echo -n mem >/sys/power/state
    Fn-F7 (Switches between VGA and your notebook's display. I haven't tested this, but it's what I got from the ibm_acpi README!)
    event=ibm/hotkey HKEY 00000080 00001007
    action=echo video_switch > /proc/acpi/ibm/video
    Fn-F12 (Needs sudo; add your user to the wheel group as well as an entry in /etc/sudoers.)
    event=ibm/hotkey HKEY 00000080 0000100c
    action=sudo hibernate
    lid (Causes same function as Fn-F4. That is, Suspend to RAM)
    event=button/lid
    action=echo -n mem >/sys/power/state
    As mentioned, the problem with the above is that once I resume from STD with the Suspend2 kernel, I lose their functionality.
    Maybe I should be changing handler.sh in /etc/acpi instead of adding stuff to events?

  • MiFi2200 and some questions about VZAccess

    I've been using my 2 MiFi2200s as modems on my desktop...I recently bought a laptop and I use them as modems on it as well.  My question is, why will it show me charging status on my desktop...but not on my laptop?  I have 3 cards total (2 activated) but 2 frequently tell me they're critically low only by blinking the red light...It won't tell me via VZ on my laptop.  Is it a setting I'm missing, or just a quirk?  Thanks.
    Also, 1 last thing...I have 2 separate wall chargers as well.  Neither charger seems to charge the battery...it only seems to charge when they're plugged into a computer.  Is it just really slow or are both of my chargers no good?

  • WRT160N v2 - problems with WPA2 and some questions about N

    Hi guys
    WRT160N V2 (F/W 2.0.03)
    So I've been operating on Wireless G with WEP for a good while because of the problems I had in the past. The other day I decided to try again with the "N" and the WPA2... and spent 4 hours going around in circles; now I'm on B/G Mixed/WPA-PSK.
    When I tried to use "N", (by enabling "Mixed" with everything else set to "Auto") even my HTC Sensation would only pick up a "G" connection... and all my laptops (which are not "N" capable) would not connect at all. So I ditched that and looked at the encryption and authentication.
    When I enabled "WPA2" "AES/TKIP", my machines and phones would either not connect at all, failed to get an IP address, or connected fine with an IP address but routing wasn't working... and these symptoms seemingly occurred randomly - no direct device correlation and one device might show all the symptoms at different times.
    I dropped down to WPA, and (touch wood) everything seems to be working at the moment... but I don't trust it. Can you tell me what I need to do to get this working on N and WPA2? Have I missed a firmware update?
    p.s. Is this model 2.4GHz or 5GHz please?
    Cheers

    That's 2.4 GHz.  The G network would only work in this case since you don't have N cards on your computers.
    Check out this article so you know what security you can set on your computers; some devices are not WPA capable, so you might want to think about that too.
    Setting-Up WEP, WPA or WPA2 Wireless Security on a Linksys Wireless Router
    "Sometimes your knight in shining armor is just a retard in tin foil.."-ARCHANGEL_06

  • My settings and some questions r.e. overclocking!

    Hello everybody,
    This is my first MSI mobo. I used to have a DFI Lanparty SLI-D (with a 144 Opty) - it was OK, but I really wanted a C2D (Core 2 Duo), and with the issues I had with my DFI I just felt that I should go with something a bit simpler - what a fool I am.. lol.
    Anyway, I've flashed to BIOS 716 and most of my issues are (kind of) sorted - when Windows XP reboots, it still requires a power off! Also - intermittently my USB mouse requires pulling out and reinserting once XP is loaded.
    Here are my bios changes:
    Disable all but the Primary IDE Master/Slave (so my DVDRW can work)
    Disable floppy, COM and Parallel.
    Disable C1E, EIST, Virtualisation
    Disable POST Logo (it's horrible quality!)
    Assign all my SATA drives to RAID / Force 2nd GEN
    Change the disk priority and sequence to CDROM first, RAID 2nd
    Enable Legacy USB support for Keyboard and Mouse
    Disable Intel Boot ROM
    Initiate PCI-E first, Change Payload to 512MB
    For Overclocking; I use HIGH settings:
    Set DRAM timings manually to 4-4-4-12 (my RAM's rated timing)
    Set memory ratio to 1:1
    Set memory Voltage to 2.2v
    Set CPU Voltage to 1.40v
    Set FSB to 320
    Set PCIE voltage to 1.7v
    Disable Auto Detect PCI CLK
    Disable Spread Spectrum
    O.K! now my questions:
    Should I be disabling Virtualisation? What is it?;
    What MAX voltage should I be applying to Memory, PCIE, vCore? (I'm on AIR: decent case airflow (CM Stacker), stock CPU cooler);
    With the above HIGH settings I hit a 360 FSB - if I use EXTREME settings (vCore to 1.43v, PCIE to 1.75v, mem to 2.3v) I can get a 375 FSB (not tried a higher FSB yet because the sound in Windows stutters);
    Do I need water cooling at either my high or extreme settings?;
    What TEMP monitor can I use? Something that is PRECISE (cos I know software monitors suck - MSI products reported 125°C on my CPU when I had the v7 BIOS)
    Thanks to all for any questions answered!!!
     - FrobinRobin

    Quote from: FrobinRobin on 31-August-06, 22:53:15
    1:  You can leave it on or turn it off.  I think for overclocking it's recommended to turn it off.  It has something to do with running two OSes at the same time and splitting up the CPU resources, so most people don't use it.
    2:  Depends on the specs of your memory and whether you want to take the chance of voiding your warranty.  Set the PCIE voltage back to 1.5 unless your memory is having problems.  That's probably why you have to turn the power off when rebooting Windows.
    3:  Use CoreTemp.  It reads a little high, but people say their CPU is still stable.
    Also, are you using a PCI-E video card?  If so, set your payload to 4096.

  • TU EE SPREE! And some questions.

    So I got an EE of a judgement today that made me think: let me look and make a list of fall-off dates; why I hadn't done this until now is beyond me. I had 5 more derogatories, incl. in BK accts, all scheduled to fall off less than 6 months from now, so I gave it a shot. All 5 deleted!
    I checked my score after the judgement EE and the judgement was gone, but no score increase. I'm curious if the other 5 will give me a bump or not; since I just updated my report earlier today I'm not going to do it again just for funsies.... Since they were included in BK maybe they've served their time as far as scoring goes?
    Anyone with experience with this please I would love some insight! TIA!

    While there are exceptions, in most cases the BK will overshadow everything prior.

  • 64 bit mode and some questions

    Happy New Year all,
    Just once I'd like to start up in 64 bit mode. I've tried to do it several times without success and I don't know what the problem is. Hardware, software?
    I'm running two 10.6.2 systems, one on 1 SSD 3 drive Raid 0 array, one on another. Both internal and I used SoftRaid to set both up.
    Each time I try, I get the gray screen with Apple logo and the spinning wheel. After a time I get the circle with a bar through it and have to use the power button to shut down. When I tried it last time on one array, the above happened and both arrays unmounted. I could only get my disks mounted by booting from a clone and both systems, the 32 and 64 bit, did not show in the Startup Disk Pref Pane. Both systems showed up when I pressed the Option key at start up but I couldn't start up from either.
    I should have remembered to press the 3 and 2 keys to get access to those drives again but a PRAM reset got my 32 bit system back again. From there with a little research I saw mention of the 3 and 2 keys again, tried it on the 64 bit system and got it to boot in 32 bit.
    Long story short though I don't know why I can't at least boot into 64 bit mode. Could it have something to do with the SSDs? Could it have something to do with SoftRaid? Why would both arrays dismount when only one should be affected?
    Is it a common issue for many 64 bit compatible Macs to not be able to start up in 64 bit mode for any one of a variety of reasons? I should say that neither of my systems can start up that way.
    Thanks, any insight is appreciated.

    Talk to Mark and look for any notes on SoftRAID and 64-bit mode;
    Try installing Mac OS only to one of your SSDs (break the array).
    Just a nice small clean Apple ONLY OS.
    "6" + "4" ought to get you into 64-bit kernal mode.
    Any PCIe card/driver you have to be concerned with?
    ==================
    SoftRAID 3.6.8 is compatible with Snow Leopard in 32 bit mode. You will be asked to replace the newer version of the SoftRAID driver that is bundled with Snow Leopard, with the 3.6.8 driver.
    A newer, 64 bit version of the SoftRAID driver is included with Snow Leopard for users in 64 bit mode, which will allow you to keep existing volumes and access data from the 64 bit version of Snow Leopard. Currently, only the Xserve boots into 64 bit mode by default. SoftRAID 3.6.8 will not work in 64 bit mode.
    SoftRAID 4.0 is coming soon!
    SoftRAID version 4 will be a full 64 bit Snow Leopard compatible version. We don't have a firm shipping date, but will try to have a "customer beta" in a few weeks for those who want to beta test SoftRAID 4.0.
    The SoftRAID 4.0 official release will probably be in October. Upgrades will cost $69, except for 2009 purchasers, who will get it free.
    http://www.softraid.com/
