Still trying to understand "reflection"

I came across the program below and can't seem to get anything to "pass".
public class Test {
    public static void main(String[] args) {
        int passed = 0;
        int failed = 0;
        for (String className : args) {
            try {
                Class c = Class.forName(className);
                c.getMethod("test").invoke(c.newInstance());
                passed++;
            } catch (Exception ex) {
                System.out.printf("%s failed: %s%n", className, ex);
                failed++;
            }
        }
        System.out.printf("passed=%d; failed=%d%n", passed, failed);
    }
}

Can anyone offer any suggestions to help me understand what's going on here?
Thanx to one and all.

public class Test {
    public static void main(String[] args) {
        int passed = 0;
        int failed = 0;
        for (String className : args) {
            try {
                Class c = Class.forName(className);
                c.getMethod("test").invoke(c.newInstance());
                passed++;
            } catch (Exception ex) {
                System.out.printf("%s failed: %s%n", className, ex);
                failed++;
            }
        }
        System.out.printf("passed=%d; failed=%d%n", passed, failed);
    }
}

class Mystery {
    public void test() {
        System.out.println("Help, I've been invoked!");
    }
}

Run with:

java -cp . Test Mystery

Output:

Help, I've been invoked!
passed=1; failed=0
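
A note on what has to be true for a class name to "pass": the class must be reachable on the classpath under the exact name you pass in, be instantiable through an accessible no-arg constructor, and declare a public no-arg method called test(). Any failure along that path (ClassNotFoundException, NoSuchMethodException, InstantiationException, ...) is caught and counted as a failure, which is why running with no suitable argument never passes anything. As a side note, Class.newInstance() is deprecated since Java 9; a sketch of the equivalent modern calls:

Class<?> c = Class.forName(className);
Object instance = c.getDeclaredConstructor().newInstance(); // replaces c.newInstance()
c.getMethod("test").invoke(instance); // "passes" if no exception propagates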

Similar Messages

  • Still trying to understand iPhoto's trash

    So I have decided to get really organized in iPhoto and am using Albums to do my main organization, to create slideshows, etc. I chose Albums over Events because I could not move the photos around in Events.
    Then I started deleting the shots I thought were not good. It just so happens I was a little too delete-key happy and I deleted some that I shouldn't have.
    Now I go to the iPhoto trash. First, I cannot look at the photos at a large scale by double-clicking on them. When I use the toggle at the bottom to enlarge all photos I quickly lose track of the photo I wanted to look at. Even if that photo is selected, it still does not center the enlargement on that photo. (Annoying.)
    So to look at it I have to restore it, or move it back to the library. Being a person who is using Albums to organize my photos, I would have preferred to move it directly into the Album. Not possible.
    Now, what I have deciphered as the only way to get out of this mess...
    First I have to restore the photo, then go back to Events and find that single photo in the mix of hundreds of other photos, and then, if it is indeed the photo I was looking for, move it back to the Album.
    This is sure a lot of work, and makes me rather disenchanted with iPhoto. I think it should be easier than this!
    So my questions:
    Are Albums the right way to organize photos in iPhoto, or is there a better way? And why can't we organize and move photos around simply in Events?
    Is it possible to permanently delete just one photo from the trash and not empty the whole trash in iPhoto?
    I have not found a conversion feature in iPhoto similar to the one in iTunes... for example, sometimes I have some photos which I would like to keep in high resolution, but others I could reduce in size to save space. I know I can use Photoshop to do this but it's very time-consuming. Is there another way to batch process photos?
    Thanks for taking the time to answer my questions!

    Melissa
    Welcome to the Apple user-to-user assistance forums
    Are Albums the right way to organize photos in iPhoto, or is there a better way?
    Yes - albums and smart albums (using keywords) are the way. When you keyword and rate your photos, smart albums become very powerful.
    And why can't we organize and move photos around simply in Events?
    Because that is not the way iPhoto works - this is a user forum and we can tell you how it works - you need to talk to Apple about why it works the way it does and to suggest changes - iPhoto menu ==> Provide iPhoto Feedback
    Is it possible to permanently delete just one photo from the trash and not empty the whole trash in iPhoto?
    No
    I have not found a conversion feature in iPhoto similar to the one in iTunes... for example, sometimes I have some photos which I would like to keep in high resolution, but others I could reduce in size to save space. I know I can use Photoshop to do this but it's very time-consuming. Is there another way to batch process photos?
    iPhoto always keeps the original, the digital negative - you cannot change that
    LN

  • Trying to understand how iPhoto keeps track of pictures...

    I am trying to understand how iPhoto stores & organizes pictures.
    I moved JPEGs from a portable drive to my hard drive, set iPhoto's preferences to not copy pictures when importing them, then imported them. Unfortunately, all pictures from a certain import were duplicated in iPhoto... so I "moved to trash" all pictures from that import and tried importing again, since the JPEGs were still where I had moved them to. When importing, it said they were duplicates.
    I am confused. If I "move to trash", I assumed it got rid of whatever index (and preview cache) pointed to that particular JPEG. I was actually surprised it did not delete the actual JPEG, but I'm OK with that.
    Can someone help explain this behavior?
    Thanks
    -Ed

    iPhoto is a relational database program
    In the strongly recommended managed library (you have chosen to ignore this recommendation and use a referenced library), imported photos are copied to the iPhoto library and stored in the Originals folder, a thumbnail JPEG is created and placed in the Data folder, and when any modification is made (including auto-rotation) a modified version of the photo is created and placed in the Modified folder. iPhoto updates its database entries to reflect everything it does.
    It is critical that you do not make any modifications of any sort to the content or structure of the iPhoto library - doing so is likely to corrupt the library and cause you to lose data.
    When you use the referenced mode, which you are doing (and which is not recommended), you are taking total responsibility for the original photos, which includes not moving or modifying them while iPhoto is referencing them.
    Unfortunately, all pictures from a certain import were duplicated in iPhoto... so I "moved to trash" all pictures from that import
    Did you do this with the iPhoto trash? Or did you use the Finder to modify the contents of the iPhoto library?
    If I "move to trash", I assumed it got rid of whatever index (and preview cache) pointed to that particular JPEG. I was actually surprised it did not delete the actual JPEG, but I'm OK with that.
    Again - iPhoto trash or Finder trash? If you move a photo to the iPhoto trash and empty it, all traces of that photo in the iPhoto library will be removed - nothing will be done to any file outside of the iPhoto library -- ever.
    LN

  • Trying to understand Android OS updates delays

    This is not another hate mail; it's more about trying to understand the facts and motives regarding the OS updates (or lack thereof).
    Hopefully this material will reach someone at Sony with enough power to do something about it.
    Ever since I can remember, Sony has been THE brand for electronics. I can't remember a TV or VCR in my house that wasn't a Sony, and they lasted for LOTS of years.
    When, a couple of years ago, I finally had the money (and the need) for a smartphone, I chose the X10 Mini, which is a great little phone from a great brand, but it's stuck at Android 2.1... which makes it a crippled Android nowadays...
    The lack of OS updates really bothered me, so I chose to buy a Galaxy Nexus, but it's really expensive in my country. My second option was a Galaxy Ace 2, but they haven't arrived in my country yet, so I went with my third option: the Xperia Sola, knowing beforehand that it's a great phone from a great brand, but that I might get slow OS updates.
    I bought it about 10 days ago, when I saw they were starting to roll out the updates.
    10 days later I still don't have my update. And not only that, but I don't see much light at the end of the tunnel...
    I found a thread with the SI numbers that were updated, and there were a bunch on Oct 1, another bunch on Oct 4, and one code on Oct 8, and no other updates since...
    I also read that those who did get the update were having bugs with the OS, and I also found threads for other Xperia models whose updates began rolling out 3 months ago, where there are still people who haven't gotten the update...
    As a customer, and the owner/CEO of a small company, I have a really hard time understanding how a HUGE company like Sony can be making such mistakes...
    I have been thinking of objective reasons, and I can only come up with one. I know it's a wild guess, but I'm starting to think that our salvation might be the very thing that spells our condemnation: CYANOGENMOD!!!
    Think about it: why would Sony spend more money hiring twice as many programmers, when they can make only one update per phone, then sit back and watch the CM releases appear, for all tastes and needs? And... IT'S FREE!!!
    Also, if there is a software-related problem (way more likely than a hardware problem), then the CM developers take the fall instead of Sony. And I'm beginning to see custom OS installers that are more user friendly, so it might be something they take into account when neglecting OS updates.
    If that's the line Sony is following, it's a very risky move and it won't work. Sony Mobile will crash and burn, but it's still a better business plan than "let's get lazy and make ULTRA slow updates so we don't spend a lot of money programming".
    If you can't afford more programmers, stop including so many bells and whistles and make your OS near vanilla. Include a couple of Xperia menus and a custom theme, and voila!
    The main reason I wanted the Galaxy Nexus is the vanilla OS, which means immediate OS updates. Sony, on the other hand, takes a year or two to release an update after a new Android version launches. If they release it at all...
    Another thought... why not stop making so many different phones! Really! There are like 5 Xperia models I can't tell apart... even with the specs side by side!
    You are trying to make too many phones, and you are failing with all of them! (Software and software updates are also part of the phone, and among the most important parts...)
    I know hiring programmers is expensive, but you are sacrificing one of the most valuable assets a company can EVER have: customer credibility! Which, as you know better than me, takes years to build.
    If Apple had problems with carriers and code approval and such, they might get away with it, because they alone have all the devices. If iOS 6 is delayed a few months, it's delayed for everyone, and besides, Apple fanboys rarely complain about Mac products, but Android is a more independent and educated market.
    I'm not saying that Apple users are ignorant, not at all, but I'm pretty sure most iPhone owners don't even know what processor or how much RAM their phone has. They just "swallow" the Apple Way of Life (it's a good phone because Apple says so).
    The Android user, on the other hand, because of the fragmentation of the market, has many brands and models to choose from. An Android user about to buy a new phone will most likely go online looking at different models, specs, reviews on sites and forums... etc.
    You can't say "I don't know how HTC and other companies get their updates out so soon, but we take a lot longer because Google and the operators must approve the code", because there are many other brands with exactly the same difficulties or more, since they are smaller, and we can SEE online that they are indeed delivering solid and relatively fast updates.
    Did we miss something? Does HTC use witchcraft to get their code approved?
    My underlying point is this: you are getting lazy... VERY lazy with software programming for your phones, and WE KNOW IT!
    It's not the "difficulties" you claim, because every brand has those difficulties.
    This isn't 1999, you know. We are in the information age. If you lie to us and tell us that your phones have the latest OS, I can go online and see that they don't (Hello!!!). If I see that a company lies to its customers, I will stop buying its products. I'm so disappointed with how Sony handles OS updates, and its customers' queries about them, that for the first time ever I want to sell my cell phone because I'm not happy with it, or with the brand behind it.
    We also live in the "here and now" age. You can't expect your customers to read about new Android releases in the news and on blogs, and then wait YEARS with arms crossed for their update... The world doesn't work like that. Not anymore, at least...
    It's not a matter of how many resources you have; it's about how you use and balance them. GIVE MORE IMPORTANCE TO SOFTWARE UPDATES! THEY'RE WAY MORE IMPORTANT THAN YOU THINK! LISTEN TO YOUR CUSTOMERS!!!
    You guys at Sony are smart and design great products, but you are not GOD! You are not our wife; no one has sworn allegiance to you.
    If you stop giving us good products and start lying to us, we hate you and stop giving you our money. Simple as that.
    My Sola is beautiful - I love the design, the screen, the hardware... but it hasn't been updated to 4.0 yet, not to mention 4.1, which REALLY is the latest version... so stop advertising that your phones have the latest Android OS, unless you want angry customers switching to other companies, which you are getting.
    I also read some stories that Android 4.0 was announced for the Xperia PLAY, and then it was called off... Do you have any idea how pissed off and ripped off I would feel if I had bought my phone based on that information and then you said it wouldn't be available?
    Well, actually, right now my position is worse, since you SAY you will update my phone, but you don't say when, and I read online about people still not getting their updates months after the rollout started, so I'm in limbo right now...
    As a company, one of the worst things you can do is call your customers stupid, and when I see the answers you give in this forum, I feel insulted. I feel like they are talking to an idiot or an ignorant person, with the so-called "diplomatic" answers, which are basically empty excuses for not doing your job right.
    You gave us the frikking CDs, DVDs and Blu-rays!!! Don't tell us you can't tweak an already-built OS in a year!
    I really hope you change your OS update policies really soon, before you lose the already small cellphone market share you have, or at least change your P.R. and C.M. policies towards more open ones.
    We are all human and make mistakes, but we customers really appreciate honesty and truth.
    Have an open conversation with your customers! Don't lie about your shortcomings! Accept them and ask the community to help you solve them; ask them what their biggest problems are, what features are most important to them, how often they expect updates... LISTEN TO THEM!!
    "Success is a menace. It tricks smart people into thinking they can't lose."
    PS: Nothing personal against the mods of this forum - I'm not killing the messenger. I know that you can ONLY give the info you are allowed to give, and even if you wanted to, you probably don't know the answers yourself, since you work in the communications department, not development or anything technical, and if you can't give out certain info, then they probably won't give it to you either... My message is to the company as a whole. I just hope you will be a good messenger and give this to whoever needs to read it.

    My bad - it's closer to 40, the number of phones released, hehe.
    I know it's all about money, and I know Sony feels obligated to neglect users who haven't given them money after X amount of time. However, it's not a matter of making the phones obsolete earlier so that users want to buy a new phone sooner, thereby bringing in more money.
    A person will buy a new phone when he/she has the money to do so and wants to do so.
    It's not a matter of WHEN. It's a matter of WHAT.
    The question is not "When will that user buy a new phone?", but rather "When that user buys a new phone, whenever that is, what phone will it be?"
    I have a love/hate relationship with Apple. I would never use an iPhone. I would love having any Mac, if someone gave it to me, but I would never spend my hard-earned dollars on such an overpriced piece of hardware, on general principle.
    However, I do recognize that Steve Jobs was a business genius. Whether you like or love his ideas and methods, he turned a garage project into the biggest company in the world, with a market value higher than Exxon's on 1/3 of its assets.
    Apple is a money-making machine, and that is where the "hate" part of my relationship comes from.
    However, it surprised me a lot to see that they released iOS 6 for the iPhone 3GS, released in 2008!
    That gets you thinking that inside all that "SELL NOW" culture that Apple is, they also support their older devices, for the people who bought them years ago but can't buy a new phone now. When those people can, though, it will surely be another iPhone - because they FEEL that the company listens to and cares for them.
    Also, if you jump from iOS 6 on the 3GS to a brand new iPhone 5, the transition will be virtually non-existent, except for Siri and a couple of features.
    However, jumping from Android 1.5 or 2.1 to Jelly Bean might not be so easy for some users, making them more likely to give the iPhone a shot.
    Since they have to adapt to another phone anyway, they might as well try the apple...
    And for old users, it gives a sense of continuity and of care for the user. Otherwise we feel like we're being kicked out of a restaurant right after we paid the bill.

  • Trying to understand, being prompted the file compression rules on saving, or not

    Hello,
    I'm trying to understand something, could I ask for your help, please ?
    After working on a JPEG file, when I want to save it, still as JPEG, with my Photoshop CS5,
    - sometimes Photoshop will just save the picture, and it's done
    - sometimes Photoshop will show me the compression dialog, "JPEG Options", in which I can choose the compression ratio and the format options (baseline, baseline optimized, progressive), and see an estimate of the total file size
    While not being prompted with any dialog is simpler, and I'll then simply assume Photoshop decides to retain the current image's compression and format settings, I must say I like to be in control, and I'd like to know in what form the file is being saved without having to resort to the much more complex "Save For Web" dialog.
    Please, would you know WHAT triggers the appearance of the JPEG Options dialog when we close/save a JPEG file in Photoshop? What makes this dialog appear, and what makes it not appear?
    Are there trivial file operations/changes/filters that necessarily trigger its appearance when we want to save, something like that? I've tried a variety of these, but I still can't figure it out; sometimes it shows up in the end, and sometimes it doesn't.
    Thank you very much if you can help me
    Kind regards,
    Oliver

    @ c.pfaffenbichler
    These are images from various sources, not just one.
    I'm deliberately excluding Save For Web; it completely re-processes everything.
    My purpose, precisely, is to know when Photoshop decides to retain the image's "rules", and when Photoshop decides to ask us how we want it saved.
    Simply taking a JPEG image, doing stuff to it, hitting Ctrl-W to close the window, and seeing whether it will be an
    - «OK, sure, do you want to save? You clicked OK to confirm you wanted the changes saved? Good, now it's closed» or a
    - «please sir, how would you like your image saved; tell me the compression ratio and the format options, thank you»

  • [SOLVED] Trying to understand the "size on disk" concept

    Hi all,
    I was trying to understand the difference between "size" and "size on disk".
    A Google search gave plenty of results and I thought I had a clear idea about it... All data is stored in small chunks of a fixed size depending on the filesystem, and the last chunk is going to have some wasted space which will *have* to be allocated... thus the extra space on disk.
    However, I'm still confused... When I look at my home folder, the size on disk is more than 320 GB, whereas my partition is actually less than 80 GB, so I guess I'm missing something... Could somebody explain to me what 320 GB of 'size on disk' means?
    Thanks a lot in advance..
    Last edited by geo909 (2011-12-15 23:17:25)
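
    (As an aside, the rounding the poster describes - size on disk is the file size rounded up to whole filesystem blocks - can be sketched in a few lines of Java; this assumes Java 10+ for FileStore.getBlockSize(), and real numbers can differ for sparse or compressed files:

    import java.nio.file.*;

    public class SizeOnDisk {
        public static void main(String[] args) throws Exception {
            Path p = Paths.get(args[0]);
            long size = Files.size(p);                          // logical "size"
            long block = Files.getFileStore(p).getBlockSize();  // filesystem allocation unit
            long onDisk = ((size + block - 1) / block) * block; // round up to whole blocks
            System.out.printf("size=%d bytes, size on disk ~ %d bytes%n", size, onDisk);
        }
    }

    A 1-byte file on a filesystem with 4096-byte blocks thus occupies 4096 bytes on disk - and a buggy file manager, as it turns out below, can report something else entirely.)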

    Hi all,
    Thank you for your replies... My file manager is indeed PCManFM, and it does seem to be an issue... In b4data's link the last post reads:
    B-Con wrote:
    FWIW, I found the problem. (This bug is still persistent in v0.9.9.)
    My (new) PCManFM bug report is here: http://sourceforge.net/tracker/index.ph … tid=801864
    I submitted a potential patch here: http://sourceforge.net/tracker/?func=de … tid=801866
    Bottom line is that the file's block size and block count data from the file's inode wasn't being interpreted and used properly. The bug is in PCManFM, not any dependent libraries. Details are in the bug report.
    Since PCManFM is highly used by the Arch community, I figured I should post an update here so that there's some record of it in our community. Hopefully this gets addressed by the developer(s) relatively soon. :-)
    I guess that pretty much explains things... And I think I did understand the 'size on disk' concept anyway.
    Thanks again!
    Last edited by geo909 (2011-12-15 23:17:10)

  • Trying to understand the sound system

    Here's my problem. My mic didn't work (neither the front mic nor the line-in in the rear), so after some research and trial and error I found that if I do
    modprobe soundcore
    my mic works on both jacks.
    But here's where my confusion lies. This is the output of lsmod | grep snd before explicitly probing for soundcore:
    [inxs ~ ]$ lsmod |grep snd
    snd_hda_codec_analog 78696 1
    snd_hda_intel 22122 1
    snd_hda_codec 77927 2 snd_hda_codec_analog,snd_hda_intel
    snd_hwdep 6325 1 snd_hda_codec
    snd_pcm_oss 38818 0
    snd_pcm 73856 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
    snd_timer 19416 1 snd_pcm
    snd_page_alloc 7121 2 snd_hda_intel,snd_pcm
    snd_mixer_oss 15275 2 snd_pcm_oss
    snd 57786 8 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_pcm,snd_timer,snd_mixer_oss
    soundcore 6146 2 snd
    [inxs ~ ]$
    So as you can see, soundcore's already loaded, so why do I have to explicitly load it again to get the mic to work?
    Once I add soundcore to my MODULES array and reboot, the lsmod output is also the same as above.
    So my question is -- what does the explicit loading of soundcore do, that is not done by auto-loading of that module?

    Oh... since your topic is "Trying to understand the sound system", that puts you (and me) alongside the whole world's population... chuckle. But I thought I'd pass along a document, written by probably "the" main ALSA developer, that I totally stumbled across just 3 days ago.
    Go here:
    http://kernel.org/pub/linux/kernel/people/tiwai/docs/
    and download the flavor of your choice of the "HD-Audio" document, or simply view it online. It is the deepest dive into the current ALSA snd_hda_* layers and issues that I've found to date (but it still leaves me wanting).
    Why that document isn't plastered across the interwebs is beyond me. I only get 11 hits when I search for it... such are the secrets of the ALSA world, I guess.
    Last edited by pigiron (2011-08-26 18:26:48)

  • Still trying to get unconfused about drives

    Coming from the Mac world, using a MBP and external storage, all this talk of drives, RAIDs, etc. is making my head spin!
    I know that I will be backing up extensively and can handle the downtime/risk of losing a day's data (I'm compulsive about syncing my work to backups as I make progress), so I'm probably going to avoid RAID, since I will likely have 4 drives to start with.
    I'm going to have 1 x Crucial M4 256 GB SSD for OS/programs, and most likely 3 x Seagate Barracuda XT 2 TB drives in addition.
    So my first question is: should I also be looking at getting a small SSD for the media cache/page files, or am I fine putting these on a regular hard drive? If I do the latter, should the hard drive have a partition for the page files/cache to help with speed? How big are these files typically? Should I reserve the entire 2 TB drive for them?
    I've read through the disk guide, but being a newb to a lot of this I'm still trying to get my head around it.
    So far I think I want:
    1. SSD for os/programs
    2. Drive for page files/media cache (what else goes on here?)
    3. Drive for media/input
    4. Drive for export
    Which drive should my project files go on again?
    Any help clarifying this is much appreciated! When everybody starts talking through the math of throughput, my head begins spinning again. I'm starting to gain a tiny bit of understanding, but I'm trying to keep it really simple for myself right now, as I'm building my first workstation and this is only one of many technical concerns I'm currently wrapping my head around!

    should the hard drive have a partition for the page files/cache to help with speed?
    Partitions do not help speed. On the contrary, they reduce speed and increase wear and tear. Compare it to a large loft you have to store your stuff. You can easily walk around your loft and get the stuff you need. That is comparable to a non-partitioned disk. Now imagine that same loft, but partitioned into two or more parts with a wall and a door to separate the parts. You have stuff stacked against the wall you need, so you get it, but instead of getting the other stuff you need just 1 foot behind it, you have to walk 12 feet over to the door, open it, walk 12 feet to the other stuff you need, return 12 feet to the door, close it and then go about your thing. You have just walked 36 feet, where without those partitions 1 foot would have sufficed. Not very efficient, right? Same with partitioning your disk.
    The basic thought on where to put what is to distribute disk accesses across as many disks as you have. If you have a workflow where you export only a few times, or once when finished, then put your projects and exports together. Remember that SATA is half duplex, so you can only read at one time and write at another time; you can't do both at the same time. When exporting, the project is already loaded; the media needs to be read and then written to the export disk.

  • Trying to understand and learn how to use btrfs

    I have spent the past days trying to get my head around how btrfs works.
    I have been trying tools like mkinitcpio-btrfs (very poorly documented) and now, if I list the subvolumes I have in a certain volume (/), I see I have 4 subvolumes that were created while playing around.
    If I try to delete them with "sudo btrfs subvolume delete __active", for example, I get "ERROR: Error accessing '__active'".
    What am I doing wrong?
    Also, I cannot get the whole idea of the difference between snapshots and subvolumes. I mean, a snapshot should be a directory that saves the changes made on the fs, so if you want to roll back those changes, you just have to make btrfs "forget" the changes stored in that directory and move along, but I cannot grasp the subvolume thing...
    As btrfs is quite experimental and the wikis are still not noob-proof, I'd appreciate it if someone gave me a hand trying to understand these concepts...
    For the moment, I'm just using it on a test computer and on a personal laptop, with no fear of data loss.
    Any help is welcome
    Thanks!
    Last edited by jasonwryan (2013-07-19 23:13:27)

    I honestly think it is probably a better idea not to use mkinitcpio-btrfs. As mentioned above, it is poorly documented, and for me it has never worked right (if at all). It is an unofficial AUR package, and unfortunately our wiki still seems to give the false impression that using this package is the way to use btrfs with Arch.
    The way I have my system set up is that in subvolid=0 (the root of the btrfs filesystem) I have a rootfs subvolume and a home subvolume (there are others, but these are what primarily make up my system).  So in my fstab, I basically have two nearly identical lines, but one has no subvol specified and is mounted at /, and the other has 'subvol=home' mounted at /home. 
    So in order to make it so that I can change the root filesystem as I please, instead of having the / fstab entry specify the subvolume, I put it in the kernel command line.  That is, I have 'rootflags=subvol=rootfs' in the kernel command line.  So if I want to change it, I simply change the path to one of the snapshots. 
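    (A minimal sketch of that layout - the UUIDs and subvolume names here are illustrative placeholders, not the actual values from this system:

    # /etc/fstab: the root subvolume is chosen by the kernel command line, home is pinned here
    UUID=xxxxxxxx  /      btrfs  defaults              0  0
    UUID=xxxxxxxx  /home  btrfs  defaults,subvol=home  0  0

    # kernel command line, e.g. in the bootloader entry:
    root=UUID=xxxxxxxx rootflags=subvol=rootfs

    Pointing rootflags=subvol= at a snapshot then boots that snapshot without touching fstab.)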
    Just remember that if you are one who likes a custom kernel, it is likely that you will have to have an initramfs no matter what you compile into your kernel. For one thing, the kernel has no mechanism for scanning for multiple-device btrfs filesystems. But also, I have read that the kernel itself cannot handle the rootflags kernel command line argument.
    Oracle Linux does something interesting with their default setup. They are not a rolling release, so this probably wouldn't work so well in Arch Linux, but they actually install the root filesystem (I think it is actually done to subvolid=0) and then, after installation of the packages, a snapshot of the root filesystem is made, and the system is set up to boot off of that snapshot. So it is almost like having an overlayfs on OpenWrt. There is always a copy of the original system, and any changes being made are done "on top" of the original. So in the event of an emergency, you can always get back to the original working state.
    If you put your root filesystem on something other than the root of the btrfs filesystem (which you should, as it makes the whole setup much more flexible), then you should also set up a mountpoint somewhere to give administrator access to the filesystem from subvolid=0. For example, I have an autofs mountpoint at /var/lib/btrfs-root. I chose that spot because /var/lib is where devtools puts the clean chroot, so it seemed as reasonable a place as any.
    You should go to the btrfs wiki and peruse the stuff there... not our wiki, but the actual btrfs one, as our wiki is pretty sparse. There is not all that much content there (not like the Arch wiki), but it does cover the features pretty well. I mean, there is certainly quite a lot for being information on only a single filesystem, but it shouldn't take you too long to get through it. There are a few links to articles about midway down the front page. What really gave me a better grasp of getting started with btrfs were the ones titled "How I Got Started with the Btrfs Filesystem for Oracle Linux" and "How I Use the Advanced Capabilities of Btrfs".

  • Trying to understand BtrFS snapshot feature

    I'm trying to understand how the copy-on-write and Btrfs snapshot works.
    Following simple test:
    # cd /
    # touch testfile
    # ls --full-time testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 testfile
    Test 1:
    # btrfs subvol snapshot / /snap1
    # touch testfile
    # ls --full-time testfile /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:07:38.348932127 +0200 testfile
    Test 2:
    # btrfs subvol snapshot / /snap2
    # touch testfile
    # ls --full-time testfile /snap1/testfile /snap2/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:04:43.629620401 +0200 /snap1/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:07:38.348932127 +0200 /snap2/testfile
    -rw-r--r-- 1 root root 0 2012-10-15 12:09:21.769606369 +0200 testfile
    According to the above tests I'm concluding/questioning the following:
    1) Btrfs determines which snapshot maintains a logical copy and physically copies the file to the appropriate snapshot before it is modified.
    a) Does it copy the complete file or work on the block level?
    b) What happens if the file is very large, e.g. 100 GB and there is not enough space on disk to copy the file to the snapshot directory?
    c) Doesn't it have a huge negative impact on performance when a file needs to be copied before it can be altered?

    Hi, thanks for the answers!
    I guess calling it a "logical copy" was a bad choice. Would calling the initial snapshot a "hard link of a file system" be more appropriate?
    OK, so btrfs works at the block level. I've done some tests and can confirm what you said (see below).
    I find it interesting that although the snapshot maintains the "hard link" to the original copy - I guess a "before-block image" (?) - there really is no negative performance impact.
    How does this work? Perhaps it is not overwriting the existing file, but rather creating a new file? So the snapshot still has the "hard link" to the original file, hence nothing changed for the snapshot? Simply a new file was created, and that's what's showing in the current file system?
    It actually reminds me of the old VMS ODS filesystem, which used file versioning by adding a semicolon, e.g. text.txt;1. When modifying the file, the result would be text.txt;2 and so on. When listing or using the file without versions, it would simply show and use the last version. You could purge old versions if necessary. The file system was actually structured by records (RMS), similar to a database.
    [root@vm004 /]# df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda3 16G 2.3G 12G 17% /
    # time dd if=/dev/zero of=/testfile bs=8k count=1M
    1048576+0 records in
    1048576+0 records out
    8589934592 bytes (8.6 GB) copied, 45.3253 s, 190 MB/s
    Let's create a snapshot and overwrite the testfile
    # btrfs subvolume snapshot / /snap1
    # time dd if=/dev/zero of=/testfile bs=8k count=1M
    dd: writing `/testfile': No space left on device
    491105+0 records in
    491104+0 records out
    4023123968 bytes (4.0 GB) copied, 21.2399 s, 189 MB/s
    real     0m21.613s
    user     0m0.021s
    sys     0m3.325s
    So obviously there is not enough space to maintain both the original file and the snapshot file.
    Since I'm creating a completely new file, I guess that's to be expected.
    Let's try with a smaller file, and also check performance:
    # btrfs subvol delete /snap1
    Delete subvolume '//snap1'
    # time dd if=/dev/zero of=/testfile bs=8k count=500k
    512000+0 records in
    512000+0 records out
    4194304000 bytes (4.2 GB) copied, 21.7176 s, 193 MB/s
    real     0m21.726s
    user     0m0.024s
    sys     0m2.977s
    # time echo "This is a test to test the test" >> /testfile
    real     0m0.000s
    user     0m0.000s
    sys     0m0.000s
    # btrfs subvol snapshot / /snap1
    Create a snapshot of '/' in '//snap1'
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505736 8221432 45% /
    # time echo "This is a test to test the test" >> /testfile
    real     0m0.000s
    user     0m0.000s
    sys     0m0.000s
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505780 8221428 45% /
    # btrfs subvol delete /snap1
    Delete subvolume '//snap1'
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 6505740 8221428 45% /
    The snapshot occupied 40k
    # btrfs subvol snapshot / /snap1
    Create a snapshot of '/' in '//snap1'
    # time dd if=/dev/zero of=/testfile bs=8k count=500k
    512000+0 records in
    512000+0 records out
    4194304000 bytes (4.2 GB) copied, 21.3818 s, 196 MB/s
    real     0m21.754s
    user     0m0.019s
    sys     0m3.322s
    # df -k /
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda3 16611328 10612756 4125428 73% /
    There was no performance impact, although the space occupied doubled.

  • Trying to understand problems that occur when redistributing between two OSPF processes

    Hi all, I'm currently brushing up on my OSPF and trying to understand the problems that can occur when redistributing between two OSPF processes. I have read and understand (I think!) the issues caused by the fact that the same route submitted by two different OSPF processes may not necessarily follow the OSPF rules that one would expect - for example, OSPF preferring intra-area routes to inter-area routes to external routes, but only within the same process. So, if the same route is submitted from two different processes, that rule goes out the window.
    But I'm having some difficulty getting my head around the idea of setting the administrative distance lower in one OSPF process to prefer one domain over the other. I just can't quite follow the example described in this document:
    http://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/4170-ospfprocesses.html#twored
    Specifically, in figure 4 where two external networks - external network "N" originating in OSPF domain 1, and external network "M" originating in OSPF domain 2 - are redistributed via two ASBRs. The explanation states:
    This sequence of events could occur:
    Router A (Router B) redistributes M into Domain 1, and external M will reach Router B (Router A).
    Because the administrative distance of Domain 1 is lower than Domain 2, Router A (Router B) will install M through Domain 1 and will set to maxage its previous originated LSA (event 1) into Domain 1.
    Because M has been set to maxage in Domain 2, Router A (Router B) will install M through Domain 2 and, therefore, will redistribute M into Domain 2.
    Same as event 1.
    I can't quite work my way through this. I guess it must have something to do with the redistribution of "M" from domain 2 into domain 1 being learned by both ASBRs due to the lower administrative distance assigned to external routes in domain 1, and the original routes through domain 2 being deleted, but then I can't follow the rest of the description. And I can't understand why this would be a problem for network "M" in OSPF domain 2, but NOT for network "N" in OSPF domain 1.
    Any explanation gratefully received!
    Thanks, Graham

    Hello.
    You are right - whenever A and B learn about "M" from Domain 2, they craft an LSA for Domain 1 and inject it simultaneously. They learn each other's LSAs simultaneously and withdraw (set the timer to 3600) their previously originated LSAs. And it might flap infinitely.
    If they don't learn the LSA simultaneously (let's say that A is much faster than B), then there will be no flaps, but B would learn all Domain 2 routes (not just the redistributed ones) via Domain 1.
    And later you will observe a routing loop (when you stop advertising M from D): A knows "M" from Domain 2 and injects it into Domain 1, B knows it from A via Domain 1 and injects it into Domain 2... so "M" stays in the routing tables due to the mutual redistribution.
    You don't have a similar (flap) issue with network "N", because the admin distance is lower for Domain 1, so both routers would never prefer OSPF via Domain 2! But while having no issue with route flaps, you will still observe a routing loop if you stop advertising "N" from C.
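
    (For reference, the administrative-distance knob being discussed is set per OSPF process; a rough IOS sketch, with values chosen only for illustration - check the exact syntax on your platform:

    router ospf 1
     distance ospf external 100
    !
    router ospf 2
     distance ospf external 120

    With this, a route to the same external prefix learned from both processes is installed from process 1, which is exactly what sets up the maxage/re-redistribution sequence described above for "M".)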

  • Trying to understand my Macbook Storage Information

    Hi
    Apologies if this is a newbie question, but I just got my MacBook Pro a couple of days ago and am still trying to get to grips with it.
    Having switched over from Windows, I reviewed the storage info tab to understand how much of my 500 GB hard drive I had used up, and received the following info:
    54.5 GB Music
    62.4 GB Movies
    33.08 GB Photos
    4.61 GB Apps
    44.43 GB Backups
    21.31 GB Other
    I suppose my questions are:
    Why do I have 44.43 GB of backup files, as I have set up Time Capsule and my understanding was that all the backups would be stored there?
    Why has iMovie massively inflated the size of my video files (e.g. I imported a 16 GB SD card and the files are showing as 32 GB on my MacBook hard drive)?
    What constitutes 'Other'?
    Again, apologies if these are stupid questions, but all help is appreciated!
    Billy

    I have the same questions as you, but I don't get your answer. Can you help me? I'm also wondering if the backup files are essential, since I already have an external hard drive where I put all my files. Thank you!

  • Trying to understand how pics are stored

    I'm new to the Mac (iMac Core i5 27") and trying to understand how pictures are stored. I've searched and found some posts that are relevant, but I really wanted to try and confirm a couple of things that still aren't clear to me.
    First question is: Are the pictures in Events, Photos and Albums all the same photos?
    Second question: If I import pictures from my external drive into iPhoto, are they stored on the iMac hard drive in the Pictures folder, and just viewed in iPhoto?
    Thanks!

    First question is: Are the pictures in Events, Photos and Albums all the same photos?
    Yes. iPhoto works on a Library basis. Every photo is in the Library. Events and Photos are both views of the Library. Albums reference photos in the Library. A photo can be in many Albums and use no extra disk space.
    If I import pictures from my external drive into iPhoto, are they stored on the iMac hard drive in the Pictures folder, and just viewed in iPhoto?
    Yes, by default. There are other options - for instance, iPhoto integrates with almost every app on your Mac, you can store the Library on an external drive, etc. etc. If you tell us what you'd like to achieve we may be able to help you.
    Regards
    TD

  • Trying to Understand Color Management

    The title should have read, "Trying to Understand Color Management: ProPhoto RGB vs. Adobe RGB (1998), my monitor, a printer and everything in between." Actually I could not come up with a title short enough to describe my question, and even this one is not too good. Here goes: the more I read about color management, the more I understand, but also the more I get confused, so I thought the best way for me to understand is perhaps to ask the question my way, for my situation.
    I do not own an expensive monitor; I'd say middle of the road. It is not calibrated by hardware or any sophisticated method. I use simple software and that's it. As for my printer, it isn't even a proper photo printer. My editing of photos is mainly for myself - people either view my photos on the net or on my monitor. At times I print photos on my printer, and at times I print them at a print shop. My philosophy is this: I am aware that what I see on my monitor may not look the same on someone else's monitor, and though I would definitely like it if that were possible, it doesn't bother me that much. What I do care about is for my photos to come close enough to what I want them to be in print - in other words, when the time comes, to get the best colors possible from a print. Note here that I am not even that concerned with color accuracy (my monitor colors equalling print colors, since I know I would need a much better, calibrated monitor to compare accurately), but rather with color detail. What concerns me is that, come the day when I do need to make a good print (or can afford a good monitor/printer), I have as much to work with as possible. This leads me to think that working in ProPhoto RGB is the best method, and then scaling down according to needs (scaling down for web viewing, for example). So I thought that was the solution, but elsewhere I read that using ProPhoto RGB with a non-pro monitor like mine may actually work against me; hence me getting confused, not understanding why this would be so, and me coming here. My goal, my objective, is this: should I one day want to print large images to present to a gallery, or create a book of my own, then I want my photos at that point in time to be the best they can be - the present doesn't worry me much. Do I make any sense?
    BTW if it matters any I have CS6.

    To all of you, thanks. First off, yes, I have now begun shooting in RAW. As to my future being secure because of doing so, let me just say that once I work on a photo I don't like the idea of going back to the original, since hours may have been spent working on it, and once that's done the original raw is deleted - a TIFF or PSD remains. As to "You're using way too much club for your hole right now": I loved reading this sentence :-) You wanna elaborate? As to the rest, monitor/printer, here's the story: I move around a lot, and I mean a lot; in other words I may be here for 6 months, then move, and 6 months later move again. What this means is that a printer does not follow me; at times even my monitor will not follow me, so no printer calibration is ever taken into consideration, but yes, I have used software monitor calibration. Having said this, I must admit that time and again I have not seen any really noticeable difference (yes I have, but only ever so slight) after calibrating a monitor. (As mentioned, my monitors, because of my moving, are usually middle of the road and limited; one thing I know is that 32 bits per pixel is a good thing.) As to "At this point... you... really don't understand what you are doing": you are correct - absolutely - that is why I mentioned me doing a lot of reading etc. etc. Thanks for your link, btw.
    Among the things I am reading are "Color Confidence: The Digital Photographer's Guide to Color Management", "Color Management for Photographers: Hands-on Techniques for Photoshop Users", "Mastering Digital Printing" (Digital Process and Print series) and "Real World Color Management: Industrial-Strength Production Techniques". And just to show you how deep my ignorance still is: what did you mean by 'non-profiled display', or better still, how does one profile a display?

  • Trying to understand PersistenceContextType.EXTENDED

    Hello,
    I'm trying to understand the PersistenceContextType.EXTENDED setting. It is my understanding that when I put this in my class, it means that the container managed transaction will span multiple method calls.
    I have a stateful bean:
    @Stateful(name = "StatefulContainerSessionEJB")
    public class StatefulContainerSessionEJBBean implements StatefulContainerSessionEJB, StatefulContainerSessionEJBLocal {

        @PersistenceContext(unitName = "Model",
                            type = PersistenceContextType.EXTENDED)
        private EntityManager em;

        // implied REQUIRED transaction
        public void createAddress() {
            try {
                // create a new address
                Address addr = new Address();
                // ... set addr fields
                System.out.println("Persist addr at: " + addr.getStreet());
                em.persist(addr);
                System.out.println("Persist of addr complete");
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            // transaction should still be alive here
        }

        @Remove // this is what causes the transaction to end
        public void cancelTransaction() {
            System.out.println("Transaction rolledback");
            em.getTransaction().setRollbackOnly();
        }
    }
    And in my main code I have:
    StatefulContainerSessionEJB statefulContainerSessionEJB;

    public String createAddressThruStateful() {
        System.out.println("Create Address Thru Stateful Button hit....");
        try {
            final Context context = getInitialContext();
            statefulContainerSessionEJB = (StatefulContainerSessionEJB) context.lookup("StatefulContainerSessionEJB");
            statefulContainerSessionEJB.createAddress();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }

    public String cancelPartTimeTransaction() {
        System.out.println("Cancel Address Thru Stateful Button hit....");
        try {
            statefulContainerSessionEJB.cancelTransaction();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return null;
    }
    First question: am I using the stateful bean correctly in main? I.e., it's a class-level variable and gets instantiated in the createAddressThruStateful call.
    My second question: when I run this code, my address gets committed to the DB at the end of the createAddress() call. I didn't expect this to happen.
    Please straighten me out.
    Thanks,
    Chris
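
    (For context, and not as the poster's code: EXTENDED extends the lifetime of the persistence context across calls on the stateful bean, but with container-managed transactions each business method still gets its own transaction, which commits when the method returns - consistent with the commit observed at the end of createAddress(). A common sketch for deferring the work, using a hypothetical CheckoutBean and the poster's Address entity:

    import javax.ejb.*;
    import javax.persistence.*;

    @Stateful
    public class CheckoutBean {

        @PersistenceContext(unitName = "Model", type = PersistenceContextType.EXTENDED)
        private EntityManager em;

        // No transaction here: persist() is only queued in the extended
        // persistence context; nothing is written to the database yet.
        @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
        public void addAddress(Address addr) {
            em.persist(addr);
        }

        // The extended context joins this method's transaction and
        // flushes everything when it commits.
        @Remove
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void complete() {
        }
    }

    Note also that em.getTransaction() is only legal on a RESOURCE_LOCAL, application-managed EntityManager; with container-managed transactions the usual way to mark rollback is SessionContext.setRollbackOnly().)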

    Please tell me what the INOUT in this code means: a parameter that supplies input as well as accepts output is an INOUT parameter.
