Bitwise inversion question.

Here is a question that I would like a bit of explaining. I got it right but I get lost in the logic & would really appreciate your help.
public class Test {
     public static void main(String[] args) {
          byte x = 3;
          x = (byte) ~x;
          System.out.println(x);
     }
}
I understand that x inverted goes from 00000011 to 11111100, which gives a negative number. But I get a little lost with 2's complement etc. Thanks for your help, it's always appreciated, Dave.

I thought I had asked a similar question a while back. I have just looked and found this http://forum.java.sun.com/thread.jsp?forum=54&thread=385651, so I will be able to work it out myself, hopefully.
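For anyone following along, here is a minimal sketch of what happens step by step (the masking with 0xFF to view the raw 8 bits is my own addition; Java sign-extends a byte to an int before printing its bits):

```java
public class InvertDemo {
    public static void main(String[] args) {
        byte x = 3;                        // bit pattern 00000011
        byte inverted = (byte) ~x;         // bit pattern 11111100

        // Mask with 0xFF to see just the low 8 bits, undoing sign extension
        System.out.println(Integer.toBinaryString(x & 0xFF));        // 11
        System.out.println(Integer.toBinaryString(inverted & 0xFF)); // 11111100

        // Interpreted as a signed (two's complement) byte, 11111100 is -4,
        // because ~x is always equal to -x - 1
        System.out.println(inverted);                                // -4
    }
}
```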

Similar Messages

  • Bitwise operator question...

    Hi,all.
I have been using Java for around 1 year, but I'm still not very comfortable with bitwise problems. I wish someone could give me a thorough explanation of these. Better yet, if you know how to use bitwise operators to solve real programming problems, please post here too. I am doing some testing now. I will post some code examples here later, so you may help me with that code too.
    Everyone have a great sunday night.

Integers in Java are represented by 32-bit signed values. In general you can have either 'signed' or 'unsigned' integer ranges; for a 16-bit value those ranges are:
signed: -32,768 to 32,767
unsigned: 0 to 65,535
For signed interpretations, the leftmost bit represents whether or not the value is negative (it is called the most significant bit, msb).
    1000 0000
    The '1' is the msb
    Steps to convert -14 to its binary representation (using 2s complement as explained by above post):
1. write the binary representation of '14' (ignore the negative sign for now)
    2. invert the values
    3. add 1
    1) 0000 1110
    2) 1111 0001
    3) 1111 0010
    As you can see, the msb is a '1', so therefore it represents a negative value
    Note: In binary addition,
    1 + 1 = 0 carry 1
    1 + 0 = 1
    0 + 0 = 0
    As for other applications of this, I find myself using bitwise calculations in graphical work.
As stated above (I love to reiterate others' words, because it's fun!!), assembly language programming is a fantastic way to become an expert in the art of bitwise manipulation (if you have the patience).
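The three steps above (write the magnitude, invert, add 1) can be checked in Java; this is a quick sketch, and the 0xFF masking to stay within 8 bits is my own:

```java
public class TwosComplement {
    public static void main(String[] args) {
        int magnitude = 14;                  // start from +14
        int step1 = magnitude & 0xFF;        // 0000 1110
        int step2 = ~step1 & 0xFF;           // 1111 0001  (invert the bits)
        int step3 = (step2 + 1) & 0xFF;      // 1111 0010  (add 1)

        System.out.println(Integer.toBinaryString(step3)); // 11110010

        // Reinterpreting that bit pattern as a signed byte gives -14:
        System.out.println((byte) step3);                  // -14
    }
}
```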

  • Strange Port behavior

    Hi there,
    I have a situation where I'm trying to run a subversion server using the built in svn protocol on my Mac Mini. I want to make the subversion repositories available to the internet. However, I cannot use my domain name to access the svn port.
    My router uses the Mini as the DMZ server so all ports get forwarded to the mini. My IP address which is mapped to my domain name, does hit apache and tomcat. They work fine.
    However, I cannot see port 3690 from my domain name. The svnserver IS listening though, because I see it in the netstat listing. AND I can hit the subversion server if I use the bonjour name for the machine BUT NOT if I use my domain name.
    This shows up with a simple port scan. For port 80 (or 8080), I can see the open port if I scan against: My Domain Name, the WAN IP of my router, the LAN IP of the router, or the bonjour name (cooper.local)
    On the other hand, a port scan for port 3690, shows an open port for: my LAN IP of the Router, and the Bonjour name. BUT NOT for either my Domain Name or my WAN IP.
    This seems like a mis-feature. Does anyone know how to make the port visible beyond my subnet?
    I've tried explicitly adding the port to the router, without any luck. I don't think the problem is in the router. It seems to be special behavior for this one particular service.
    Thanks,
    John Schank

    Hey Matt,
    Thanks for the suggestion...
    I tried a different port, and still the same behavior. (I tried 8888, as well as a port < 1000) No difference. This is truly magical, somehow, it seems, that the way subversion creates a listener socket is immune to port forwarding. Perhaps I should ask the inverse question? Suppose I wanted to have a service that could not be used via a port forwarding router, only on the local subnet... How would I do that? Maybe the answer to that question would be a clue.
    Thanks again,
    John

  • Applications window scrolls to top

    This has been annoying me for some time, perhaps back to 10.4 (I'm using 10.4.7). When I view any folder on my system drive -- for example, /Applications or /Library -- the window is scrolled to the top, regardless of where I left it when last looking at it.
    When I view any folder on my second internal drive -- for example, ~/Desktop or ~/Documents (yes, I've moved my Home folder) -- the window retains the position it was at when I last looked at it.
    How might I correct the behavior on the first drive, so that the views retain their previous positions?
    PowerMac G4 (Sawtooth)   Mac OS X (10.4.7)  

This is incredible. I have the exact inverse question. I have this annoying thing where one window goes back to where I scrolled to, and the rest start at the top. It's my Applications folder that stays where I left off, and I want it to start at the top of the window every time I click. Anyone, how do we fix this problem?

  • Will the MBA Magsafe adapter/power supply work on an MBP?

Similar, but inverse, question to this thread: http://discussions.apple.com/thread.jspa?messageID=6167022
    From looking at several high res photos online, I can see that the MBA angled plug may be physically identical to that of the MB and MBP. So, mechanically it's all good. What about electrically?
    I don't really care about needing to charge the battery at the same rate as the 85w supply. I just want something smaller for use during class (3-4 hours); I usually charge my Macbook Pro overnight anyway (or keep it plugged in).
    Thoughts? I suppose I'll have to go to an Apple Store with my 'Pro and figure it out in person otherwise.

I'd be VERY wary of doing this. No problem using an MBP charger (with its higher possible capacity) on the MBA, but using a lower capacity power adaptor might result in overheating and consequent damage to the adaptor, with the further slight possibility of a short resulting in the high voltage side making its way to your MBP, with disastrous results.
    Pulling out the battery when using the MBA adaptor might let you just about get away with running it safely, but I personally wouldn't risk it. I don't even like to use our 60W MacBook chargers on the MBP.
    (I see in the other thread that some people are suggesting that the 85W charger might "overpower" the other computers. As ricktoronto has suggested there, this simply isn't so. The excess ability simply won't be used.
    Using a low capacity charger / supply, like the MBA one, in a situation of high demand , on the other hand, can result in real problems, Overload can result in internal component failure or fire in the power supply. My bet is that Apple products will be pretty well protected against such things though).
    Cheers
    Rod
    Message was edited by: Rod Hagen

  • Bitwise Operators .. stupid question ?

I have a HashMap which contains Strings. The Strings are actually numbers.
I want to use the bitwise operators and, or, xor on them... How do I do it?
I want to say result = string1 and string2... I am not sure how stupid this sounds, but I am sure I am too close to it...

    Example answer to your original question, wanna:
    Hashtable<Object, String> ht = new Hashtable<Object, String>();
    ht.put("twenty eight", "28");
    ht.put("five", "5");
    ht.put("twenty", "20");
System.out.println(Integer.valueOf(ht.get("twenty eight")) & Integer.valueOf(ht.get("five")) & Integer.valueOf(ht.get("twenty")));
Which, of course, prints "4" to standard out. You're best off using a Hashtable<Object, Integer>, though.
    In answer to your second question,
~ is indeed the complement operator. E.g.:
    System.out.println(1 + ", " + (~1));
    prints "1, -2", as you might expect.

  • Question for inverse routine in the transformation for a virtual infocube

    Hello,
I have a virtual InfoCube with its transformation, and I want to know if there is a way to tell, in the inverse routine or in the start routine, whether the user is selecting a filter value in the query.
When the user wants to select a value for a characteristic, the transformation is executed in order to obtain the InfoProvider values. At that moment the inverse routine is executed, and I want to know if there is a way to identify whether the user is making a drilldown or a filter.
In the start routine of the transformation I have some ABAP code with a very expensive execution time that obtains some key figure values, but I don't want that ABAP code to be executed when the system runs the transformation just to obtain the values of one characteristic for a filter.
    Thanks in advance for your help. 
    Best regards.
    Ignacio

    Hi Ashish,
Your document is very useful; in fact I used it to write my ABAP code, and I'm now getting the values of the query selection conditions. But what I need happens after the report is shown the first time.
When the report data is shown, the user has the possibility to filter values for one characteristic. If the characteristic has the option for filter value selection during query execution set to "Only Values in InfoProvider", the inverse routine is executed to obtain the values that are contained in the InfoProvider, and then the system shows all the possible values. What I need is to identify this event, because in this case I don't want to make some calculations in the transformation that are very expensive in time.
    Thanks a lot for your information.
    Best regards
    Ignacio

  • Bitwise in SQL Question

    I need to run a query that returns 3 values.
1.     All numbers between 0 and 65535 where the Most Significant Byte (MSB) is greater than the Least Significant Byte (LSB). As AAA
    2.     MSB * LSB as BBB
    3.     Dense Rank order by BBB, AAA
    Any help on this would be great!

    Or, if you want to work more "Bitwise" then how about using the built in bit operator functions...
with t as (select rownum - 1 num from dual connect by level <= 65536)
    ,t1 as (select num, bitand(num, 65280)/256 as msb, bitand(num, 255) as lsb from t)
select AAA, BBB, msb, lsb, dense_rank() over (order by BBB, AAA) as dr
from (
  select num as AAA
        ,msb * lsb as BBB
        ,msb, lsb
  from t1
  where msb > lsb
)
order by dr
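For comparison, the same MSB/LSB extraction can be done in any language with a mask and a shift, as BITAND does in the SQL above. A quick Java sketch (the sample value is my own):

```java
public class MsbLsb {
    public static void main(String[] args) {
        // Split a 16-bit value into its most and least significant bytes:
        // mask the high byte with 0xFF00 and shift it down, mask the low
        // byte with 0x00FF. Equivalent to BITAND(num, 65280)/256 and
        // BITAND(num, 255) in the Oracle query.
        int num = 0x1234;                // 4660
        int msb = (num & 0xFF00) >> 8;   // 0x12 = 18
        int lsb = num & 0x00FF;          // 0x34 = 52
        System.out.println(msb + " " + lsb);  // 18 52
    }
}
```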

  • Fast Inverse Square Root

I expect no replies to this thread - because there are no answers, but I want to raise awareness of a facility of other languages that is missing in Flash, one that would really help 3D and games to be built in Flash.
Below is an optimisation of the Quake 3 inverse square root hack. What does it do? Well, in games and 3D we use a lot of vector math, and that involves calculating normals. To calculate a normal you divide a vector's components by its length, which you obtain by Pythagoras' theorem. But of course division is slow - if only there were a way we could get 1.0/Math.sqrt, so we could just multiply the vector and speed it up.
Which is what the code below does in Java / Processing. It runs at the same speed as Math.sqrt, but for not having to divide, that's still a massive speed increase.
But we can't do this in Flash, because there isn't a way to convert a Number/float into its integer-bits representation. Please could everyone whinge at Adobe about this and give us access to a very powerful tool. Even the guys working on Papervision are having trouble with this issue.
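The original post's code did not survive in this archive; a standard Java rendering of the Quake 3 trick being described looks roughly like this (a reconstruction, not the poster's exact code). Note how Float.floatToIntBits / intBitsToFloat provide exactly the float-to-integer-bits conversion the poster says Flash lacks:

```java
public class FastInvSqrt {
    // The famous "0x5f3759df" hack: reinterpret the float's bits as an int,
    // shift and subtract to get a rough first guess at 1/sqrt(x), then
    // refine it with a single Newton-Raphson step.
    static float invSqrt(float x) {
        float xhalf = 0.5f * x;
        int i = Float.floatToIntBits(x);     // reinterpret float as raw bits
        i = 0x5f3759df - (i >> 1);           // magic initial estimate
        float y = Float.intBitsToFloat(i);   // back to a float
        y = y * (1.5f - xhalf * y * y);      // one Newton-Raphson iteration
        return y;
    }

    public static void main(String[] args) {
        System.out.println(invSqrt(4.0f));        // approximately 0.5
        System.out.println(1.0 / Math.sqrt(4.0)); // exact: 0.5
    }
}
```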

That's just an implementation of Newton's method for finding the zeros of a differentiable function. For a given x whose inverse square root you want to find, the function is:
f(y) = 1/(y*y) - x
1. You can find the positive zero of f using Newton's method.
2. You only need to consider values of x between 1 and 10, because you can rewrite x = 10^E * m, where 1 <= m < 10.
3. Then inverseRt(x) = 10^(-E/2) * inverseRt(m).
4. You don't have to divide E by 2; you can use a bitwise shift to the right by 1.
5. You don't have to multiply 10^(-E/2) by inverseRt(m); you can use a decimal shift of inverseRt(m).
6. You're left to find the positive zero of f(y) = 1/(y*y) - m, 1 <= m < 10.
And at this point I realized what is, I believe, a much faster way to find inverse roots: use a look-up table. You only need a table of inverse roots for numbers m, 1 < m <= 10.
For a given x = 10^E * m = 10^e * 10^(E-e) * m, where e is the largest even integer less than or equal to E, you need to look up at most two inverse roots, then perform one multiplication and one decimal shift:
inverseRt(x) = 10^(-e/2) * inverseRt(10) * inverseRt(m), if E - e = 1, and
inverseRt(x) = 10^(-e/2) * inverseRt(m), if E - e = 0.
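The Newton iteration described above can be written out in a few lines. Applying the update rule y = y - f(y)/f'(y) to f(y) = 1/(y*y) - x simplifies to y = y * (1.5 - 0.5*x*y*y), which is exactly the refinement step the Quake hack uses. A sketch (the starting guess and iteration count are my own choices):

```java
public class NewtonInvSqrt {
    // Newton's method on f(y) = 1/(y*y) - x, whose positive zero is
    // 1/sqrt(x). With f'(y) = -2/(y*y*y), the Newton update
    // y - f(y)/f'(y) algebraically reduces to y * (1.5 - 0.5*x*y*y).
    static double invSqrt(double x, double guess, int iterations) {
        double y = guess;
        for (int i = 0; i < iterations; i++) {
            y = y * (1.5 - 0.5 * x * y * y);
        }
        return y;
    }

    public static void main(String[] args) {
        // Converges quadratically once close; a small guess is safe here
        // (the iteration diverges if x*y*y ever exceeds 3).
        System.out.println(invSqrt(9.0, 0.5, 6));  // approaches 0.3333...
        System.out.println(1.0 / Math.sqrt(9.0));  // 0.3333...
    }
}
```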

  • To RAID or not to RAID, that is the question

    People often ask: Should I raid my disks?
The question is simple; unfortunately the answer is not. So here I'm going to give you another guide to help you decide when a RAID array is advantageous and how to go about it. Notice that this guide also applies to SSDs, with the exception of the parts about mechanical failure.
 What is a RAID?
 RAID is the acronym for "Redundant Array of Inexpensive Disks". The concept originated at the University of California, Berkeley, in 1987 and was intended to create large storage capacity from smaller disks, without the need for the highly reliable disks that were very expensive at the time, often tenfold the price of smaller disks. Today prices of hard disks have fallen so much that it often is more attractive to buy a single 1 TB disk than two 500 GB disks. That is the reason that today RAID is often described as "Redundant Array of Independent Disks".
    The idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. Note that 'Spanning' is not in any way comparable to RAID, it is just a way, like inverse partitioning, to extend the base partition to use multiple disks, without changing the method of reading and writing to that extended partition.
     Why use a RAID?
     Now with these lower disks prices today, why would a video editor consider a raid array? There are two reasons:
    1. Redundancy (or security)
    2. Performance
    Notice that it can be a combination of both reasons, it is not an 'either/or' reason.
     Does a video editor need RAID?
    No, if the above two reasons, redundancy and performance are not relevant. Yes if either or both reasons are relevant.
    Re 1. Redundancy
    Every mechanical disk will eventually fail, sometimes on the first day of use, sometimes only after several years of usage. When that happens, all data on that disk are lost and the only solution is to get a new disk and recreate the data from a backup (if you have one) or through tedious and time-consuming work. If that does not bother you and you can spare the time to recreate the data that were lost, then redundancy is not an issue for you. Keep in mind that disk failures often occur at inconvenient moments, on a weekend when the shops are closed and you can't get a replacement disk, or when you have a tight deadline.
    Re 2. Performance
Opponents of RAID will often say that any modern disk is fast enough for video editing, and they are right, but only to a certain extent. As fill rates of disks go up, performance goes down, sometimes by 50%. As the number of activities on the disk goes up, like accessing (reading or writing) the pagefile, media cache, previews, media, project file and output file, performance goes down the drain. The more tracks you have in your project, the more strain is put on your disk: 10 tracks require 10 times the bandwidth of a single track. The more applications you have open, the more your pagefile is used. This is especially apparent on systems with limited memory.
    The following chart shows how fill rates on a single disk will impact performance:
    Remember that I said previously the idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. That means a RAID will not fill up as fast as a single disk and not experience the same performance degradation.
    RAID basics
     Now that we have established the reasons why people may consider RAID, let's have a look at some of the basics.
    Single or Multiple? 
    There are three methods to configure a RAID array: mirroring, striping and parity check. These are called levels and levels are subdivided in single or multiple levels, depending on the method used. A single level RAID0 is striping only and a multiple level RAID15 is a combination of mirroring (1) and parity check (5). Multiple levels are designated by combining two single levels, like a multiple RAID10, which is a combination of single level RAID0 with a single level RAID1.
    Hardware or Software? 
    The difference is quite simple: hardware RAID controllers have their own processor and usually their own cache. Software RAID controllers use the CPU and the RAM on the motherboard. Hardware controllers are faster but also more expensive. For RAID levels without parity check like Raid0, Raid1 and Raid10 software controllers are quite good with a fast PC.
    The common Promise and Highpoint cards are all software controllers that (mis)use the CPU and RAM memory. Real hardware RAID controllers all use their own IOP (I/O Processor) and cache (ever wondered why these hardware controllers are expensive?).
    There are two kinds of software RAID's. One is controlled by the BIOS/drivers (like Promise/Highpoint) and the other is solely OS dependent. The first kind can be booted from, the second one can only be accessed after the OS has started. In performance terms they do not differ significantly.
    For the technically inclined: Cluster size, Block size and Chunk size
     In short: Cluster size applies to the partition and Block or Stripe size applies to the array.
With a cluster size of 4 KB, data are distributed across the partition in 4 KB parts. Suppose you have a 10 KB file: three clusters will be occupied, 4 KB - 4 KB - 2 KB. The remaining 2 KB of the last cluster is called slack space and cannot be used by other files. With a block size (stripe) of 64 KB, data are distributed across the array disks in 64 KB parts. Suppose you have a 200 KB file: the first 64 KB is located on disk A, the second 64 KB on disk B, the third 64 KB on disk C and the remaining 8 KB on disk D. Here there is no slack space, because the block size is subdivided into clusters. When working with audio/video material a large block size is faster than a smaller block size; when working with smaller files a smaller block size is preferred.
Sometimes you have an option to set 'Chunk size', depending on the controller. It is the minimal size of a data request from the controller to a disk in the array, and it is only useful when striping is used. Suppose you have a block size of 16 KB and you want to read a 1 MB file. The controller needs to read 64 blocks of 16 KB. With a chunk size of 32 KB, the first two blocks will be read from the first disk, the next two blocks from the next disk, and so on. If the chunk size is 128 KB, the first 8 blocks will be read from the first disk, the next 8 blocks from the second disk, etcetera. Smaller chunks are advisable with smaller files; larger chunks are better for larger (audio/video) files.
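As a quick illustration of the striping arithmetic above, here is a small sketch computing which disk a given byte offset lands on in a round-robin striped array (the disk numbering and helper are my own simplification):

```java
public class StripeLayout {
    // For a given byte offset, compute which disk of a striped array holds
    // it, assuming fixed-size blocks laid out round-robin across the disks.
    static int diskFor(long offsetBytes, int blockSizeKB, int disks) {
        long blockIndex = offsetBytes / (blockSizeKB * 1024L);
        return (int) (blockIndex % disks);
    }

    public static void main(String[] args) {
        // The 200 KB file from the text, on a 4-disk array with 64 KB
        // blocks: bytes 0-64K land on disk 0, 64-128K on disk 1,
        // 128-192K on disk 2, and the final 8 KB on disk 3.
        System.out.println(diskFor(0, 64, 4));          // 0
        System.out.println(diskFor(100 * 1024, 64, 4)); // 1
        System.out.println(diskFor(195 * 1024, 64, 4)); // 3
    }
}
```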
    RAID Levels
     For a full explanation of various RAID levels, look here: http://www.acnc.com/04_01_00/html
    What are the benefits of each RAID level for video editing and what are the risks and benefits of each level to help you achieve better redundancy and/or better performance? I will try to summarize them below.
    RAID0
The Band AID of RAID. There is no redundancy! The risk of losing all data is a multiple of the number of disks in the array: a 2-disk array carries twice the risk of a single disk, and an X-disk array carries X times the risk of losing it all.
A RAID0 is perfectly OK for data that you will not worry about losing, like pagefile, media cache, previews or rendered files. It may be a hassle if you have media files on it, because it requires recapturing, but it's not the end of the world. It will be disastrous for project files.
Performance-wise, a RAID0 is almost X times as fast as a single disk, X being the number of disks in the array.
    RAID1
     The RAID level for the paranoid. It gives no performance gain whatsoever. It gives you redundancy, at the cost of a disk. If you are meticulous about backups and make them all the time, RAID1 may be a better solution, because you can never forget to make a backup, you can restore instantly. Remember backups require a disk as well. This RAID1 level can only be advised for the C drive IMO if you do not have any trust in the reliability of modern-day disks. It is of no use for video editing.
    RAID3
The RAID level for video editors. There is redundancy! There is only a small performance hit when rebuilding an array after a disk failure, thanks to the dedicated parity disk. There is quite a performance gain achievable, but the drawback is that it requires a hardware controller from Areca. You could do worse, but apart from being the Rolls-Royce amongst hardware controllers, it is expensive like the car.
Performance-wise it will achieve around 85% × (X-1) on reads and 60% × (X-1) on writes relative to a single disk, with X being the number of disks in the array. So with a 6-disk array in RAID3, you get around 0.85 × (6-1) = 425% of the performance of a single disk on reads and 300% on writes.
    RAID5 & RAID6
     The RAID level for non-video applications with distributed parity. This makes for a somewhat severe hit in performance in case of a disk failure. The double parity in RAID6 makes it ideal for NAS applications.
    The performance gain is slightly lower than with a RAID3. RAID6 requires a dedicated hardware controller, RAID5 can be run on a software controller but the CPU overhead negates to a large extent the performance gain.
    RAID10
     The RAID level for paranoids in a hurry. It delivers the same redundancy as RAID 1, but since it is a multilevel RAID, combined with a RAID0, delivers twice the performance of a single disk at four times the cost, apart from the controller. The main advantage is that you can have two disk failures at the same time without losing data, but what are the chances of that happening?
    RAID30, 50 & 60
     Just striped arrays of RAID 3, 5 or 6 which doubles the speed while keeping redundancy at the same level.
    EXTRAS
RAID level 0 is striping, RAID level 1 is mirroring, and RAID levels 3, 5 & 6 are parity check methods. For parity check methods, dedicated controllers offer the possibility of defining a hot-spare disk: an extra disk that does not belong to the array but is instantly available to take over from a failed disk in the array. Suppose you have a 6-disk RAID3 array with a single hot-spare disk, and assume one disk fails. What happens? The data on the failed disk is reconstructed onto the hot-spare in the background, while you keep working with negligible impact on performance. In mere minutes your system is back at the performance level you were at before the disk failure. Sometime later you take out the failed drive, replace it with a new drive, and define that as the new hot-spare.
    As stated earlier, dedicated hardware controllers use their own IOP and their own cache instead of using the memory on the mobo. The larger the cache on the controller, the better the performance, but the main benefits of cache memory are when handling random R+W activities. For sequential activities, like with video editing it does not pay to use more than 2 GB of cache maximum.
    REDUNDANCY(or security)
    Not using RAID entails the risk of a drive failing and losing all data. The same applies to using RAID0 (or better said AID0), only multiplied by the number of disks in the array.
    RAID1 or 10 overcomes that risk by offering a mirror, an instant backup in case of failure at high cost.
    RAID3, 5 or 6 offers protection for disk failure by reconstructing the lost data in the background (1 disk for RAID3 & 5, 2 disks for RAID6) while continuing your work. This is even enhanced by the use of hot-spares (a double assurance).
    PERFORMANCE
RAID0 offers the best performance increase over a single disk, followed by RAID3, then RAID5 and finally RAID6. RAID1 does not offer any performance increase.
    Hardware RAID controllers offer the best performance and the best options (like adjustable block/stripe size and hot-spares), but they are costly.
     SUMMARY
     If you only have 3 or 4 disks in total, forget about RAID. Set them up as individual disks, or the better alternative, get more disks for better redundancy and better performance. What does it cost today to buy an extra disk when compared to the downtime you have when a single disk fails?
    If you have room for at least 4 or more disks, apart from the OS disk, consider a RAID3 if you have an Areca controller, otherwise consider a RAID5.
    If you have even more disks, consider a multilevel array by striping a parity check array to form a RAID30, 50 or 60.
    If you can afford the investment get an Areca controller with battery backup module (BBM) and 2 GB of cache. Avoid as much as possible the use of software raids, especially under Windows if you can.
    RAID, if properly configured will give you added redundancy (or security) to protect you from disk failure while you can continue working and will give you increased performance.
Look carefully at this chart to see what a properly configured RAID can do for performance, and compare it to the earlier single-disk chart to see the difference, while taking into consideration that you can have one disk (in each array) fail at the same time without data loss:
    Hope this helps in deciding whether RAID is worthwhile for you.
    WARNING: If you have a power outage without a UPS, all bets are off.
    A power outage can destroy the contents of all your disks if you don't have a proper UPS. A BBM may not be sufficient to help in that case.

    Harm,
    thanks for your comment.
    Your understanding  was absolutely right.
Sorry, my mistake: it's a QNAP 639 PRO, populated with 5 × 1 TB drives; one bay is empty.
So for my understanding, in my configuration you suggest NOT to use RAID0. I'm not willing to have more drives in my workstation, because when my projects are finished, I archive to the QNAP or to another external drive.
My only intention is to have as much speed and as much performance as possible while developing a project.
BTW, I also use the QNAP as a media center in combination with a Sony PS3 to play the encoded files.
For my final understanding:
C: I understand
D: I understand
E and F: does it mean that when I create a project on E, all my captured and project-used MPEG files should be situated on F? Or which media on F do you mean?
Following your suggestions I want to rebuild Harm's Best Vista64 Benchmark comp to reach maximum speed and performance. Can I use in general those hardware components (except so many HD drives and except the Areca RAID controller) in my drive configuration C to F? Or would you suggest some changes in my situation?

  • Question re: Filesystems and Disks

    Hi all,
    I am working with VirtualBox running Solaris 64 on my Core i7 w/ 6GB RAM in Windows 7 right now trying to get some basics under the belt and I'm a bit confused so I'm trying to validate some of the knowledge I think I have absorbed:
    At installation, the virtual machine was created with only one disk (Solaris64.vdi), so there was only one set of device files created in /dev/rdsk when devfsadm was run at boot (where/how it is called I'm uncertain). So now c1t0d0 and all of its partitions/slices are in /dev/rdsk if I am not mistaken about how new devices are detected/added to the system.
I have since added a few hard disks to the configuration and rebooted the system. I would like to mount /export/home on its own device, so c1t1d0 exists and needs to have /export/home mounted on it, instead of the same device that runs the operating system (I eventually would like to get swap on its own disk as well; I think that implies moving /var and one other fs to its own disk, but that's unrelated at the moment).
    If I am not mistaken, /etc/mnttab and /etc/vfstab do not automatically get updated with new devices because there is no filesystem attached. A disk needs to be formatted with at least one partition, which is itself made up of 8 slices that can be allocated different amounts of storage. For a given partition, one and only one filesystem can be appended to a slice on that partition. The format utility uses the partition argument to print information about each of the slices and lets you modify how space is allocated to each slice on the disk. You have to mount filesystems to each slice manually later.
    A disk must be formatted using fdisk from format before filesystems can be mounted. I select to create a new partition and choose type 1=SOLARIS2 option utilizing 100% of the space on the disk for the partition and setting the partition to active. I then save the changes and exit to confirm the new partition. Using partition I can print and see:
    > print
    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)
    Part      Tag    Flag     Cylinders        Size            Blocks
    0 unassigned    wm       0               0         (0/0/0)           0
    1 unassigned    wm       0               0         (0/0/0)           0
    2     backup    wu       0 - 1301        9.97GB    (1302/0/0) 20916630
    3 unassigned    wm       0               0         (0/0/0)           0
    4 unassigned    wm       0               0         (0/0/0)           0
    5 unassigned    wm       0               0         (0/0/0)           0
    6 unassigned    wm       0               0         (0/0/0)           0
    7 unassigned    wm       0               0         (0/0/0)           0
    8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
9 unassigned    wm       0               0         (0/0/0)           0
And this is where things get fuzzy. I've seen articles that describe the disk as having as few as four slices, but mine was created with ten (0-9). This one already has a slice assigned to backup (not sure if this tag is tied to a system function or not) and another for boot (same, not sure). The two assigned slices have the flag wu; the rest, which are allocated no space, have wm. So I guess my question is: how can I best design this aspect of my system? Do I remove some of the slices (referred to as partitions by the utility) that are empty and just make one big slice, or do I need to keep the boot slice and reallocate it, or just rename backup and then unmount and re-mount /export/home?
Remember, my goal is to gain a practical understanding of how disks are formatted and addressed by the Solaris/UNIX environment from start to finish, and to mount /export/home on its own disk. If I am not mistaken, when I create new user accounts, I will still mount their fs at /export/home and need only to change where /export/home mounts.
    If anyone could direct me to some further reading, I'll be on docs.sun trying to find some of the answers, but so far the man page for format hasn't been much help for understanding the flags and how managing the slices in a partition affects the design/implementation of the attached filesystems. Thanks for reading this lengthy question as well :)
    -T

    Darren,
    If you are still out there, maybe you can clear something else up: I added an entry for the new device in /etc/vfstab and mounted it (it shows up in /etc/mnttab), and if I try to access the device with format, I get a message informing me the device has mounted partitions. However, when I run a df -h or a df -v I see some startling data:
    # df -v
    ... output omitted ...
    /mnt       /dev/dsk/c1t1d0        0 -1155515  1155515     0%
    # df -h
    ... output omitted ...
    /dev/dsk/c1t1d0s0        0K 16384E   1.1G     0%    /mnt
    As you can see, the use column of the above output shows -1155515 blocks used and the inverse free. Similarly, the second output shows the device has 16384E used with a size of 0K and 1.1G available. Additionally, when I try to print data for the device in format > partition, I see this:
    > print
    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)
    Part      Tag    Flag     Cylinders        Size            Blocks
    0 unassigned    wm       1 - 1250        9.58GB    (1250/0/0) 20081250
    1 unassigned    wm       0               0         (0/0/0)           0
    2     backup    wu       0 - 1301        9.97GB    (1302/0/0) 20916630
    3 unassigned    wm       0               0         (0/0/0)           0
    4 unassigned    wm       0               0         (0/0/0)           0
    5 unassigned    wm       0               0         (0/0/0)           0
    6 unassigned    wm       0               0         (0/0/0)           0
    7 unassigned    wm       0               0         (0/0/0)           0
    8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
    9 unassigned    wm       0               0         (0/0/0)           0
    Why does my device show that it is allocated cylinders 1-1250 for slice zero, but show 16384E for space? Did I miss something in the format process?

  • Remaining questions while evaluating JavaFX for a new project

    Dear forum members:
    Currently I am evaluating the possibilities of next-generation GUI technologies, such as JavaFX, Silverlight and Flash/Flex, for a new project. To get a basic understanding of JavaFX's concepts, I worked through the available online text and video tutorials, and all the treated topics seem quite obvious/comprehensible to me as long as one is confronted only with relatively static GUI component hierarchies. But, as a newbie, some questions concerning more dynamically defined GUIs (i.e., dynamic JFX scripting) still remain.
    Application scenario (exemplary):
    Say I want to create an "Online Shopping Application" that supports "ShopOwners" in dynamically defining the "Shop Model" structure, e.g. accepted visitor (client) categories, product categories and their products, pricing information, payment methods, etc.
    Then, based on the dynamically defined model, the shop owner should be able to design and layout the necessary forms, such as order forms, survey/feedback forms, etc. This should be done in "design mode", and there should also exist a possibility for him/her to preview the specification results in a "preview mode".
    Finally, the shop owner must be able to save the model and forms on the server side in a way that can be requested and run by the shopping app end users (the shop clients) via (another?) JavaFX frontend.
    The still remaining questions for this scenario are:
    1. Is JavaFX appropriate for creating this kind of application, especially when it comes to dynamic JFX scripting (and compilation) on the client side? (By now I'm not quite sure if this is really necessary for my plans!)
    2. Concerning the ShopOwner's GUI with its design and preview modes (and knowing that the latter mode will be the GUI version presented to the shop clients in another JFX module):
    Is it possible to dynamically build up a Scene Graph in a way that lets me handle and compile the corresponding JFX Script on the client side for previewing it? Or is a client-server roundtrip absolutely necessary?
    How could one persist this JFX Script on the server side? I.e., which intermediary format would be the most appropriate? => XML, JSON, JFX Script?
    3. Concerning the "Shop Model", would I optimally create JFX classes or even Java Beans to bind to?
    4. And finally: What would be your recommended way (software architecture) to fulfill this task in JavaFX?
    Do there already exist some JFX components (dynamic forms/survey authoring modules, etc.) that pursue a similar task and that I didn't find yet?
    As the clarification of the above-mentioned issues is very important for me, I hope that you more experienced users can help me by pointing me to a practicable approach.
    Thank you very much for any help and constructive tips in advance.
    Best regards
    Martin Meyers

    Q1: Do I optimally need 2 different custom components for each treated concept, or do I have just 1 component with 2 internal modes (design & preview/usage)?
    E.g., (a) a FormSpec widget composed of LabelSpec, TextBoxSpec, ChooseBoxSpec, ... widgets, each having their preview pendants Form, Label, TextBox, ChooseBox, etc.
    versus
    (b) only a Form widget composed of Label, TextBox, ChooseBox widgets, but all having a "design/preview execution mode".
    Closer to (b), I think, though each widget doesn't need to be modified to have design and preview modes. Instead, each widget can be wrapped within a Group to provide the design/preview functions without modifying the widget itself.
    The technique is as follows. Given a sequence of widgets (Nodes, really), for each widget, wrap it in a Group that contains that widget but with an overlay Rectangle in front of it. The Rectangle can be semi-transparent, or fully transparent if you prefer. (In the example below I've made it a semitransparent color to make its location obvious as well as to provide a highlight that signals design mode.) The overlay Rectangle is set up so that its dimensions will exactly track the dimensions (bounds) of the widget behind it. I've set blocksMouse to true so that when it's present, the overlay traps events and prevents interaction with the widget. There is a boolean variable previewMode, controlled by a CheckBox, that controls the visibility of these overlay rectangles. I've also added a bit of code to track mouse events on the overlay rectangles so that you can move the widgets around when you're in design mode.
    Note that the visible variable differs from being transparent (i.e., opacity == 0.0). If a node is visible but transparent, it is still eligible to receive events; whereas if visible is false, it does not receive events.
    Here's some code that illustrates this technique. I'll answer your other questions in a subsequent post.
    import javafx.stage.Stage;
    import javafx.scene.*;
    import javafx.scene.control.*;
    import javafx.scene.input.*;
    import javafx.scene.layout.*;
    import javafx.scene.shape.Rectangle;
    import javafx.scene.paint.Color;

    var previewMode = true;
    var lastX:Number;
    var lastY:Number;

    function wrap(n:Node):Node {
        Group {
            content: [
                n,
                Rectangle {
                    opacity: 0.2
                    fill: Color.web("#ffff00")
                    x: bind n.boundsInParent.minX
                    y: bind n.boundsInParent.minY
                    width: bind n.boundsInParent.width
                    height: bind n.boundsInParent.height
                    visible: bind previewMode
                    blocksMouse: true
                    onMousePressed: function(me:MouseEvent) {
                        lastX = me.x;
                        lastY = me.y;
                    }
                    onMouseDragged: function(me:MouseEvent) {
                        n.layoutX += me.x - lastX;
                        n.layoutY += me.y - lastY;
                        lastX = me.x;
                        lastY = me.y;
                    }
                }
            ]
        }
    }

    var controlList:Node[] = [
        Button {
            layoutX: 140
            layoutY: 20
            text: "Button1"
            action: function() { println("Button1 clicked!"); }
        },
        Slider {
            layoutX: 30
            layoutY: 60
            min: 0
            max: 100
            override var value on replace {
                println("Slider value is now {value}");
            }
        },
        Label {
            layoutX: 50
            layoutY: 100
            text: "Non-interactive label"
        },
        CheckBox {
            layoutX: 40
            layoutY: 140
            text: "CheckBox"
            override var selected on replace {
                println("CheckBox is now {if (selected) "checked" else "unchecked"}");
            }
        }
    ];

    Stage {
        title: "Design vs Preview Mode"
        width: 400
        height: 250
        scene: Scene {
            content: [
                CheckBox {
                    layoutX: 10
                    layoutY: 10
                    text: "Preview Mode"
                    selected: bind previewMode with inverse
                },
                Panel {
                    content: for (n in controlList) {
                        wrap(n)
                    }
                }
            ]
        }
    }

  • How to rename with inverse rating in the filename?

    This is a question about sorting and renaming.
    I have a bunch of sports images in Lightroom that are all tagged with an appropriate rating of 1-5 stars.  Because they are sports images, some are sequences of images taken at 8fps. I want the images to end up on my photo website (Smugmug) sorted first by rating, but for images of the same rating, I want them in chronological order (by date/time taken).
    My challenge here is that I'm uploading them to a website that doesn't know how to sort by rating.  In the past, I've renamed the images to include the rating as the first part of the filename, followed by the date/time and then I've sorted the gallery by descending filename.  This successfully gets the images in rating order, but it does not get images with the same rating in the right order (in fact it gets them in reverse order).
    Filenames would look like this:
    Force97-5-20100926182129_7220.JPG.
    "Force97" is the team name
    The "5" is the star rating
    The "20100926182129" is the date/time code
    The "7220" is the original file number (useful at max fps when date/time codes are identical)
    What I need to make these filenames sort properly is to be able to include the inverse rating rather than the actual rating so a five star image would get a filename with a 1, a four star image a 2, and so on.  Then, I could sort by lowest filename first and everything would sort properly.  Highest rated images would be first and within a given rating, the images would be in chronological order.
    So, an image rated five stars would come out like this:
    Force97-1-20100926182129_7220.JPG
    With the rating of 5 converted to a 1, a rating of 4 converted to a 2, a rating of 3 staying a 3, and so on...
    Does anyone have any idea how to do this in Lightroom?  My only idea so far is to output them as shown here in Lightroom and then write a python script to parse out the rating from the filename, reverse all the rating numbers and rename everything again.  I'd love to be able to do this within LR.  Have I made myself clear what I'm trying to accomplish?  Any ideas on how to solve this in LR?
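    In case the post-export script route ends up being the answer, the arithmetic is just inverse = 6 - rating, so 5 and 1 swap, 4 and 2 swap, and 3 stays put. Here is a rough sketch of the Python rename script described above; the filename pattern is an assumption based on the example name (team, one rating digit, then the date/time block), so the regex would need adjusting if a team name ever contained a "-digit-" sequence of its own.

    ```python
    import os
    import re

    # Hypothetical pattern based on names like "Force97-5-20100926182129_7220.JPG":
    # <team>-<rating 1..5>-<datetime>_<filenumber>.JPG
    PATTERN = re.compile(r"^(?P<team>.+)-(?P<rating>[1-5])-(?P<rest>.+)$")

    def inverted_name(filename):
        """Return the filename with its star rating replaced by 6 - rating."""
        m = PATTERN.match(filename)
        if m is None:
            return filename  # leave non-matching files untouched
        inverse = 6 - int(m.group("rating"))  # 5->1, 4->2, 3->3, 2->4, 1->5
        return "{0}-{1}-{2}".format(m.group("team"), inverse, m.group("rest"))

    def rename_all(folder):
        """Rename every matching file in the exported folder in place."""
        for name in os.listdir(folder):
            new = inverted_name(name)
            if new != name:
                os.rename(os.path.join(folder, name), os.path.join(folder, new))
    ```

    Sorting the gallery by lowest filename first then puts five-star images ahead of four-star ones, and within a rating the date/time block keeps the shots chronological.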

    If you can write a Python script, then the best way I think would be for you to write a Lightroom export plugin instead, then rename as you see fit...
    Learning yet another language (Lua) and SDK is a possibility, but certainly not a quick way.

  • AES S-box question

    I am implementing the Rijndael Advanced Encryption Standard (AES). The specification document presents the S-box used in the transformation routine. Does this mean that the S-box does not need to be kept secret? Or should I generate my own S-box and inverse S-box arrays? If so, can anyone point me to where I can find an algorithm to generate the S-box?
    Regards,
    Alex

    Not a Java question. You might try posting to sci.crypt on Usenet: http://groups.google.com/group/sci.crypt/search?q=AES+S-boxes&
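    For what it's worth: the S-box is fixed and public; AES's security rests entirely in the key, and substituting your own S-box would just make the implementation incompatible with everyone else's. FIPS-197 also defines the table constructively, so you can generate it rather than hard-code it: take each byte's multiplicative inverse in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (with 0 mapping to 0), then apply a fixed affine transformation. A minimal Python sketch of that construction (the helper names are my own, not from the spec):

    ```python
    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
        return p

    def gf_inv(a):
        """Brute-force multiplicative inverse in GF(2^8); 0 maps to 0 by convention."""
        if a == 0:
            return 0
        for x in range(1, 256):
            if gf_mul(a, x) == 1:
                return x

    def affine(b):
        """FIPS-197 affine step: b'_i = b_i ^ b_(i+4) ^ b_(i+5) ^ b_(i+6) ^ b_(i+7) ^ c_i, c = 0x63."""
        res = 0
        for i in range(8):
            bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
                   ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
            res |= bit << i
        return res

    sbox = [affine(gf_inv(i)) for i in range(256)]

    # The inverse S-box simply reverses the permutation.
    inv_sbox = [0] * 256
    for i, v in enumerate(sbox):
        inv_sbox[v] = i
    ```

    A quick sanity check against the published table: sbox[0x00] is 0x63, and sbox[0x53] is 0xED, matching the SubBytes example in the spec.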

  • Color Managed Printing from LR 1.3.1 Inverse of Proper PS CS3 10.0.1 Behavior

    Please excuse the length and detail of this post - I'm just trying to be very clear...
    Also, it would be helpful if anyone having definitive information about this topic could please email me directly in addition to replying to this forum topic, in order that I might know a response is available sooner (I am new to this forum, and may not check it regularly). My direct email address is [email protected].
    Bottom Line: Color managed printing using my own custom-generated profiles from LR 1.3.1 to my Epson 7600 (on Intel-based Mac OS X 10.5.2, but saw the same behavior with 10.5.1) using the current Epson 7600 Intel/10.5x-compatible driver (3.09) is broken, and appears to be doing the exact opposite (inverse) of what I would expect and what PS CS3 does properly.
    I am color management experienced, and have been using my custom-generated EPSON 7600 profiles with reliable soft proofing and printing success in PS (both CS2 and now CS3) for some time now. I know how the EPSON printer driver should be set relative to PS/LR print settings to indicate desired function. Images exported from LR to PS and printed from PS using "Photoshop Manages Color" and proper printer driver settings ("No Color Adjustment") print perfectly, so it isn't the Intel-based Mac, the OS, the driver, the profile, or me -- it is LR behaving badly.
    The specific behavior is that printing from LR using "Managed by Printer" with the EPSON driver's Color Management setting set properly to "Colorsync" prints a reasonable-looking print, about what you would expect for canned profiles from the manufacturer, and in fact identical to the results obtained printing the same image from PS using "Printer Manages Color". So far so good. Switching to my specific custom profiles in LR and printing with the driver's CM setting set properly to "No Color Adjustment" yields results that are clearly whacked, for both LR settings of "Perceptual" and "Relative CM". Just for completeness and out of curiosity, I tried printing from LR using the same profile (once for "Perceptual" and once more for "Relative CM") with the EPSON driver's CM setting set IMPROPERLY to "Colorsync", and the results were much more in line with what you would expect - I would almost say it was "correct" output. This is why I used the phrase "inverse of proper behavior" in the subject line of this topic. Going one step further, trying this same set of improper settings in PS (PS print settings set to "Photoshop Manages Color" with either Perceptual or Rel CM selected, but using "Colorsync" rather than "No Color Adjustment" in the Color Management pane of the EPSON printer driver) yields whacked results as you would expect that look identical to the whacked results obtained from LR using "proper" settings.

    (Here's the 2nd half of my post...)
    I said above that the improper settings from LR yielded results that I would almost say were correct. "Almost" because the benchmark results rendered by PS using proper settings are slightly different - both "better" and closer to each other - than those rendered by LR using the improper settings. The diffs between the Perceptual and Rel CM prints from LR using improper settings showed more marked differences in tone/contrast/saturation than the diffs observed between the Perceptual and Rel CM prints from PS using proper settings - the image itself was in-gamut enough that diffs between Perceptual and Rel CM in the proper PS prints were quite subtle. Even though the improper LR prints were slightly inferior to the proper PS prints, the improper LR prints were still within tolerances of what you might expect, and still better (in terms of color matching) than the "Managed by Printer" print from LR. At first guess, I would attribute this (the improper LR prints being inferior to the proper PS prints) to the CMM being used by LR being different from (inferior to) the CMM I have selected for use in PS (that being "Adobe (ACE)"). I can live with the LR CMM being slightly different from that used in PS - that is not the issue here. What is at issue is trying to determine why LR is clearly behaving differently than PS in this well-understood area of functionality, all other variables being the same. (And, incidentally, why am I not seeing other posts raising these same questions?)
    My "workaround" is to use "Managed by Printer" for printing rough prints from LR and to do all other printing from PS, especially given the noted diffs in CMM performance between LR and PS and the fact that printing from PS also supports using Photokit Sharpener for high-quality prints. Still it would be nice to understand why this is happening in LR and to be able to print "decent" prints directly from LR when it seemed appropriate.
    Any insights or suggestions will be very much appreciated. Please remember to reply to my direct email address ([email protected]) in addition to your public reply to this forum.
    Thank you!
    /eddie
