What every developer should know about bitmaps

This isn't everything, but it is a good place to start if you are about to use bitmaps in your program. Original article (with bitmaps & nicer formatting) at Moderator edit: link removed
Virtually every developer will use bitmaps at times in their programming. Or if not in their programming, then in a website, blog, or family photos. Yet many of us don't know the trade-offs between a GIF, JPEG, or PNG file – and there are some major differences there. This is a short post on the basics which will be sufficient for most, and a good start for the rest. Most of this I learned as a game developer (inc. Enemy Nations) where you do need a deep understanding of graphics.
Bitmaps fundamentally store the color of each pixel. But there are three key components to this:
1. Storing the color value itself. Most of us are familiar with RGB, which stores the red, green, and blue components of each color. This is actually the least effective method, as the human eye can see subtle differences in some parts of the color spectrum better than others. It's also inefficient for many common operations on a color, such as brightening it. But it is the simplest for the most common programming tasks and so has become the standard.
2. The transparency of each pixel. This is critical for the edges of non-rectangular images. A diagonal line, to render best, will be a combination of the color from the line and the color of the underlying pixel. Each pixel needs to have its level of transparency (or, more accurately, opacity) set from 0% (show the underlying pixel) to 100% (show just the pixel from the image).
3. The bitmap metadata. This is information about the image, which can range from color tables and resolution to the owner of the image.
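The per-pixel blend in point 2 is the standard "source over" operation: the result is a weighted average of the image pixel and the underlying pixel. A minimal sketch in Python (the function name and the 0.0-1.0 alpha convention here are illustrative, not from any particular library):

```python
def blend(src, dst, alpha):
    """Composite a source pixel over a destination pixel.

    src, dst: (r, g, b) tuples with components 0-255.
    alpha: source opacity, 0.0 (fully transparent) to 1.0 (fully opaque).
    """
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

# A 50%-opaque red pixel over a white background comes out pink.
print(blend((255, 0, 0), (255, 255, 255), 0.5))  # (255, 128, 128)
```

A renderer does this once per channel per pixel along every anti-aliased edge, which is why the alpha value has to travel with the image rather than being baked in.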
Compression
Bitmaps take a lot of data. Or to be more exact, they can take up a lot of bytes. Compression has been the main driver of new bitmap formats over the years. Compression comes in three flavors: palette reduction, lossy, and lossless.
In the early days palette reduction was the most common approach. Some programs used bitmaps that were black & white: 1 bit per pixel, which is squeezing it about as far as it goes. Even into the days of Windows 3.1, 16-color images (4 bits/pixel) were still in widespread use. But the major case was 8 bits/256 colors per bitmap. Those 256 values mapped into a palette that was part of the bitmap, and each palette entry held a 24-bit color. This let a program select the 256 colors out of the full spectrum that best displayed the picture.
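The indexed scheme is simple to sketch: the pixel data is one byte per pixel, and each byte is just an index into the 256-entry table of 24-bit colors. (The palette contents below are made up for illustration.)

```python
# A palette maps each 8-bit index to a full 24-bit (r, g, b) color.
palette = [(i, i, i) for i in range(256)]   # toy grayscale palette
palette[1] = (255, 0, 0)                    # entry 1 repurposed as pure red

# Pixel data is just one byte per pixel: an index into the palette.
pixels = bytes([0, 1, 1, 255])

decoded = [palette[p] for p in pixels]
print(decoded)  # [(0, 0, 0), (255, 0, 0), (255, 0, 0), (255, 255, 255)]
```

The storage win is the point: four pixels cost 4 bytes instead of 12, at the price of only ever having 256 distinct colors on screen.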
This approach was pretty good and mostly failed only on flat surfaces with a very slow transition across them. It also hit a major problem early on with the web and windowed operating systems, because the video cards were also 8-bit systems with a single palette for the entire screen. That was fine for a game that owned the entire screen, but not when images from different sources shared it. The solution was a standard web palette, which most browsers and other programs fell back to when there was palette contention.
Finally, there were some intermediate solutions, such as 16 bits/pixel, which did cover the entire spectrum but with a coarse enough granularity that the human eye could see jumps between adjacent shades. These found little use because memory prices dropped and video cards jumped from 8-bit to 24-bit within about a year.
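The usual 16-bit layout packs 5 bits of red, 6 of green, and 5 of blue ("RGB565"); the low bits that get thrown away are exactly the granularity the eye notices as banding. A sketch of the packing:

```python
def pack565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack565(v):
    """Expand RGB565 back to 8 bits per channel (the low bits are gone)."""
    return ((v >> 11) << 3, ((v >> 5) & 0x3F) << 2, (v & 0x1F) << 3)

print(unpack565(pack565(200, 100, 50)))  # (200, 100, 48): blue lost its low bits
```

Green gets the extra bit because the eye is most sensitive to it, which is the same perceptual fact mentioned back in point 1 about RGB.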
Next is lossy compression. Compression works by finding runs that repeat in a file and replacing each later occurrence with a pointer back to the first. What if you have a run of 20 pixels where the only difference in the second run is that two of the pixels are redder by a value of 1? The human eye can't see that difference. So you change the second run to match the first and, voila, you can compress it. Most lossy compression schemes let you set the level of lossiness.
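The effect is easy to see with a general-purpose compressor: snap two nearly-identical runs to be byte-for-byte identical and the second run compresses down to a back-reference. (zlib is used here only to illustrate the principle; real lossy image codecs such as JPEG work on frequency coefficients, not raw runs.)

```python
import zlib

run = bytes(i % 251 for i in range(1000))       # a 1000-byte "row" of pixels
near_copy = bytearray(run)
near_copy[100] += 1                             # two pixels are "redder" by 1,
near_copy[500] += 1                             # a difference the eye can't see

lossless = zlib.compress(run + bytes(near_copy))  # runs differ: matches break
snapped  = zlib.compress(run + run)               # lossy step: snap 2nd run to 1st

print(len(snapped), "<=", len(lossless))
```

The "snapped" version never compresses worse, because the compressor can represent the whole second run as one reference to the first.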
This approach does have one serious problem when you use a single color to designate transparency: if that color is shifted by even a single bit, it is no longer transparent. This is why lossy formats were used almost exclusively for pictures and never in games.
Finally comes lossless. This is where the program compresses the snot out of the image with no loss of information. I'm not going to dive into the what and how of this, except to point out that compressing an image takes substantially more time than decompressing it. So displaying compressed images: fast. Compressing images: not so fast. This can lead to situations where, for performance reasons, you do not want to write out a lossless compressed format on the fly.
Transparency
Transparency comes in three flavors. (If you know an artist who creates web content – have them read this section. It's amazing the number who are clueless on this issue.) The first flavor is none – the bitmap is a rectangle and will obscure every pixel below it.
The second is a bitmap where a designated color value (most use magenta, but it can be any color) means transparent: the other colors are drawn, and the magenta pixels are skipped so the underlying pixel shows through. The problem is that the image has to be rendered against some assumed background color, so edge pixels that should be a mix of the image and whatever is actually underneath end up partially that assumed background color instead. You see this in practice with 256-color icons: they have perfect edges on a white background yet show a weird white halo around their edges on a black background.
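The halo is simple arithmetic. Suppose a black icon is anti-aliased against white when the artist renders it: a 50% edge pixel bakes in gray. Color-key transparency then draws that baked-in gray verbatim, whatever the real background is (toy numbers below, assuming 8-bit channels):

```python
def mix(a, b, t):
    """Linear mix of two (r, g, b) colors; t is the weight of a."""
    return tuple(round(t * x + (1 - t) * y) for x, y in zip(a, b))

black, white = (0, 0, 0), (255, 255, 255)

# The artist renders the icon on white: a 50% edge pixel bakes in gray.
edge = mix(black, white, 0.5)
print(edge)  # (128, 128, 128)

# Color-key transparency draws that gray as-is, so on a black page the
# edge shows up as a pale halo. With an 8-bit alpha channel you would
# store (0, 0, 0) plus alpha, and blend against the real background:
halo_on_black = edge                # nothing to blend with: gray stays gray
correct = mix(black, black, 0.5)    # what true alpha blending would give
print(halo_on_black, "vs", correct)
```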
The third flavor is 8 bits of transparency (i.e. 256 values from 0 to 100%) for each pixel. This is what is meant by a 32-bit bitmap: 24 bits of color and 8 bits of transparency. This provides an image with finer gradations than the human eye can discern. One word of warning when talking to artists: they can all produce "32-bit bitmaps," but 95% of them produce ones where every pixel is set to 100% opacity and are clueless about the entire process and the need for transparency. (Game artists are a notable exception; they have been doing this forever.) For a good example of how to do this right, take a look at Icon Experience; I think their bitmaps are superb (we use them in AutoTag).
Resolution
Many formats have a resolution, normally described as DPI (dots per inch). When viewing a photograph this generally is not an issue. But take the example of a chart rendered as a bitmap. You want the text in the chart to be readable, and you may want it to print cleanly on a 600 DPI printer, but on the screen you want the 600 dots that make up an inch to display using just 96 pixels. The resolution field provides this ability. DPI does not exist in some formats and is optional in others (it is not required in any format, but it is unusual for it to be missing in PNG).
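The arithmetic is straightforward: physical size is pixels divided by DPI, so the same bitmap maps to different pixel counts on different devices. A sketch (96 and 600 are the common screen and laser-printer values used above):

```python
def display_pixels(image_pixels, image_dpi, device_dpi):
    """How many device pixels an image spans when shown at true physical size."""
    inches = image_pixels / image_dpi      # physical width of the image
    return round(inches * device_dpi)

# A 600-DPI chart, 3000 pixels wide, is 5 inches across.
print(display_pixels(3000, 600, 600))  # 3000 dots on the printer
print(display_pixels(3000, 600, 96))   # 480 pixels on a 96-DPI screen
```

If the format carries no DPI, the 3000-pixel chart is drawn 3000 screen pixels wide, which is exactly the "chart the size of a wall" problem the resolution field exists to prevent.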
The important issue with DPI is that the user may want to zoom in on a bitmap, or print at the printer's resolution while displaying at a lower one, so you need to let the calling program set the DPI. There's a very powerful charting program that is useless except for standard viewing on a monitor, because it renders at 96 DPI and that's it. Don't limit your uses.
File formats
Ok, so what file formats should you use? Let's go from most to least useful.
PNG – 32-bit (or less), lossless compression, small file sizes – what's not to like. Older versions of some browsers (like Internet Explorer) would display the transparent pixels with an off-white color but the newer versions handle it properly. Use this (in 32-bit mode using 8 bits for transparency) for everything.
ICO – This is the icon file used to represent applications on the desktop, etc. It is a collection of bitmaps which can each be of any resolution and bit depth. For these build it using just 32-bit png files from 16x16 up to 256x256. If your O/S or an application needs a lesser bit depth, it will reduce on the fly – and keep the 8 bits of transparency.
JPEG – 24-bit only (i.e. no transparency), lossy, small file sizes. There is no reason to use this format unless you have significant numbers of people using old browsers. It's not a bad format, but it is inferior to PNG with no advantages.
GIF – 8-bit, lossy, very small file sizes. GIF has two unique features. First, you can place multiple GIF bitmaps in a single file with a delay set between each. It will then play through those giving you an animated bitmap. This works on every browser back to the 0.9 versions and it's a smaller file size than a flash file. On the flip side it is only 8 bits and in today's world that tends to look poor (although some artists can do amazing things with just 8 bits). It also has a set color as transparent so it natively supports transparency (of the on/off variety). This is useful if you want animated bitmaps without the overhead of flash or if bandwidth is a major issue.
BMP (also called DIB) – from 1 up to 32-bit, lossless, large file sizes. There is one case to use this – when speed is the paramount issue. Many 2-D game programs, especially before the graphics cards available today, would store all bitmaps as a BMP/DIB because no decompression was required and that time saving is critical when you are trying to display 60 frames/second for a game.
TIFF – 32-bit (or less), lossless compression, small file sizes – and no better than PNG. Basically the government and some large companies decided they needed a "standard" so that software in the future could still read these old files. This whole argument makes no sense as PNG fits the bill. But for some customers (like the federal government), it's TIFF instead of PNG. Use this when the customer requests it (but otherwise use PNG).
Everything Else – Obsolete. If you are creating a bitmap editor then by all means support reading/writing every format around. But for other uses – stick to the 2+4 formats above.
Edited by: 418479 on Dec 3, 2010 9:54 AM
Edited by: Darryl Burke -- irrelevant blog link removed

I don't think the comment about jpeg being inferior to png and having no advantages is fair. The advantage is precisely the smaller file sizes because of lossy compression. Saving an image at 80-90% quality is virtually indistinguishable from a corresponding png image and can be significantly smaller in file size. Case in point, the rocket picture in that blog post is a jpeg, as is the picture of the blogger.
The statements about the TIFF format are slightly wrong. TIFF is sort of an all-encompassing format that's not actually tied to any specific compression. It can be lossless, lossy, or raw. You can have JPEG, JPEG 2000, LZW, PackBits, or deflate (PNG-style) compressed TIFF files. There are also a few compression schemes that specialize in binary images (used a lot for faxes). In fact, the TIFF format has a mechanism that allows you to use your own undefined compression. This flexibility comes at a price: not all image viewers can open a TIFF file, and those that do may not be able to open all TIFF files.
Ultimately though, the main reasons people use TIFF are its multipage support (like a PDF file), those binary compressions (for faxes), and its ability to include virtually any metadata about the image you want (e.g. geographical information in a "GeoTIFF").

Similar Messages

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this: most Americans can get by without taking this into account, most of the time. That's because the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we use only A-Z without accents or other special characters, we're good to go. But the second you carry those same assumptions into an HTML or XML file that has characters outside the first 127, the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact, we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or code pages) developed early on. But we ended up with most everyone using a standard set of code pages where the first 127 values were identical across all of them and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
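Python's codecs still include some of these legacy DBCS code pages, so the scheme is easy to observe: ASCII stays one byte, while a Japanese character takes a lead byte plus a trail byte (Shift JIS shown as one concrete example):

```python
text = "Aあ"                       # "A" plus the hiragana letter "a"
encoded = text.encode("shift_jis")
print(encoded)                     # b'A\x82\xa0': one byte, then lead + trail

assert encoded[0:1] == b"A"        # ASCII range stays a single byte
assert len(encoded) == 3           # the hiragana needed a lead + trail byte
```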
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, with each party entering data based on their own country's conventions – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, most programs assume UTF-8, but that is not a standard and is not universally followed. If the encoding is not specified and the program reading the file guesses wrong, the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the de facto standard, the way it works gets people into a lot of trouble. UTF-8 became popular for two reasons. First, it matches the standard code pages for the first 127 characters, so most existing HTML and XML already conformed to it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian code pages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, a lead byte in the second 128 values starts a double-byte sequence, giving us more characters. For the less common characters, a lead byte introduces a series of continuation bytes, with two, three, or more bytes together defining one character (the original design went up to 6-byte sequences; modern UTF-8 caps out at 4). Using this MBCS (multi-byte character set) scheme you can write the equivalent of every Unicode character and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
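The variable width is easy to observe directly; ASCII stays one byte while rarer characters cost more:

```python
# Byte cost of one character at each UTF-8 width.
for ch in ("A", "ß", "中", "😀"):
    print(ch, len(ch.encode("utf-8")), "byte(s)")
# A is 1 byte, ß is 2, 中 is 3, and the emoji is 4.
```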
    But here is what everyone trips over: they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character like ß, encoded using the code page for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
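That failure mode is reproducible in a couple of lines: encode ß with a legacy Western code page, then read the bytes back as UTF-8 (cp1252 stands in here for "the code page for their region"):

```python
raw = "ß".encode("cp1252")        # the text editor saves the single byte 0xDF
print(raw)                        # b'\xdf'

# A UTF-8 reader sees 0xDF as the lead byte of a 2-byte sequence.
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("broken:", e.reason)    # the required continuation byte is missing
```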
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example: your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
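In Python terms, Point 3 just means never opening a text file without the encoding argument (the file name here is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Write and read with the encoding stated explicitly, never implied.
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve ß")
with open(path, encoding="utf-8") as f:
    assert f.read() == "naïve ß"

# The same bytes read with the wrong encoding silently produce mojibake:
# no error is raised, you just get different characters.
with open(path, encoding="latin-1") as f:
    print(f.read())
```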
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
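Python's standard XML writer shows the idea: ask it to emit the declaration and the encoding can't silently go missing (ElementTree is used here as one example of "an XML encoder"):

```python
import io
import xml.etree.ElementTree as ET

root = ET.Element("greeting")
root.text = "straße"

buf = io.BytesIO()
# The writer emits the <?xml ... encoding=...?> declaration and encodes
# the text itself, so the declaration and the bytes can't disagree.
ET.ElementTree(root).write(buf, encoding="utf-8", xml_declaration=True)

data = buf.getvalue()
print(data)
```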
    Ok, you're reading and writing files correctly, but what about inside your code? This is where it's easy: Unicode. That's what the encoders in the Java and .NET runtimes are designed to produce. You read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type just for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account for text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this: most Americans can get by without taking this into account, most of the time. That's because the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we use only A-Z without accents or other special characters, we're good to go. But the second you carry those same assumptions into an HTML or XML file that has characters outside the first 127, the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range but that is a different issue, especially since that range is exactly the same as the UTF8 character set anyways.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact, we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or code pages) developed early on. But we ended up with most everyone using a standard set of code pages where the first 127 values were identical across all of them and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, most programs assume UTF-8, but that is not a standard and is not universally followed. If the encoding is not specified and the program reading the file guesses wrong, the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the de facto standard, the way it works gets people into a lot of trouble. UTF-8 became popular for two reasons. First, it matches the standard code pages for the first 127 characters, so most existing HTML and XML already conformed to it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian code pages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, a lead byte in the second 128 values starts a double-byte sequence, giving us more characters. For the less common characters, a lead byte introduces a series of continuation bytes, with two, three, or more bytes together defining one character (the original design went up to 6-byte sequences; modern UTF-8 caps out at 4). Using this MBCS (multi-byte character set) scheme you can write the equivalent of every Unicode character and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode, that is, of all of Unicode, are based on ASCII. UTF-8, as a representational format required to implement Unicode, must therefore represent those characters; it does so using the usual variable-width-encoding idiom.
    But here is what everyone trips over: they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character like ß, encoded using the code page for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example: your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. There is absolutely no point for someone creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: in high-volume systems, moving and storing bytes is relevant. One must carefully consider whether each text element is customer-consumable or internally consumable; saving bytes in such cases reduces the total load on the system. In such systems incremental savings impact operating costs, and speed is a marketing advantage.

  • 7 Things every Adobe AIR Developer should know about Security

    7 Things every Adobe AIR Developer should know about Security
    1. Your AIR files are really just zip files.
    Don't believe me? Change the .air extension to .zip and unzip it with your favorite compression program.
    What does this mean for you, the developer? It means that if you thought AIR was a compiled, protected format, alas, it is not.
    2. All your content is easily accessible in the AIR file.
    Since we now know that the AIR file is really just a zip file, unzip it and see what's inside. If you added any content references when you published the AIR file, voila, there it all is.
    What does this mean for you, the developer? Your content is sitting there ripe for the picking, and so is everything else, including your application descriptor file, images, etc.
    3. Code signing your AIR app does nothing for security.
    All code signing your app does is verify to the end user that someone published the app. It does nothing as far as encryption and does nothing to protect your content.
    What does this mean for you, the developer? You should still do it, because getting publisher "unknown" is worse. It also means that Joe Hacker would not be able to decompile your entire app and republish it with the same certificate, unless they somehow got hold of that too.
    4. All your AIR SWF content is easily decompilable.
    Nothing new here; it's always been this way. Type "flash decompiler" into Google and you'll find a variety of decompilers for under $100 that will take your AIR content SWF and expose all your source code and content in no time.
    What does this mean for you, the developer? All your content, code, URLs, and intellectual property are publicly available to anyone with a decompiler, unless you do some extra work and encrypt your SWF content files, which is not currently a feature of AIR but can be done if you do your homework.
    5. Your SQLite databases are easy to get at.
    SQLite databases can be accessed from AIR or any other program on your computer that knows how to work with them. Unless you put your data in the local encrypted datastore, or encrypt your entire database, it's pretty easy to get at, especially if you create it with a .db extension.
    What does this mean for you, the developer? SQLite is very useful, but keep in mind that your data can be viewed and altered if you're not careful.
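The point is easy to verify with any SQLite client; here a few lines of Python (standing in for "any other program on your computer") read back a database another app created. The file name and table are hypothetical:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")   # hypothetical app database

# Pretend the AIR app wrote this on first run.
con = sqlite3.connect(path)
con.execute("CREATE TABLE users (name TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'hunter2')")
con.commit()
con.close()

# Any other process can simply open the file and read it all back.
con = sqlite3.connect(path)
rows = con.execute("SELECT name, password FROM users").fetchall()
con.close()
print(rows)  # nothing was protected
```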
    6. The local encrypted datastore is useful, but...
    The local encrypted datastore is useful, but developers need a secure way of getting information into it. Storing usernames, passwords, and URLs in clear text is a bad idea since, as we discussed, your code is easy to decompile and read. By putting info into the local encrypted datastore, the data is encrypted and very difficult to get at. The problem is: how do you get it in there without having to store any readable info in the AIR file and without communicating with a web server? Even if you called a web service and pushed the returned values into the datastore, this is not ideal, since you may have encoded the URLs to your web service in your code, or someone could intercept the results of the web service call.
    What does this mean for you, the developer? Use the local datastore, and hope that we get some new ways of protecting content and data from Adobe in the next release of AIR.
    7. There are some things missing from the current version of AIR (1.1) that could really help ease the concerns of people trying to develop serious applications with AIR.
    Developers want more alternatives for protecting local content and data. Some of us want to protect our content and intellectual property; remember, not all of us are building toys with AIR. Other than the local encrypted datastore, there are currently no built-in options I'm aware of for encrypting other content in the AIR file, unless you roll your own.
    What does this mean for you, the developer? I've been told that Adobe takes security very seriously, so I'm optimistic that we'll see some improvements in this area soon. If security is as much a concern for you as it is for me, let them know.

    Putting "secret data" as clear text directly in your code
    is a broken concept in every environment and programming language.
    All compiled code is reversible, and strings in particular are
    really easy to extract.
    There is no simple, straightforward way to include secret
    data directly with your app. This is a complicated subject, and if
    you really need to do this, you'll need to read up on it a bit.
    But in most cases this can be avoided or worked around
    without compromising security. One of the best ways is to provide
    the user with a simple "secret key" alongside the app (the best way
    is the good old login/password). The user installs the app and
    provides his "secret key", which goes directly into the
    EncryptedLocalStore; you then use this "secret key" to access
    the "secret data" that's stored on your server. Then you can
    transfer the "secret data" directly into the EncryptedLocalStore.
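    A minimal sketch of that flow, in Python for illustration only (AIR's real API is ActionScript's EncryptedLocalStore.setItem()/getItem(); the store class, token scheme, and server call below are hypothetical stand-ins):

```python
import hashlib
import hmac

class EncryptedStoreStub:
    """Stand-in for AIR's EncryptedLocalStore, for illustration only."""
    def __init__(self):
        self._items = {}
    def set_item(self, key, value):
        self._items[key] = value
    def get_item(self, key):
        return self._items.get(key)

def first_run(store, user_secret: str, fetch_from_server):
    # 1. Persist the user-supplied secret key in the encrypted store.
    store.set_item("secret_key", user_secret)
    # 2. Use it to authenticate to your server and fetch the real
    #    secret data (never shipped inside the app itself).
    token = hmac.new(user_secret.encode(), b"login", hashlib.sha256).hexdigest()
    secret_data = fetch_from_server(token)
    # 3. Persist the fetched data in the encrypted store too.
    store.set_item("secret_data", secret_data)
```

    The point is that nothing readable ships in the AIR file: the secret enters via the user, and everything sensitive ends up only in the encrypted store.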
    As for the whole thread:
    Points 1-5 -> Those points do not concern AIR apps only.
    If you are developing an application in any language, you should
    follow those rules, meaning:
    - Code installed on a user's computer is easily accessible
    - Data stored locally is easily accessible, even if it is
    encrypted using a symmetric-key cipher, because the encryption
    algorithm and encryption key are in your source code (you
    could probably write a book on using public-key encryption, so
    let's just leave it for now ;)
    Point 6 -> Is a valid one. All your app security should
    rely on the EncryptedLocalStore. But it is your job to get the
    data securely into the ELS, because there is no point encrypting
    data that can be intercepted.

  • Black Magic Intensity Pro....anything I should know about?

    I will be getting the Intensity Pro within a couple of days and I was wondering if anyone in here has used it or is using it. Is there anything I should know about that I may have overlooked?

    I just noticed that the screen does not sit securely
    when it's closed. There seems to be a 'bow' in the
    whole panel.
    Have you heard about this before?
    This is actually a common issue on PowerBooks and MacBook Pros. Apple suggests not to worry about the "cosmetic defect", as it doesn't affect performance and it was designed that way to prevent the screen from touching the keyboard. I still can't quite buy that excuse...

  • I bought an iPad from my friend a year ago; I just restored it, and activation needs the old iCloud account, which my friend didn't know about. How can I activate it with a different Apple account?

    I bought an iPad from my friend a year ago. I just restored it, and now activation requires the old iCloud account, which my friend didn't know about. How can I activate it with a different Apple account?

    Your friend has to disable Find My iPad for you; there's nothing else you can do.
    https://iforgot.apple.com.

  • C++ libraries I should know about.

    I'm a physics student doing some calculations and learning C++ in the process. Not so long ago, I started using the Boost library in my applications and found it to be a great help. Are there any other great C++ libraries I should definitely know about (programming in general, BLAS, etc.)?

    For general purpose computing, Boost and the STL are the obvious places to start. It's hard to imagine a frequently occurring programming problem that hasn't been addressed by Boost.
    If you plan on running your codes in a distributed memory environment (e.g. a Linux cluster), you will probably want to learn the basics of MPI. MPI-2 does define a C++ API; however, if you read the official MPI-2 documentation, you'll find that the entire C++ API has been deprecated. There are some 3rd party libraries out there that provide a more modern C++ interface to MPI (e.g. Boost MPI) than that defined by the MPI standard.
    There are numerous C++ libraries out there for numerical computing (cf. oonumerics.org). However, I don't think there is a "standard" C++ library for numerical computing. Even BLAS and LAPACK do not define C++ APIs. The Boost uBLAS library is popular, but it only implements BLAS functionality (e.g. uBLAS does not provide the tools needed to solve equations). In the end, you might be better served by first identifying what your specific needs are, focusing on learning the fundamentals of modern C++ software design, and then patching together a custom C++ library (or libraries) that meets your specific needs. It's usually a good idea to try to reuse the tools that are already out there; however, sometimes there are pragmatic and/or pedagogical reasons to develop some tools from scratch.
    For my own research (application of Finite Volume Methods to problems in aeroelasticity and flight dynamics), I have my own C++ numerical library that I've developed mostly from scratch, which provides the building blocks I need to compose a "physics" based model ("classical" continuum mechanics) of my problem. The linear algebra library (or sub-library) is just a C++ wrapper around BLAS and LAPACK; I find it difficult to beat the performance of vendor supplied math libraries--even with modern template meta-programming libraries such as uBLAS.
    Also, Fortran 2003 introduced features for binding Fortran code with C. Most modern compiler suites (e.g. GCC, Intel, Open64) implement the ISO C Binding features of Fortran 2003. This means that you can now implement C bindings to legacy Fortran libraries without having to guess the name mangling scheme used by a particular Fortran compiler. Also, C structs and Fortran derived types are now interoperable. I don't necessarily recommend developing in Fortran, but if you come across a mature library that was written in Fortran, you could potentially create C++ bindings (via C) to that library if need be.
    I hope this helps.

  • Anything I should know about the 990FXA-GD80V2 before purchase?

    I'm considering getting a new motherboard in the next few days or so, and currently have my eye on the 990FXA-GD80V2. I plan to pair it with the FX-8350 and a R7850, and likely use Linux.
    I was wondering if there was anything in particular I should know before buying this board. For example, I recall reading somewhere that someone had to do a BIOS update to get their FX-8350 to use the right voltage or something.
    I'm doing a good bit of research myself, but I'd like to avoid another experience like the one I had with ASRock.

    Quote from: miklkit on 28-August-14, 22:46:41
    I read your post about Asrock.  Not good, but no 970 board is up for the 8350.  Only experts with mega cash to spend on exotic cooling can keep that combination alive.
      The MSI GD80 is a solid cool running board that can handle any FX cpu.  My 8350 will bench and run at over 5ghz with air cooling but is stable at 4.7 ghz for everyday use.  The 9590 is good for 5ghz for every day use.   Because the GD80 runs so cool air cooling can be used making this a very cost effective combination.  I am currently using an ASUS Sabertooth board and it runs hot.  Water cooling is required to overclock it and I will be going back to the GD80 soon.
      MSI is very conservative with their bios settings, which means that you can only run stock clocks unless the utilities provided on their cd are used.  But last December they castrated those utilities too!  I prefer to use ClickBiosII, and here is a link to a working version.
    https://www.dropbox.com/sh/gpalg0tpyyfcivy/AAA_vvHgq7MUkdcXPH3Nh5rWa/CLICKBIOSII.7z?dl=0
    Thanks for the feedback. I'm curious, though: what exactly about the ASUS Sabertooth board makes it run hotter than the GD80? I would figure the CPU temps to be relatively the same, but maybe you're talking about another component like the VRMs or NB?
    Also, what's that ClickBiosII thing? Is it a custom BIOS? From a quick glance at the archive, perhaps it's a BIOS-configuration tool that can be used from Windows to directly alter the BIOS?
    I put in the order for the GD80V2 a little bit ago; it seems it'll be a manufacturer-refurbished board. Does anyone happen to have any first-hand experience with how warranty is handled for such hardware? From my understanding, used hardware carries whatever warranty has existed since the hardware's purchase date, but refurbished hardware carries only a 90-day warranty. Is the 90-day limit true, and if so, is it a "hard" limit (as in, you get absolutely zero support after 90 days), or is it perhaps handled case-by-case (MSI "might" be kind enough to do the RMA after 90 days, depending on the issue)?

  • Constant Kernel Panics? Black Screen? Video problems? You should know about this Class Action Suit against Apple for selling defective logic boards.

    CLASS ACTION FILED AGAINST APPLE FOR DEFECTIVE MACBOOK LOGIC BOARDS
    I was one of the many unfortunate individuals who ended up paying Apple the $310 repair fee (more than $400 total, diagnostics included) to fix what was a defective logic board causing constant kernel panics. I don't think the suit has come to a conclusion or a settlement has been reached, but after spending the last month or so on this board trying to diagnose my kernel panics, I figured many others with similar problems would be interested in knowing about this.
    If anyone has any more up-to-date information on the suit or on what individuals should do if they believe they fall into the Plaintiff Class, please share!


  • What other little things about the iPod I should know about?

    Just a quick question,
    What are some of the features of the iPod Touch that are not 100% obvious?
    Example...
    Yesterday I found out that you could pull up the music controls anywhere(even locked) by pressing the "home" button twice.
    After finding the above by accident, I was wondering if there were any other little hidden things to find.
    --Daniel L

    1. Watch quicktime supported videos (via safari) (3GP, MPEG-4 etc)
    2. Play podcasts/mp3 (via safari)
    3. When viewing photos, tilt the screen, and right when the screen is adjusting, touch it; the photo will stay like that

  • What you need to know about British Telecom Total ...

    I don't want to waste time on this forum. I've changed to Virgin - and thank God !
    Here's the text of the last letter I wrote to Customer Service Director, BT plc, Correspondence Centre, Durham, DH98 1BT.  I got the briefest of replies, which dealt with none of the points I raised.
    During April I was engaged in searching for a new flat. There were a number of possible candidates. I phoned British Telecom to ask about the service to the flat I favoured.
    Your representative told me that I could expect up to 4 Mbs in this area. That was a lie.
    The maximum possible speed is 2 Mbs. My IP Profile has typically been 1.2 Mbs, and currently you have restricted me to .9 Mbs and then to .78 Mbs.
    Given that I was at the start of an 18-month contract with you, I would not have moved to this address had I been told – truthfully – by BT that this area is the worst in Cardiff for Internet access.
    In the same call your representative told me that there would be no problem providing a telephone service to 93 B because the line had very recently been in use and just needed to be switched on at the exchange. That was a lie.
    Before moving in, I plugged a handset into the BT socket. There was no 'soft dial tone', which confirms that an inactive line is still connected to the exchange. I notified BT three times, but was assured – in a patronising manner – that the line had been tested and would be connected Friday 30th May. It wasn't, of course.
    I notified BT by email. No reply on Saturday, nor the following days. On Tuesday an engineer called me. He told me my phone was working. I said it wasn't. He said he would check and ring me back within a half hour. He did not do so.
    It was not until the following Friday, that an engineer called and the problem was resolved. By that time I had spoken to the landlord and discovered that the BT line had not been in use for some time. Previous tenants had taken the Virgin Media telephone service. The flat had been decorated and the condition of the BT cabling inside the house could not be guaranteed.
    You have therefore deprived me of my telephone/internet service during the first week of May. You have not offered any compensation.
    I now raise a matter which may seem marginal, but which speaks volumes for the way British Telecom manage their business. The BT website offers a 'BT Community Forum'. I registered to use it because I wanted to document my experience for the benefit of other customers. The procedure ends with a message saying that an email with a clickable link will be sent, serving to verify the identity of the person registering. No such email was received. I tried again. No result. I notified BT. No reply.
    After perhaps six emails, a young woman phoned me. She wasted half an hour of my time establishing what I had already said in my emails. She said she would pass the matter on to technicians. No response – of course.
    It is evident that BT does not allow customers to register to use the BT Community Forum, for fear that you will receive bad publicity. Given the shoddy manner in which you treat your customers, I imagine that bad publicity is inevitable. The Forum is a sham.
    I now come to the main issue – the provision of an Internet service. I wish once more to make it clear that it is not the slowness of this service that is the principal issue – it is the dishonesty of British Telecom personnel.
    I add that I am being advised by an independent expert who is an ex-BT manager with knowledge of the provision of digital services in Cardiff. You will understand that the press are always interested in 'whistle-blowers'.
    If I had been honestly advised by BT that the area I was proposing to move to was poorly served by BT for digital services – and if I was experiencing the best speed that the line could offer me – about 1.5 Mbs real download speed – then I would consider myself bound by my contract with you. I would have moved to this area, knowing what performance I could expect.
    However, as I have outlined above, my decision to move here was largely based on a lie told to me by your representative on the phone – that I could expect up to 4 Mbs.
    In addition, the line speed has now been restricted to .78 Mbs. In my last letter I said that on the first occasion this 24 hour restriction was imposed I had complained and the peak time restriction was lifted. I had 1.2 Mbs off-peak, and .9 Mbs peak speeds.
    This continued to the end of May. On the 2nd of June the 24-hour restriction was reimposed.
    I have received an email from your customer service manager stating that this is simply because of 'long line length'. Very little technical knowledge is required to know that that statement is nonsense.
    My independent advisor tells me that in fact technicians constantly 'tune' speeds in bad areas. Obviously you try to get as many customers under contract as possible by lying to them about the speeds they will receive, then progressively reduce their line performance in order to accommodate other customers.
    It is quite simply an outrage that BT should behave in this fashion, and nothing will please me more than having an opportunity to describe all of this in court.
    I estimate that the damages in time and stress you have caused me amount to one thousand pounds. I look forward to receiving your cheque for that amount.

    Hi sonsenfrancais,
    Welcome to the forum.
    I am sorry to hear you've now moved to another provider following some problems with the installation of your line and broadband speeds, if there's anything you'd like us to look into feel free to drop me an email at [email protected] with your BT account details.
    All the best,
    Stephanie
    BTCare Community Manager

  • Fatal iPhoto 6 error in Leopard you should know about

    If you use an accented character in the tags in preferences, your tag library crashes (try adding another tag to see what happens). Is there any way to get my tag library back? I spent so much time on my precious photo library. They should make an ID3 tag equivalent for photos...

    They should make an ID3 tag equivalent for photos...
    There is, it’s called IPTC. iPhoto 08 has enhanced support for it, and I’ve been unable to crash it using various accented characters: áé and so on.
    Folk also report problems with Sharing in v6 when using accented characters. I would not expect a fix for these issues as v6 has not been developed for more than 2 years.
    Is there any way to get my tag library back?
    Restore from your back up?
    Regards
    TD

  • Anything I should know about using m-audio FW with 24 inch imac 2.16?

    I just bought a used 24-inch 2.16 Core 2 Duo iMac; my Mac mini has a dead FireWire port, which I need for an audio interface. So this is a replacement/upgrade for it. I'll be using it with an M-Audio ProFire Lightbridge, and for 3D graphics.
    Are there any problems and/or quirks about this machine I should be aware of? Specifically with FireWire audio interfaces, but anything else I should be looking for?
    Also, is it possible to upgrade the graphics card in these?

    Quote from: nascarmike on 01-October-06, 00:10:10
    I also have the MSI NEO4 sli f. I have been trying to figure out how to get all four DIMM's loaded.Are you saying that by changing to 2t in bios that I can populate all 4 DIMM's at DDR400? If not what would you reccommend as the best configuration for ram with this board?
    It depends on which CPU you actually have; you may need to plug and pray to make it run at DDR400 at 1T, but it normally works at 2T.
    Quote from: Kitt on 01-October-06, 12:49:36
    Thanks again... I downloaded all relevant drivers/files from the MSI site, unarchieved them to their own folders and burnt to DVD.
    If I read the manual correctly I am to put each stick of the same kind (ie: Kingston) in the GREEN slots first.  However, I posted the same "Before..." question to a usenet group "a.c.p.m.msi-microstar" and was advised to put the RAM in one GREEN slot and one PURPLE slot, side-by-side.  Which is correct?  Both GREEN first, or 1 in GREEN and 1 in PURPLE.
    Thanks for the info on the memory timing command of 1T and 2T... The Processor is an AMD-64 3800+ Venice revision E, socket 939.  As I understand it, installing 4 double sided DIMMs will only yield 333MHz, however it would be great if the 1T could work to achieve 400MHz.
    --Brian
    You may need different RAM timing and voltage settings, since you have different brands and memory capacities. Try to get the same model of RAM with the same timings; that may help you reach DDR400 if you're lucky. Otherwise, keep it at DDR333. Good luck.

  • What you wanted to know about Time values

    Hello
    I tried to gather in a single document the useful information about the use of time values in Numbers.
    I must thank pw1840, who checked the English syntax.
    Here is the text only document.
    Files with the sample tables are available on my iDisk:
    <http://idisk.me.com/koenigyvan-Public?view=web>
    Download:
    ForiWork:ForNumbers:Time in Numbers.zip
    *STORED DATE-TIME VALUES - BASICS*
    Numbers clearly states that it stores date-time values, no less no more. This means these values consist of two parts: a date and a time. It is important to note that both parts are present even if only one of them is displayed.
    When we type the time portion only, it includes the current date even though it may not be displayed.
    But when we apply the formula: =TIME(0,0,ROUND(TIMEVALUE(B)*24*60*60,0)), we get a date-time value whose numerical value of the date portion is 0. This means, in AppleLand, 1 January 1904. Such a date-time allows us to make correct calculations, but there are two true drawbacks:
    1) Since TIMEVALUE() returns a decimal number, when we calculate the corresponding number of seconds we MUST use the ROUND() function. While calculations with decimal numbers give the wanted value, they may not be exact and may be off by + or - epsilon. And
    2) The structure of Numbers date-time values is such that the time part is always in the range 0:00:00 thru 23:59:59.
    There is also a detail which seems annoying to some users. The minimal time unit is the second because time values are in fact a pseudo string representing the number of seconds between the date-time and the base date-time, 1 January 1904 00:00:00.
    -+-+-+-+-
    *TIMEVALUE() FUNCTION*
    When Numbers Help states that the TIMEVALUE() function "converts a date, a time, or a text string to a decimal fraction of a 24-hour day.", it means that the operand for the function TIMEVALUE() may be something like:
    31/12/1943, 31 décembre 1943, or 31 déc. 1943 described as a date;
    1:12:36 or 1:12 described as time; or
    31/12/1943 23:59:59 described as a text string.
    The date may also be 31/12/43 but here the program must guess the century. According to the rule, this one will be 31/12/2043 (yes, I am very young).
    All of this is not exactly what we are accustomed to, but it is perfectly logical as described. My guess is that those who don't understand are simply clinging to old habits and are reluctant to adopt an unfamiliar approach.
    -+-+-+-+-
    *ELAPSED TIME (DURATION)*
    Given a table whose 1st row is a header, I will assume that column B stores starting-time values and column C stores ending-time values. Both do not display the date component of the date-time values. We may get the difference in column D with the formula:
    =IF(OR(ISBLANK(B),ISBLANK(C)),"",TIMEVALUE(C)-TIMEVALUE(B))
    which returns the elapsed time as the decimal part of the day.
    We immediately encounter a problem. If the ending time is the day after the starting day, the result will be negative. So it would be useful to revise the formula this way:
    =IF(OR(ISBLANK(B),ISBLANK(C)),"",IF(TIMEVALUE(C)>TIMEVALUE(B),0,1)+TIMEVALUE(C)-TIMEVALUE(B))
    But some of us may wish to see results in the traditional format which may be achieved using:
    =IF(OR(ISBLANK(B),ISBLANK(C)),"",TIME(0,0,ROUND((IF(TIMEVALUE(C)>TIMEVALUE(B),0,1)+TIMEVALUE(C)-TIMEVALUE(B))*24*60*60,0)))
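    The midnight-wraparound term in the formulas above can be sketched in ordinary code (Python here, purely for illustration; times are day fractions like TIMEVALUE()'s result):

```python
# Sketch of the midnight-wraparound term: add a full day when the
# ending time is numerically smaller than the starting time.
def elapsed_day_fraction(start: float, end: float) -> float:
    """Elapsed time as a fraction of a day; assumes spans under 24 h."""
    return (0 if end > start else 1) + end - start

# 23:00 to 01:00 crosses midnight: the result is 2 hours (2/24 day).
print(round(elapsed_day_fraction(23/24, 1/24) * 24, 6))  # 2.0
```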
    -+-+-+-+-
    *DURATION SUMS > or = 24 HOURS*
    In the examples above, we always assumed that the durations were smaller than 24 hours, because Numbers states clearly in the Help and the PDF User's Guide that time values are restricted to the range 00:00:00 to 23:59:59. For longer durations we must fool Numbers.
    First problem: we are adding several time durations. Each duration is in the authorized range and the result is greater than 24 hours.
    As before, starting-time values are in column B, ending-time ones are in column C, and the elapsed time value is in column D. The formula is:
    =IF(OR(ISBLANK(B),ISBLANK(C)),"",IF(TIMEVALUE(C)>TIMEVALUE(B),0,1)+TIMEVALUE(C)-TIMEVALUE(B))
    in column E, the formula for the cumulative elapsed time value is:
    =SUM($D$2:D2)
    in column F, converting to time format, the formula is:
    =TIME(0,0,ROUND(MOD(E,1)*24*60*60,0))
    in column G, the formula for showing more than 24 hours in the day/hour/minute format is:
    =IF(E<1,"",INT(E)&IF(E<2," day "," days "))&F
    in column H, expressing total elapsed time in total hours using the traditional time format, the formula is:
    =IF(E<1,F,INT(E)*24+LEFT(F,LEN(F)-6)&RIGHT(F,6))
    in column I, expressing total elapsed time in total hours using the traditional time format, an alternate formula is:
    =IF(E<1,F,INT(E)*24+HOUR(F)&":"&RIGHT("00"&MINUTE(F),2)&":"&RIGHT("00"&SECOND(F),2))
    Of course the user would choose the format used in column G or the one in column I for his table. There is no need to keep all of them. It would be fine to hide column F whose contents are auxiliary.
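    The column F/G/H conversions all come down to splitting a decimal-day value into whole days and a seconds remainder. A sketch of the same arithmetic (Python, illustrative only, not Numbers syntax):

```python
# Convert a duration stored as decimal days into the two display
# formats discussed above: "N days H:MM:SS" and total hours.
def day_hms(total_days: float) -> str:
    days = int(total_days)
    secs = round((total_days - days) * 24 * 60 * 60)  # time part, like MOD(E,1)
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    hms = f"{h}:{m:02d}:{s:02d}"
    if days == 0:
        return hms
    return f"{days} day{'s' if days > 1 else ''} {hms}"

def total_hours(total_days: float) -> str:
    secs = round(total_days * 24 * 60 * 60)  # whole days folded into hours
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

print(day_hms(1.5))      # 1 day 12:00:00
print(total_hours(1.5))  # 36:00:00
```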
    Second problem: individual durations may be greater than 23:59:59 hours.
    Again, column B is used to store starting date-time, column C stores ending date-time, and durations are calculated in column D. Since B and C are storing full date-time values, we may use this simple formula to find the duration:
    =C-B
    in column E, the time portion of the duration given in time format is:
    =TIME(0,0,ROUND(MOD(D,1)*24*60*60,0))
    in column F the formula to show the duration as days/hours/minutes is:
    =IF(D<1,"",INT(D)&IF(D<2," day "," days "))&E
    in column G we give the elapsed time in total hours using a time format. The formula is:
    =IF(D<1,E,INT(D)*24+LEFT(E,LEN(E)-6)&RIGHT(E,6))
    in column H we give the elapsed time in total hours using a time format. An alternate formula is:
    =IF(D<1,E,INT(D)*24+HOUR(E)&":"&RIGHT("00"&MINUTE(E),2)&":"&RIGHT("00"&SECOND(E),2))
    If the duration is greater than 24 hours, the results in columns E and F are not a time value but a string. So the value in column D (which is time duration only) is useful.
    -+-+-+-+-
    *PROBLEM WITH ENTERING TIME*
    When you wish to enter 12:34 but 12 is the number of minutes, remember that Numbers will decipher this as 12 hours and 34 minutes. Simple tip:
    Assuming that your original entry is in column B, then in column C use this formula to align the minutes and seconds for proper Numbers interpretation:
    =IF(ISERROR(TIME(0,LEFT(B,SEARCH(":",B)-1),RIGHT(B,LEN(B)-SEARCH(":",B)))),"",TIME(0,LEFT(B,SEARCH(":",B)-1),RIGHT(B,LEN(B)-SEARCH(":",B))))
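    The same tip in ordinary code (a hypothetical Python helper, for illustration; the function name and the "" on error mirror the formula, nothing more):

```python
# Treat an entry like "12:34" as 12 minutes 34 seconds rather than
# 12 hours 34 minutes, as the formula above does.
def as_minutes_seconds(entry: str):
    """Return total seconds, or "" on malformed input (as the formula does)."""
    try:
        minutes, seconds = entry.split(":")
        return int(minutes) * 60 + int(seconds)
    except ValueError:
        return ""

print(as_minutes_seconds("12:34"))  # 754 (12 min 34 s)
```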
    -+-+-+-
    *MISCELLANEOUS NOTES*
    • Of course, the addition of two dates, or a multiplication or division applied to a date, means nothing and would generate the red triangle announcing a syntax error.
    • We may add a time value to a date-time: If B contains a date-time and C contains a time, the following formula will return the sum of the two values:
    =B+TIMEVALUE(C)
    • We may strip the time value of a full date-time one with the formula: =DATE(YEAR(B),MONTH(B),DAY(B))
    • Just as a reminder,
    =EDATE(B, 3) adds 3 months to the pure date stored in B
    so, of course,
    =EDATE(B, 12) adds one year to the pure date stored in B
    • If B and C store date-time values,
    =C-B returns the difference in decimal days.
    =DATEDIF(B,C,"D") returns the number of days between the two pure dates. It's identical to =DATE(YEAR(C),MONTH(C),DAY(C))-DATE(YEAR(B),MONTH(B),DAY(B))
    =DATEDIF(B,C,"M") returns the number of months between the two pure dates.
    =DATEDIF(B,C,"Y") returns the number of years between the two pure dates.
    Three other variants are available which use the parameters "MD","YM" and "YD".
    Yvan KOENIG (from FRANCE, Monday 25 August 2008 15:23:34)

    KOENIG Yvan wrote in his "*STORED DATE-TIME VALUES - BASICS*" section:
    The minimal time unit is the second because time values are in fact a pseudo string representing the number of seconds between the date-time and the base date-time, 1 January 1904 00:00:00.
    This is not exactly true. Numbers files store date-time values in a string format consistent with ISO 8601:2004. This format explicitly includes year, month, day, hour, minute, & second values.
    This may be verified by examining the uncompressed index.xml file in a Numbers package. For example, the first day of 1904 is stored as cell-date="1904-01-01T00:00:00+0000" & of the year 0001 as cell-date="0001-01-01T00:00:00+0000." This format is not a numeric value of seconds from a base date-time, often referred to as a "serial time" format, that is used in applications like AppleWorks (or Excel?).
    Note that the time value (all that follows the "T" in the string) actually has four components, the last one (following the plus) representing the time zone offset from UTC time in minutes. AFAIK, Numbers does not set this to anything besides "+0000" but interestingly, it will attempt to interpret it if set by manually editing the file. For example, change cell-date="1904-01-01T00:00:00+0000" to cell-date="1904-01-01T00:00:00+0120" & the cell will display the last day of December of 1903 as the date, but will still show the time as 00:00:00. This suggests a future version of Numbers might be time zone aware, but currently it is unreliable & not fully implemented.
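    The offset behaviour described above can be reproduced with any ISO 8601-aware library. For instance, in Python (illustrative only; note that Python's parser wants a colon in the offset, unlike the "+0000" form in the Numbers file):

```python
from datetime import datetime, timedelta, timezone

# The stored value, rewritten with a colon in the offset so that
# Python's fromisoformat() accepts it.
dt = datetime.fromisoformat("1904-01-01T00:00:00+00:00")

# Reinterpreting the same wall-clock value with an offset ahead of
# UTC (2 hours here) moves the equivalent UTC instant back into
# 31 December 1903, matching the behaviour described above:
shifted = dt.replace(tzinfo=timezone(timedelta(hours=2)))
print(shifted.astimezone(timezone.utc))  # 1903-12-31 22:00:00+00:00
```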
    Anyway, Numbers does not use the first day of 1904 as a reference for stored date-time values, although it will add that date to "dateless" time values imported from AppleWorks spreadsheets. Although I have not verified this, I believe it will also seamlessly translate between ISO & serial time formats as needed for Excel imports & exports, using the first day of 1900 as needed.
    Some other things to note about the ISO standard:
    • It permits fractional time values in the smallest time unit present, so for example "T10:15:30" could be represented as "T10:15.5" but Numbers does not support this -- the cell will appear empty if the index file is manually edited this way.
    • It does not stipulate whether date-time values represent intervals or durations (although it includes an explicit format for intervals between any two date-time values, known as a period). This means a future version of Numbers could support durations without the addition of a new data storage type, but legacy & import/export issues could make this too impractical to implement.
    • It supports a variety of other formats, including date-only, time-only, day-of-year, week-of-year, & various truncations (just hours & minutes, for example). All are unambiguous so a future version of Numbers could support them, but files generated with those versions would not be backwards compatible with the current version.
    For completeness, I will add that instead of using complex formulas to manipulate single-cell date-time values in Numbers, it is sometimes more straightforward to use multiple cells, one for each unit of time (for example, days, hours, minutes, & seconds), plus simple arithmetic formulas to combine them. Since each unit is a simple number, this results in highly portable, accurate, & "future-proof" tables, especially for durations. This is particularly useful for multimedia work in which the units might include video or film frames or audio samples.

  • Any tips that I should know about setting up my system

    My System is a dual 2 ghz PowerPC G5 with 1GB ddr sdram.
    I am running OSX 10.4.9.
    I have an external 160GB LaCie d2 Hard Drive Extreme with FireWire 800
    I will be installing Final Cut Studio HD
    I am using a Canon XL1
    I have a Sony DVMC-DA2 for capturing video from vhs
    I'm new to FCP and I'm not sure how I should set up my system. How should I use my external hard drive to maximize my performance?

    As for the question about your external hard drive...
    If you plan to move your project from home to, say, school, be sure to save ALL ASSETS in one place: when you are capturing into FCP, set your capture scratch to the same location as your project. If you don't, and you then move your project, your assets will be offline and you will not be able to use them until they are saved in the same place as your project, or until you have both your project and your desktop drive available.

  • Office 2013: 10 Best Features You Should Know About

    The new Office 2013 has some key updates that, while not quite as dramatic as the change from Windows 7 to Windows 8, still bring some pretty cool features for mobile and desktop. Office 2013 will ship sometime next year at prices that have not yet been
    announced. Check out the top 10 features:
    Going Mobile. Microsoft has geared the new software towards a more mobile-friendly audience, allowing users to interact more
    efficiently on phones and tablets, with finger and stylus controls that may help spur Office's migration to mobile devices. Another decidedly mobile move is Office Home and Student 2013 RT, which includes Word, Excel, PowerPoint, and OneNote, and will come
    with ARM-based Windows 8 devices, including Microsoft Surface.
    In The Cloud. Microsoft’s SkyDrive cloud service is being positioned to play a key role in Office users’ daily computing
    lives. Office 2013 will save your documents to SkyDrive by default, enabling you to access files from multiple devices, including a smartphone and tablet. When you sign into Office from another device, your personalized settings and recently used files
    are already there for you. The new Office is available as a cloud-based subscription too. Office 365 is now also available for home-based users as well as businesses. Subscribers will get automatic upgrades, additional SkyDrive storage, multiple installs for
    several users, and added perks such as international calls via Skype.
    Finger and Stylus. Office 2013 embraces touch and pen input. The touch and stylus features are geared towards smartphones and
    tablets, as well as multi-touch laptops. The touch features are the same as users are accustomed to on their smartphones and tablets; swipe a finger across the screen to turn a page, pinch and zoom to read documents, and write with a finger or stylus.
    Metro Style. Office 2013 conforms to Microsoft’s “Metro” look that’s pervasive across the software developer’s latest mobile
    apps. The Office Ribbon in Word 2013 has a flatter look than its predecessor in Word 2010.
    PDFs in Word. You can now edit PDF files in Word 2013 (yay!). Simply open a PDF as you would any other document. Word maintains
    the formatting of the file, which is fully editable. You can also insert pictures and videos from online sites such as YouTube and Facebook, and readers can watch video clips from inside your document.
    Excel. Excel offers some useful upgrades, including new templates for budgets, calendars, forms, and reports. The new Quick
    Analysis Lens lets you convert data to a chart or table in a couple of steps. Flash Fill recognizes patterns in your data and automatically fills cells accordingly. For example, if you want to separate first and last names into separate columns, simply begin
    typing the first names in a new column, press Ctrl+E, and Excel will fill in the remaining first names for you.
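    To make the Flash Fill example concrete, here is a small Python sketch of the transformation it performs. This is purely illustrative; Flash Fill is built into Excel and its actual pattern-inference algorithm is not shown here, only the end result of "extract the first word of each name":

    ```python
    # Illustrative sketch of the transformation Flash Fill infers in the
    # example above: given a column of full names, fill a new column with
    # each person's first name. Sample data is hypothetical.

    full_names = ["Ada Lovelace", "Grace Hopper", "Alan Turing"]

    # After you type the first example ("Ada") and press Ctrl+E, Excel
    # fills the rest as if it ran something like:
    first_names = [name.split()[0] for name in full_names]
    print(first_names)  # ['Ada', 'Grace', 'Alan']
    ```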
    PowerPoint Has Gained Power. PowerPoint 2013 now sports an updated Start screen with a variety of new themes and color schemes.
    The Presenter View makes it easier to zoom in on a diagram, chart, or other detail that you want to emphasize to the audience. The Navigation Guide lets you switch slides, even out of sequence, from a grid that you can see but your audience can’t.
    Colleagues can work on a presentation from different PCs to create a single presentation. Comments are allowed, and presentations are saved online to SkyDrive by default.
    OneNote. OneNote automatically saves your notes to SkyDrive without your having to click “Save”, making your brainstorming sessions
    readily available across multiple devices. OneNote 2013 also allows you to grab screens and add them to your notebooks.
    Skype Has Been Integrated. You can now integrate Skype contacts with Microsoft’s enterprise-oriented Lync communications platform
    for calling and instant messaging. Office subscribers get 60 minutes of Skype international calls each month.
    Going Social. Office now includes Yammer, a secure and private social network for businesses that Microsoft recently acquired.
    Yammer integrates with SharePoint and Microsoft Dynamics, the company’s line of CRM and enterprise resource planning apps. Office 2013’s People Card tool provides detailed information about your contacts, including their status updates from Facebook and LinkedIn.

    Awesome!
    Thanks for sharing the great experience here. This is a good summary of Office 2013's cool features, which is definitely useful for those who want to learn about the new Office.
    Thanks,
    Ethan Hua CHN
    TechNet Community Support
