Anything I should know about using m-audio FW with 24 inch imac 2.16?

I just bought a used 24 inch 2.16 core2duo imac, my mac mini has a dead firewire port which I need for an audio interface. So this is a replacement/upgrade for it. I'll be using it with an m-audio profire lightbridge, and for 3d graphics.
Are there any problems and/or quirks about this machine I should be aware of? Specifically with firewire audio interfaces, but anything else I should be looking for?
Also, is it possible to upgrade the graphics card in these?

Quote from: nascarmike on 01-October-06, 00:10:10
I also have the MSI NEO4 sli f. I have been trying to figure out how to get all four DIMM's loaded.Are you saying that by changing to 2t in bios that I can populate all 4 DIMM's at DDR400? If not what would you reccommend as the best configuration for ram with this board?
It depends on which CPU you actually have; you may need to plug and pray to get it running at DDR400 with a 1T command rate, but it normally works at 2T.
Quote from: Kitt on 01-October-06, 12:49:36
Thanks again... I downloaded all relevant drivers/files from the MSI site, unarchived them to their own folders and burned them to DVD.
If I read the manual correctly I am to put each stick of the same kind (ie: Kingston) in the GREEN slots first.  However, I posted the same "Before..." question to a usenet group "a.c.p.m.msi-microstar" and was advised to put the RAM in one GREEN slot and one PURPLE slot, side-by-side.  Which is correct?  Both GREEN first, or 1 in GREEN and 1 in PURPLE.
Thanks for the info on the memory timing command of 1T and 2T... The Processor is an AMD-64 3800+ Venice revision E, socket 939.  As I understand it, installing 4 double sided DIMMs will only yield 333MHz, however it would be great if the 1T could work to achieve 400MHz.
--Brian
You may need different combinations of RAM timings and voltage, since you have different brands and memory capacities. Try to get modules of the same model and timings; that may help you reach DDR400 if you're lucky enough, though others have had to stay at DDR333. Good luck.

Similar Messages

  • Black Magic Intensity Pro....anything I should know about?

    I will be getting the Intensity Pro within a couple of days and I was wondering if anyone in here has used it or is using it. Is there anything I should know about that I may have overlooked?

    I just noticed that the screen does not sit securely
    when it's closed. There seems to be a 'bow' in the
    whole panel.
    Have you heard about this before?
    This is actually a common issue on PowerBooks and MacBook Pros. Apple suggests not to worry about the "cosmetic defect", as it doesn't affect performance and the panel was designed that way to prevent the screen from touching the keyboard. I still can't quite fall for that excuse...

  • Anything I should know about the 990FXA-GD80V2 before purchase?

    I'm considering getting a new motherboard in the next few days or so, and currently have my eye on the 990FXA-GD80V2. I plan to pair it with the FX-8350 and a R7850, and likely use Linux.
    I was wondering if there was anything in-particular I should know before buying this board. For example, I recall reading somewhere that someone had to do a BIOS update to get their FX-8350 to use the right voltage or something.
    I'm doing a good bit of research myself, but I'd like to avoid another experience like the one I had with ASRock.

    Quote from: miklkit on 28-August-14, 22:46:41
    I read your post about Asrock.  Not good, but no 970 board is up for the 8350.  Only experts with mega cash to spend on exotic cooling can keep that combination alive.
      The MSI GD80 is a solid, cool-running board that can handle any FX CPU.  My 8350 will bench and run at over 5 GHz with air cooling but is stable at 4.7 GHz for everyday use.  The 9590 is good for 5 GHz for everyday use.  Because the GD80 runs so cool, air cooling can be used, making this a very cost-effective combination.  I am currently using an ASUS Sabertooth board and it runs hot.  Water cooling is required to overclock it and I will be going back to the GD80 soon.
      MSI is very conservative with their bios settings, which means that you can only run stock clocks unless the utilities provided on their cd are used.  But last December they castrated those utilities too!  I prefer to use ClickBiosII, and here is a link to a working version.
    https://www.dropbox.com/sh/gpalg0tpyyfcivy/AAA_vvHgq7MUkdcXPH3Nh5rWa/CLICKBIOSII.7z?dl=0
    Thanks for the feedback. I'm curious though, what exactly about the ASUS Sabertooth board makes it run hotter than the GD80? Would figure the CPU temps to be relatively the same, but maybe you're talking about another component like the VRMs or NB?
    Also, what's that ClickBiosII thing? Is it a custom BIOS? From a quick glance at the archive, perhaps it's a BIOS-configuration tool that can be used from Windows to directly alter BIOS settings?
    I put in the order for the GD80V2 a little bit ago; it seems it'll be a manufacturer-refurbished board. Does anyone happen to have first-hand experience with how warranty is handled for such hardware? From my understanding, used hardware carries whatever warranty remains from the original purchase date, but refurbished hardware carries only a 90-day warranty. Is the 90-day limit true, and if so, is it a "hard" limit (as in, you get absolutely zero support after 90 days), or is it handled case-by-case (MSI "might" be kind enough to do the RMA after 90 days depending on the issue)?

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. That's because the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A–Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – that's when the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked out to 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 values were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
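    A quick way to see what a parser actually knows is to ask it for the declared encoding. A minimal Java sketch using the standard StAX API (the two sample documents here are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class DeclaredEncoding {
    // Returns the encoding named in the XML declaration, or null if none.
    public static String declaredEncoding(byte[] xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new ByteArrayInputStream(xml));
            return r.getCharacterEncodingScheme();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] declared = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?><a/>"
                .getBytes(StandardCharsets.ISO_8859_1);
        byte[] bare = "<a/>".getBytes(StandardCharsets.UTF_8);
        System.out.println(declaredEncoding(declared)); // ISO-8859-1
        System.out.println(declaredEncoding(bare));     // null -- the reader must guess
    }
}
```

    When the declaration is missing, the reader has nothing to go on and falls back to guessing, which is exactly the failure mode described above.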
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the de facto standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values as the start of a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those three bytes define the character. The original design goes up to 6-byte sequences. Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character, and – assuming what you are writing is not a list of seldom-used Chinese characters – do it in fewer bytes.
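    The variable-width behavior is easy to observe from Java, where String.getBytes gives you the encoded form. A small sketch (note that UTF-8 as used today tops out at four bytes per character):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    // How many bytes a string occupies when encoded as UTF-8.
    public static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("A"));            // 1 byte: ASCII range
        System.out.println(utf8Length("\u00DF"));       // 2 bytes: ß
        System.out.println(utf8Length("\u20AC"));       // 3 bytes: €
        System.out.println(utf8Length("\uD834\uDD1E")); // 4 bytes: U+1D11E, outside the BMP
    }
}
```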
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. Then, in that editor, using the codepage for their region, they insert a character like ß and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
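    The ß mix-up can be reproduced in a few lines of Java: encode text as ISO-8859-1 (where ß is the single byte 0xDF), then decode those same bytes as if they were UTF-8. The decoder hits a malformed sequence and, by default, substitutes U+FFFD:

```java
import java.nio.charset.StandardCharsets;

public class Mojibake {
    // Encode with one charset, decode with another -- the classic mistake.
    public static String reinterpret(String s) {
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
        return new String(latin1, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // 0xDF (ß in ISO-8859-1) looks like the lead byte of a two-byte
        // UTF-8 sequence; the 'e' that follows is not a valid continuation
        // byte, so the decoder substitutes the replacement character U+FFFD.
        System.out.println(reinterpret("Stra\u00DFe"));
        // Pure ASCII survives, because the first 127 values agree:
        System.out.println(reinterpret("Strasse"));
    }
}
```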
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files where you write in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
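    In Java, for instance, this just means naming a Charset everywhere instead of relying on the platform default. A minimal round-trip sketch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitEncoding {
    // Write and read a temp text file, naming the charset explicitly both times.
    public static String roundTrip(String text) {
        try {
            Path p = Files.createTempFile("demo", ".txt");
            Files.write(p, text.getBytes(StandardCharsets.UTF_8));
            String back = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
            Files.delete(p);
            return back;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Non-ASCII text survives intact because both sides agree on UTF-8.
        System.out.println(roundTrip("Gr\u00FC\u00DFe"));
    }
}
```

    Had the write used the platform default and the read used UTF-8 (or vice versa), the non-ASCII characters could silently change, which is exactly what Point 3 warns about.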
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
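    As a sketch of the point, Java's standard StAX writer will put the encoding into the XML declaration for you (the greeting element here is made up for the example):

```java
import java.io.ByteArrayOutputStream;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class XmlWithEncoding {
    // Produce a tiny XML document whose declaration names its encoding.
    public static String writeDoc() {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLStreamWriter w = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(out, "UTF-8");
            w.writeStartDocument("UTF-8", "1.0"); // declaration carries the encoding
            w.writeStartElement("greeting");
            w.writeCharacters("Gr\u00FC\u00DFe");
            w.writeEndElement();
            w.writeEndDocument();
            w.close();
            return out.toString("UTF-8");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(writeDoc());
    }
}
```

    Because the same API chooses the bytes and writes the declaration, the two cannot disagree – which is the whole argument for using an XML encoder instead of hand-writing the file.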
    Ok, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what the encoders in the Java and .NET runtimes are designed to produce. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type just for characters. You probably have this right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account for text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Lets start off with two key items
    1.Unicode does not solve this issue for us (yet).
    2.Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And lets add a codacil to this – most Americans can get by without having to take this in to account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 bits for each character. There of course were numerous charactersets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now lets' look at UTF-8 because as the standard and the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values as the start of a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those three bytes define the character. The original design goes up to 6-byte sequences. Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character, and – assuming what you are writing is not a list of seldom-used Chinese characters – do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode – indeed, all of Unicode – are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that in their text editor, using the codepage for their region, insert a character like ß and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that is now the first character fo a 2 byte sequence. You either get a different character or if the second byte is not a legal value for that first byte – an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it's invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files where you write in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point for someone creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    And another example: in high-volume systems, moving and storing bytes is relevant. As such, one must carefully consider for each text element whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems, incremental savings impact operating costs, and speed is a marketing advantage.

  • 7 Things every Adobe AIR Developer should know about Security

    7 Things every Adobe AIR Developer should know about Security
    1. Your AIR files are really just zip files.
    Don't believe me? Change the .air extension to zip and unzip
    it with your favorite compression program.
    What does this mean for you, the developer? It means that if
    you thought AIR was a compiled, protected format, alas, it is
    not.
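    You can verify the point yourself without a compression tool: java.util.zip opens an .air file like any other archive. A small sketch (the path argument is just whatever .air file you have on hand):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class AirContents {
    // An .air package is an ordinary zip archive; list what is inside it.
    public static List<String> listEntries(String path) {
        List<String> names = new ArrayList<String>();
        try (ZipFile zip = new ZipFile(path)) {
            Enumeration<? extends ZipEntry> e = zip.entries();
            while (e.hasMoreElements()) {
                names.add(e.nextElement().getName());
            }
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
        return names;
    }

    public static void main(String[] args) {
        for (String name : listEntries(args[0])) {
            System.out.println(name);
        }
    }
}
```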
    2. All your content is easily accessible in the AIR file.
    Since we now know that the AIR file is really just a zip file,
    unzip it and see what's inside. If you have added any content
    references when you published the AIR file, voila, there it all is.
    What does this mean for you, the developer? Well, your content
    is sitting there ripe for the picking, and so is everything else,
    including your application descriptor file, images, etc.
    3. Code signing your AIR app does nothing as far as security
    for you.
    All code signing your app does is verify to the end user that
    someone published the app. It does nothing as far as encryption and
    does nothing to protect your content.
    What does this mean for you, the developer? Well, you should
    still do it, because showing publisher "unknown" is worse. It also
    means that joe hacker would not be able to decompile your entire app
    and republish it with the same certificate, unless they
    somehow got hold of that too.
    4. All your AIR SWF content is easily decompilable.
    Nothing new here, it's always been this way. Type flash
    decompiler into google and you'll find a variety of decompilers for
    under $100 that will take your AIR content swf and expose all your
    source code and content in no time.
    What does this mean for you, the developer? All your content,
    code, urls and intellectual property is publicly available to
    anyone with a decompiler, unless you do some extra work and encrypt
    your swf content files, which is not currently a feature of AIR,
    but can be done if you do your homework.
    5. Your SQLite databases are easy to get at.
    SQLite databases can be accessed from AIR or any other
    program on your computer that knows how to work with them. Unless you
    put your database in the local encrypted datastore, or encrypt your
    entire database, it's pretty easy to get at, especially if you
    create it with a .db extension.
    What does this mean for you, the developer? Well, SQLite is
    very useful, but just keep in mind that your data can be viewed and
    altered if you're not careful.
    6. The local encrypted datastore is useful, but....
    The local encrypted datastore is useful, but developers need
    a secure way of getting information into it. Storing usernames,
    passwords and urls in clear text is a bad idea, since as we
    discussed, your code is easy to decompile and read. By putting info
    into the local encrypted datastore, the data is encrypted and very
    difficult to get at. The problem is, how do you get it in there,
    without having to store any info that can be read in the air file and
    without the necessity of communicating with a web server? Even if
    you called a web service and pushed the returned values into the
    datastore, this is not ideal, since you may have encoded the urls
    to your web service into your code, or someone may intercept the results
    from the web service call.
    What does this mean for you, the developer? Use the local
    datastore, and hope that we get some new ways of protecting content
    and data from Adobe in the next release of AIR.
    7. There are some things missing from the current version of
    AIR (1.1) that could really help ease the concerns of people trying
    to develop serious applications with AIR.
    Developers want more alternatives for the protection of local
    content and data. Some of us might want to protect our content and
    intellectual property; remember, not all of us are building toys
    with AIR. Other than the local encrypted datastore there are not
    currently any built-in options I'm aware of for encrypting other
    content in the AIR file, unless you roll your own.
    What does this mean for you, the developer? Well, I've been
    told that Adobe takes security very seriously, so I'm optimistic
    that we'll see some improvements in this area soon. If security is
    a concern for you as much as it is for me, let them know.

    Putting "secret data" as clear text directly in your code
    is a broken concept in every environment and programming language.
    All compiled code is reversible, and strings especially are really
    easy to extract.
    There is no simple, straightforward way to include secret
    data directly with your app. This is a complicated subject, and if
    you really need to do this, you'll need to read up on it a bit.
    But in most cases this can be avoided or worked around
    without compromising security. One of the best ways is to provide
    the user with a simple "secret key" alongside the app (best way is
    the good old login/password). The user installs the app, and
    provides his "secret key", that goes directly into
    EncryptedLocalStore, and then you use this "secret key" to access
    the "secret data" that's stored on your server. Then you can
    transfer the "secret data" directly into EncryptedLocalStore.
    As for the whole thread:
    Points 1-5 -> Those points do not concern AIR apps only.
    If you are developing an application in any language, you should
    follow those rules, meaning:
    - Code installed on a user's computer is easily accessible
    - Data stored locally is easily accessible, even if it's
    encrypted using any symmetric-key encryption, because the
    encryption algorithm and key are in your source code (you
    could probably write a book on using public-key encryption, so let's
    just leave it for now ;)
    Point 6 -> Is a valid one. All your app security should
    rely on the EncryptedLocalStore. But it is your job to get the
    data securely into the ELS, because there is no point encrypting
    data that can be intercepted.

  • 2 Ipod minis and 1 XP PC is there anything I should know

    Hi, I already have an iPod mini (working fine at the moment); my son has just purchased one and I will be connecting it to the PC shortly. Is there anything I should know to make life easier?

    I have just joined since my wife and daughter have had two working well with an XP machine – until today. My daughter's is an iPod mini and my wife's a 20GB iPod. We've been using method two from the link that was provided for the past three months. But today, it appears as though iTunes will now only recognize my daughter's mini and displays her name – for both the mini and my wife's ;-(.
    I'm trying to figure out what might have happened. That was with iTunes 4.8 – I installed iTunes 5 and retried it. Same. Then we reset my wife's iPod and iTunes still detected it as my daughter's. It seems like there is a data file locked in iTunes that is causing the naming display to get confused.
    The Music is OK on both and still partitioned by playlists and Updated that way.
    Any thoughts?

  • I updated my ipad1 thru iTunes from 4.2.1 to 4.3.5 and now having issues, especially playing videos. Anything I should know. I have not jail broken my iPad and don't want to.

    I updated my iPad 1 through iTunes from 4.2.1 to 4.3.5 and am now having issues, especially playing videos. Is there anything I should know? I have not jailbroken my iPad and don't want to.

    You update it by connecting it to a computer running iTunes, starting iTunes on the computer, clicking on the device in Devices, and clicking the Update button in the content pane.
    If it says you have the latest, it's because you don't own an iPhone 3GS but an iPhone 3G, for which version 4.2.1 was the last release.
    In that case you can't ever update it.
    http://en.wikipedia.org/wiki/IOS_version_history#Current_versions

  • Reinstall adobe 9 - anything I should know?

    reinstall adobe 9 - anything I should know?

    Which product are you trying to install, Adobe Reader or Acrobat?
    Ideally it's good practice to install with a new admin account so that you don't run into permissions issues. Once installed, please update to the latest version.
    If you need more information, please let me know.

  • HT1620 So what do I need to know about  using my iPad on public networks, say restaurants,etc. Can I use passwords of any kind, or will I be compromised?

    So what do I need to know about using my iPad mini on public networks, say in restaurants. Will I be compromised by using  passwords? What protects my iPad?

    To date, the iPad/iPad mini/iPhone/iPod touch products have remained secure.
    In a public place, you do not control the network, so be careful of what activities you perform, such as accessing your bank, or other sensitive places.
    Just surfing and reading your email is generally speaking not a problem.  Hopefully, your email connections are over secure SSL connections.

  • Anyone knows about using java to get data from MS Access database.

    hi there
    anyone knows about using java to get data from MS Access database? thank you

    there is a list of jdbc drivers at:
    http://industry.java.sun.com/products/jdbc/drivers
    they have several ms access drivers listed.
    also, you can use a jdbc-odbc bridge which allows you to use jdbc to connect to any odbc data source:
    http://java.sun.com/j2se/1.3/docs/guide/jdbc/getstart/bridge.doc.html
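    As a starting point, you can ask DriverManager which drivers are already registered before picking one for Access. A minimal sketch (the commented-out connection URL is illustrative only – the exact format depends on the driver you install; note the old Sun JDBC-ODBC bridge was removed in Java 8):

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class ListDrivers {
    // Names of the JDBC drivers currently registered with DriverManager.
    public static List<String> registeredDrivers() {
        List<String> names = new ArrayList<String>();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            names.add(drivers.nextElement().getClass().getName());
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(registeredDrivers());
        // With an Access-capable driver on the classpath you would then
        // connect; the URL below is hypothetical and driver-specific:
        // Connection c = DriverManager.getConnection("jdbc:odbc:myAccessDsn");
    }
}
```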

  • Buying 2nd hand ibook - anything i should know

    I'm buying a second hand g3 iBook 500MHz from a friend of mine, which was bought when the g4 iBooks first came out at the end of 2004.
    The logic board has been replaced under warranty, but the hard drive is corrupt, meaning I'll have to change it [I've already researched it and am confident I can do it without a hitch].
    I have an old 95mm 30GB HDD from a windows laptop which i'll be putting in the iBook.
    Is there anything i should know? The HDD isn't formatted [i figure there's a boot utility when installing OSX that can format the drive]. Is it recommended that I do anything out of the ordinary, like installing OSX, then reformatting and reinstalling it again after the first boot [to condition the drive or anything].
    Any other tips and tricks?
    Thanks

    So, I bought it.
    Turns out it was an iBook 800, not a 500. It also had a 30GB drive instead of the 15GB the seller told me, as well as an extra 128MB of RAM [total 256MB].
    When I first turned it on, it booted into OS X 10.3.9 and ran for a bit, sluggishly, until I tried to run the software updater. Then it died. I had the original 10.2.1 install disks, so I formatted and reinstalled, which worked.
    Every time I ran the software updater and installed the 10.2.8 combo [either through the updater itself, or by downloading it from the Apple support site and running the dmg myself], it died again.
    Anyway, I ended up pulling the whole thing apart and replacing the HDD with one from an old laptop [another 30GB], with help from the guide at macfixit.com. I went out and got Tiger, installed it fresh on the new HDD, and voila: works beautifully.
    So far no problems at all. I'm really, really liking it [my first mac] so far.
    Thanks for the help, everyone

  • My MacBook Pro Core Duo from 2006 shuts down randomly requiring frequent reboots to the point I had to abandon doing any project or anything on this MacBook Pro. Same thing occurs with the iMac dual core from 2007. MacBook battery replaced.

    My MacBook Pro Core Duo from 2006 shuts down randomly, requiring frequent reboots, to the point that I had to abandon doing any project on this MacBook Pro. The same thing occurs with the iMac dual core from 2007. The MacBook battery has been replaced. Is it the internal battery for both, is a new logic board required, or is it a bad hard drive or bad RAM?
    Help.

    Run an Apple Hardware Test using the original installation disk that has the AHT instructions on it.
    Try SMC and PRAM resets:
    http://support.apple.com/kb/ht3964
    http://support.apple.com/kb/ht1379
    It is possible that you may need a new PRAM battery.
    Ciao.

  • I have used Airport Time Capsule with my iMac for several years. I recently purchased a MacBook Air and when it tried to backup to Time Capsule it couldn't because Time Capsule is full. How can I now use the 3TB Airport Time Capsule to back up both?

    I have used Airport Time Capsule with my iMac for a couple of years. I recently purchased a MacBook Air, and when it tried to back up to Time Capsule it couldn't, because the Time Capsule is full. How can I now use the 3TB Airport Time Capsule to back up both the iMac and the MacBook Air? I don't mind losing earlier backups. I have excluded some items from backing up, but since the Airport Time Capsule is full, I can't even begin to back up the MacBook Air.

    On your Mac:
    Open Finder > Applications > Utilities > AirPort Utility
    Click on the Time Capsule icon, then click Edit in the smaller window that appears
    Click on the Disks tab at the top of the window
    Click Erase Disk
    On the next window that appears, select the Quick Erase option, then click Erase
    The operation will only take a few minutes.
    Now you are ready to start new backups of both Macs.  Back up one Mac first, then back up the other.  Do not try to back up both Macs at the same time.

  • Thinking about getting a MacBook, using Boot Camp, anything I should know?

    I'm going into 8th grade and my parents are thinking about getting a MacBook for me. Now, because my school uses PCs with Windows, I will need to use Boot Camp so I can save things from Microsoft Office on a flash drive and take it back and forth from home to school. Before I do this I need to know if there is anything I should be aware of, because I've seen some weird things on sites about Boot Camp, like something called FAT32. I also need to know what version of Windows I should use: XP or Vista? I've heard bad things about Vista, like how there are back ways in for hackers, but it's really eye-catching with its futuristic look. So should I just wait for Windows 7 to come out and replace Vista in the summer or fall of 2010? (Yes, for people who may not know, Microsoft is developing a successor to Vista known as Windows 7. I read about it in my dad's latest issue of Consumer Reports.)

    When you get the computer you can run the Boot Camp Assistant program (Utilities folder) and print out the documentation (it's fairly long.) Read it carefully before proceeding. It should explain what you need to know about installing and using Windows on a Mac.
    You can install any 32-bit version of Vista, or XP with Service Pack 2.
    Windows uses two disk formats: FAT32 or NTFS. Vista requires NTFS, but XP can use either. OS X can read/write FAT32-formatted drives, but NTFS is read-only to OS X. That means that unless the drive is formatted FAT32 you will not be able to transfer files from the OS X volume onto the Windows volume. Neither Vista nor XP is any less secure than the other; both are vulnerable to viruses and malware unless you run anti-virus/anti-malware software to protect the computer.
    There are different ways to run Windows on a Mac. Boot Camp is only one:
    Windows on Intel Macs
    There are presently several alternatives for running Windows on Intel Macs.
    1. Install the Apple Boot Camp software. Purchase Windows XP with Service Pack 2, or Vista. Follow the instructions in the Boot Camp documentation on installing Boot Camp, creating the driver CD, and installing Windows. Boot Camp enables you to boot the computer into OS X or Windows.
    2. Parallels Desktop for Mac and Windows XP, Vista Business, or Vista Ultimate. Parallels is software virtualization that enables running Windows concurrently with OS X.
    3. VMware Fusion and Windows XP, Vista Business, or Vista Ultimate. VMware Fusion is software virtualization that enables running Windows concurrently with OS X.
    4. CrossOver which enables running many Windows applications without having to install Windows. The Windows applications can run concurrently with OS X.
    5. VirtualBox is a new open-source freeware virtual machine, similar to VMware Fusion and Parallels, from Sun (the makers of Solaris). It is not yet fully developed for the Mac, and some features are not yet implemented, but otherwise it does work.
    6. Last is Q. Q is a freeware emulator that is compatible with Intel Macs. It is much slower than the virtualization software, Parallels and VMware Fusion.
    Note that VirtualBox, Parallels, and VMware Fusion can also run other operating systems such as Linux, Unix, OS/2, Solaris, etc. There are performance differences between dual-boot systems and virtualization. The latter tend to be a little slower (not much) and do not provide the video performance of a dual-boot system.
    See MacTech.com's Virtualization Benchmarking for comparisons of Boot Camp, Parallels, and VMware Fusion.

  • C++ libraries I should know about.

    I'm a physics student doing some calculations and learning C++ in the process. Not so long a go, I started using boost library in my applications and found it to be a great help. Is there any other great C++ libraries I should definitely know about (programming in general, BLAS, etc) ?

    For general-purpose computing, Boost and the STL are the obvious places to start. It's hard to imagine a frequently occurring programming problem that hasn't been addressed by Boost.
    If you plan on running your codes in a distributed-memory environment (e.g. a Linux cluster), you will probably want to learn the basics of MPI. MPI-2 does define a C++ API; however, if you read the official MPI-2 documentation, you'll find that the entire C++ API has been deprecated. There are some third-party libraries out there that provide a more modern C++ interface to MPI (e.g. Boost.MPI) than the one defined by the MPI standard.
    There are numerous C++ libraries out there for numerical computing (cf. oonumerics.org). However, I don't think there is a "standard" C++ library for numerical computing. Even BLAS and LAPACK do not define C++ APIs. The Boost uBLAS library is popular, but it only implements BLAS functionality (e.g. uBLAS does not provide the tools needed to solve equations). In the end, you might be better served by first identifying what your specific needs are, focusing on learning the fundamentals of modern C++ software design, and then patching together a custom C++ library (or libraries) that meets your specific needs. It's usually a good idea to try to reuse the tools that are already out there; however, sometimes there are pragmatic and/or pedagogical reasons to develop some tools from scratch.
    For my own research (application of Finite Volume Methods to problems in aeroelasticity and flight dynamics), I have my own C++ numerical library that I've developed mostly from scratch, which provides the building blocks I need to compose a "physics" based model ("classical" continuum mechanics) of my problem. The linear algebra library (or sub-library) is just a C++ wrapper around BLAS and LAPACK; I find it difficult to beat the performance of vendor supplied math libraries--even with modern template meta-programming libraries such as uBLAS.
    Also, Fortran 2003 introduced features for binding Fortran code with C. Most modern compiler suites (e.g. GCC, Intel, Open64) implement the ISO C binding features of Fortran 2003. This means that you can now implement C bindings to legacy Fortran libraries without having to guess the name-mangling scheme used by a particular Fortran compiler. Also, C structs and Fortran derived types are now interoperable. I don't necessarily recommend developing in Fortran, but if you come across a mature library that was written in Fortran, you could potentially create C++ bindings (via C) to that library if need be.
    I hope this helps.
