Anything I should know about the 990FXA-GD80V2 before purchase?

I'm considering getting a new motherboard in the next few days, and currently have my eye on the 990FXA-GD80V2. I plan to pair it with an FX-8350 and an R7850, and will likely run Linux.
I was wondering if there is anything in particular I should know before buying this board. For example, I recall reading somewhere that someone had to do a BIOS update to get their FX-8350 to run at the right voltage.
I'm doing a good bit of research myself, but I'd like to avoid another experience like the one I had with ASRock.

Quote from: miklkit on 28-August-14, 22:46:41
I read your post about ASRock.  Not good, but no 970 board is up to the 8350.  Only experts with mega cash to spend on exotic cooling can keep that combination alive.
  The MSI GD80 is a solid, cool-running board that can handle any FX CPU.  My 8350 will bench and run at over 5 GHz on air cooling but is stable at 4.7 GHz for everyday use.  The 9590 is good for 5 GHz for everyday use.  Because the GD80 runs so cool, air cooling can be used, making this a very cost-effective combination.  I am currently using an ASUS Sabertooth board and it runs hot.  Water cooling is required to overclock it, and I will be going back to the GD80 soon.
  MSI is very conservative with their BIOS settings, which means that you can only run stock clocks unless the utilities provided on their CD are used.  But last December they castrated those utilities too!  I prefer to use ClickBiosII, and here is a link to a working version.
https://www.dropbox.com/sh/gpalg0tpyyfcivy/AAA_vvHgq7MUkdcXPH3Nh5rWa/CLICKBIOSII.7z?dl=0
Thanks for the feedback. I'm curious, though: what exactly about the ASUS Sabertooth board makes it run hotter than the GD80? I'd figure the CPU temps would be about the same, so maybe you're talking about another component, like the VRMs or the northbridge?
Also, what is that ClickBiosII thing? Is it a custom BIOS? From a quick glance at the archive, it looks like a BIOS-configuration tool that can be run from Windows to alter BIOS settings directly?
I put in the order for the GD80V2 a little while ago; it seems it will be a manufacturer-refurbished board. Does anyone have first-hand experience with how warranty is handled for such hardware? From what I understand, used hardware carries whatever warranty remains from the hardware's original purchase date, but refurbished hardware carries only a 90-day warranty. Is the 90-day limit true, and if so, is it a "hard" limit (as in, you get absolutely zero support after 90 days), or is it handled case by case (MSI "might" be kind enough to do an RMA after 90 days, depending on the issue)?

Similar Messages

  • Black Magic Intensity Pro....anything I should know about?

    I will be getting the Intensity Pro within a couple of days, and I was wondering if anyone here has used it or is using it. Is there anything I should know about that I may have overlooked?

    I just noticed that the screen does not sit securely when it's closed. There seems to be a 'bow' in the whole panel. Have you heard about this before?
    This is actually a common issue on PowerBooks and MacBook Pros. Apple suggests not to worry about the "cosmetic defect", as it doesn't affect performance and the panel was designed that way to prevent the screen from touching the keyboard. I still can't quite buy that excuse...

  • Anything I should know about using M-Audio FireWire with a 24-inch iMac 2.16?

    I just bought a used 24-inch 2.16 Core 2 Duo iMac; my Mac mini has a dead FireWire port, which I need for an audio interface, so this is a replacement/upgrade for it. I'll be using it with an M-Audio ProFire Lightbridge, and for 3D graphics.
    Are there any problems and/or quirks about this machine I should be aware of? Specifically with firewire audio interfaces, but anything else I should be looking for?
    Also, is it possible to upgrade the graphics card in these?

    Quote from: nascarmike on 01-October-06, 00:10:10
    I also have the MSI NEO4 SLI-F. I have been trying to figure out how to get all four DIMMs loaded. Are you saying that by changing to 2T in the BIOS I can populate all 4 DIMMs at DDR400? If not, what would you recommend as the best RAM configuration for this board?
    It depends on which CPU you actually have. You may need to plug and pray to get it running at DDR400 with a 1T command rate, but it normally works at 2T.
    Quote from: Kitt on 01-October-06, 12:49:36
    Thanks again... I downloaded all the relevant drivers/files from the MSI site, unarchived them to their own folders, and burnt them to DVD.
    If I read the manual correctly, I am to put each stick of the same kind (i.e. Kingston) in the GREEN slots first.  However, I posted the same "Before..." question to the usenet group "a.c.p.m.msi-microstar" and was advised to put the RAM in one GREEN slot and one PURPLE slot, side by side.  Which is correct?  Both GREEN first, or one in GREEN and one in PURPLE?
    Thanks for the info on the 1T and 2T memory command rates... The processor is an AMD64 3800+ Venice revision E, Socket 939.  As I understand it, installing 4 double-sided DIMMs will only yield 333 MHz; however, it would be great if 1T could work to achieve 400 MHz.
    --Brian
    You may need a different combination of RAM timings and voltage, since you have different brands and memory capacities. Try to get RAM of the same model and timings; that may help you reach DDR400 if you're lucky enough. Others have had to keep it at DDR333. Good luck.

  • Buying a second-hand iBook - anything I should know?

    I'm buying a second-hand G3 iBook 500 MHz from a friend of mine, which was bought when the G4 iBooks first came out at the end of 2004.
    The logic board has been replaced under warranty, but the hard drive is corrupt, meaning I'll have to change it [I've already researched it and am confident I can do it without a hitch].
    I have an old 9.5 mm 30GB HDD from a Windows laptop which I'll be putting in the iBook.
    Is there anything I should know? The HDD isn't formatted [I figure there's a disk utility in the OS X installer that can format the drive]. Is it recommended that I do anything out of the ordinary, like installing OS X, then reformatting and reinstalling it again after the first boot [to condition the drive or anything]?
    Any other tips and tricks?
    Thanks

    So, I bought it.
    Turns out it was an iBook 800, not a 500. It also had a 30GB drive instead of the 15GB the seller told me, as well as an extra 128MB of RAM [256MB total].
    When I first turned it on, it booted into OS X 10.3.9 and ran for a bit, sluggishly, until I tried to run the software updater. Then it died. I had the original 10.2.1 install disks, so I formatted and reinstalled, which worked.
    Every time I ran the software updater and installed the 10.2.8 combo [either through the updater itself, or by downloading it from the Apple support site and running the dmg myself], it died again.
    Anyway, I ended up pulling the whole thing apart and replacing the HDD with one from an old laptop [another 30GB], with help from the guide at macfixit.com. I went out and got Tiger, installed it fresh on the new HDD, and voila - works beautifully.
    So far, no problems at all. I'm really, really liking it [my first Mac] so far.
    Thanks for help everyone

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
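    To make the double-byte scheme concrete, here is a minimal Java sketch (the codepage and sample bytes are my illustration, not the article's; in Shift_JIS the byte pair 0x93 0xFA encodes 日):

        import java.nio.charset.Charset;

        public class DbcsDemo {
            public static void main(String[] args) {
                // The lead byte 0x93 selects a block; the trail byte 0xFA
                // picks one character within it – two bytes, one character.
                byte[] pair = { (byte) 0x93, (byte) 0xFA };
                String s = new String(pair, Charset.forName("Shift_JIS"));
                System.out.println(s + " (" + s.length() + " character from " + pair.length + " bytes)");
            }
        }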
    And for a while this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
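    As a minimal Java sketch of Point 1 (the file name and content are placeholders): declare the encoding in the XML header and make the writer use the same charset.

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.nio.charset.StandardCharsets;

        public class WriteXml {
            public static void main(String[] args) throws IOException {
                try (Writer w = new OutputStreamWriter(
                        new FileOutputStream("example.xml"), StandardCharsets.UTF_8)) {
                    // The declared encoding and the writer's charset must agree.
                    w.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
                    w.write("<greeting>Grüße</greeting>\n");
                }
            }
        }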
    Now let's look at UTF-8, because, as the de facto standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character. And assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
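    A quick way to see the variable-length design is to count the UTF-8 bytes of a few characters; a small Java sketch (the sample characters are arbitrary):

        import java.nio.charset.StandardCharsets;

        public class Utf8Lengths {
            public static void main(String[] args) {
                // A is ASCII, ß is Latin-1, € and 日 sit higher in Unicode.
                for (String s : new String[] { "A", "ß", "€", "日" }) {
                    System.out.println(s + " -> "
                            + s.getBytes(StandardCharsets.UTF_8).length + " byte(s)");
                }
                // Prints 1, 2, 3 and 3 bytes respectively.
            }
        }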
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character that, in their text editor using the codepage for their region, looks like ß, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
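    Here is that exact failure as a minimal Java sketch, using ß as the example character: the editor saves one Latin-1 byte, and a UTF-8 reader chokes on it.

        import java.nio.charset.StandardCharsets;

        public class Mojibake {
            public static void main(String[] args) {
                // An editor using the Latin-1 codepage saves ß as the single byte 0xDF.
                byte[] latin1 = "ß".getBytes(StandardCharsets.ISO_8859_1);
                // A UTF-8 reader treats 0xDF as the lead byte of a 2-byte sequence;
                // lacking a valid continuation byte, it substitutes U+FFFD.
                System.out.println(new String(latin1, StandardCharsets.UTF_8)); // prints �
            }
        }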
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create one with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
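    A minimal Java sketch of Point 3 (the path is a placeholder): name the charset explicitly rather than inheriting whatever the platform default happens to be.

        import java.io.BufferedReader;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ReadWithCharset {
            public static void main(String[] args) throws Exception {
                // Naming the charset makes the decode identical on every machine.
                try (BufferedReader r = Files.newBufferedReader(
                        Paths.get("notes.txt"), StandardCharsets.UTF_8)) {
                    r.lines().forEach(System.out::println);
                }
            }
        }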
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
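    For instance, a Java sketch of Point 4 using the JDK's own StAX writer (the file name is a placeholder): the library emits the declaration itself, so the declared encoding and the actual bytes cannot drift apart. (Whether an encoder also writes a byte-order preamble varies by library.)

        import java.io.FileOutputStream;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class XmlEncoderDemo {
            public static void main(String[] args) throws Exception {
                try (FileOutputStream out = new FileOutputStream("example.xml")) {
                    XMLStreamWriter w = XMLOutputFactory.newFactory()
                            .createXMLStreamWriter(out, "UTF-8");
                    // The writer emits <?xml version="1.0" encoding="UTF-8"?> itself.
                    w.writeStartDocument("UTF-8", "1.0");
                    w.writeStartElement("greeting");
                    w.writeCharacters("Grüße");
                    w.writeEndElement();
                    w.writeEndDocument();
                    w.close();
                }
            }
        }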
    Ok, you're reading & writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders in the Java & .NET runtimes are designed to do. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
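    One wrinkle worth seeing in code: Java's 16-bit char actually holds a UTF-16 code unit, so a character outside the 16-bit range occupies two chars. A small sketch:

        public class CharVsCodePoint {
            public static void main(String[] args) {
                String clef = "\uD834\uDD1E"; // U+1D11E MUSICAL SYMBOL G CLEF, one character
                // char holds a UTF-16 code unit, so this character needs two of them.
                System.out.println(clef.length());                         // 2
                System.out.println(clef.codePointCount(0, clef.length())); // 1
            }
        }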
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big-iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the single-byte range of UTF-8 anyway.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small-volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because, as the de facto standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every Unicode character. And assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode, all of Unicode, are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character that, in their text editor using the codepage for their region, looks like ß, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create one with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what those encoders in the Java & .NET runtimes are designed to do. You read in and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters which will fail to compile.
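    A minimal illustration of that compiler behavior (the offending line is left commented out and written with a doubled backslash so that this file itself still compiles):

        public class EscapeTrap {
            public static void main(String[] args) {
                // Written with a single backslash before u000A, the next line
                // would not compile: the Unicode escape is translated into a
                // real line terminator before parsing, splitting the literal.
                // String s = "\\u000A";
                String ok = "\n"; // the ordinary escape is processed later and is safe
                System.out.println(ok.length()); // prints 1
            }
        }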
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. There is absolutely no point for someone creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: in high-volume systems, moving and storing bytes is relevant. As such, one must carefully consider for each text element whether it is customer-consumable or internally consumable. Saving bytes in such cases reduces the total load on the system; in such systems incremental savings affect operating costs and, through speed, marketing advantage.

  • I need to upgrade the memory in my mid-2010 Mac to a 4GB module; it currently has 2GB as two separate 1GB modules. Is it compatible, and what would it cost?

    I need to upgrade the memory in my mid-2010 Mac to a 4GB module; it currently has 2GB as two separate 1GB modules. Is it compatible, and what would it cost?

    This subforum is about running Windows on Macs; maybe you should try:
    https://discussions.apple.com/community/notebooks/macbook

  • 7 Things every Adobe AIR Developer should know about Security

    7 Things every Adobe AIR Developer should know about Security
    1. Your AIR files are really just zip files.
    Don't believe me? Change the .air extension to .zip and unzip it with your favorite compression program.
    What does this mean for you, the developer? It means that if you thought AIR was a compiled, protected format, alas, it is not.
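    You can verify point 1 without even renaming the file. A minimal Java sketch (the .air path is a placeholder) lists the archive's entries directly:

        import java.util.Collections;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class AirPeek {
            public static void main(String[] args) throws Exception {
                // An .air package is an ordinary zip archive, so ZipFile opens it as-is.
                try (ZipFile air = new ZipFile("MyApp.air")) {
                    for (ZipEntry e : Collections.list(air.entries())) {
                        System.out.println(e.getName() + "  " + e.getSize() + " bytes");
                    }
                }
            }
        }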
    2. All your content is easily accessible in the AIR file.
    Since we now know that the AIR file is really just a zip file, unzip it and see what's inside. If you added any content references when you published the AIR file, voila, there it all is.
    What does this mean for you, the developer? Well, your content is sitting there ripe for the picking, and so is everything else, including your application descriptor file, images, etc.
    3. Code signing your AIR app does nothing for security.
    All code signing your app does is verify to the end user that someone published the app. It does nothing as far as encryption and does nothing to protect your content.
    What does this mean for you, the developer? Well, you should still do it, because getting publisher "unknown" is worse. It also means that Joe Hacker would not be able to decompile your entire app and republish it with the same certificate, unless they somehow got a hold of that too.
    4. All your AIR SWF content is easily decompilable.
    Nothing new here; it's always been this way. Type "flash decompiler" into Google and you'll find a variety of decompilers for under $100 that will take your AIR content SWF and expose all your source code and content in no time.
    What does this mean for you, the developer? All your content, code, URLs, and intellectual property is publicly available to anyone with a decompiler, unless you do some extra work and encrypt your SWF content files, which is not currently a feature of AIR, but can be done if you do your homework.
    5. Your SQLite databases are easy to get at.
    SQLite databases can be accessed from AIR or any other program on your computer that knows how to work with them. Unless you put your database in the local encrypted datastore, or encrypt your entire database, it's pretty easy to get at, especially if you create it with a .db extension.
    What does this mean for you, the developer? Well, SQLite is very useful, but just keep in mind that your data can be viewed and altered if you're not careful.
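    The ease of spotting an unencrypted SQLite database can be demonstrated by reading the file's first bytes; a minimal Java sketch (the path is a placeholder):

        import java.io.FileInputStream;

        public class SqlitePeek {
            public static void main(String[] args) throws Exception {
                // Every unencrypted SQLite file starts with the plain-text
                // header "SQLite format 3", so the format is trivially identifiable.
                byte[] header = new byte[16];
                try (FileInputStream in = new FileInputStream("appdata.db")) {
                    in.read(header);
                }
                System.out.println(new String(header, "US-ASCII"));
            }
        }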
    6. The local encrypted datastore is useful, but....
    The local encrypted datastore is useful, but developers need a secure way of getting information into it. Storing usernames, passwords, and URLs in clear text is a bad idea, since, as we discussed, your code is easy to decompile and read. By putting info into the local encrypted datastore, the data is encrypted and very difficult to get at. The problem is: how do you get it in there without having to store any readable info in the AIR file, and without the necessity of communicating with a web server? Even if you called a web service and pushed the returned values into the datastore, this is not ideal, since you may have encoded the URLs to your web service into your code, or someone could intercept the results from the web service call.
    What does this mean for you, the developer? Use the local datastore, and hope that we get some new ways of protecting content and data from Adobe in the next release of AIR.
    7. There are some things missing from the current version of AIR (1.1) that could really help ease the concerns of people trying to develop serious applications with AIR.
    Developers want more alternatives for the protection of local content and data. Some of us might want to protect our content and intellectual property; remember, not all of us are building toys with AIR. Other than the local encrypted datastore, there are not currently any built-in options I'm aware of for encrypting other content in the AIR file, unless you roll your own.
    What does this mean for you, the developer? Well, I've been told that Adobe takes security very seriously, so I'm optimistic that we'll see some improvements in this area soon. If security is as much a concern for you as it is for me, let them know.

    Putting "secret data" as a clear text directly in your code
    is a broken concept in every environment, programing language.
    Every compiled code is reversible, especially strings are really
    easy to extract.
    There is no simple, straightforward way to include secret
    data directly with your app. This is a complicated subject, and if
    you really need to do this, you'll need to read up on it a bit.
    But in most cases this can be avoided or worked around
    without compromising security. One of the best ways is to provide
    the user with a simple "secret key" alongside the app (best way is
    the good old login/password). The user installs the app, and
    provides his "secret key", that goes directly into
    EncryptedLocalStore, and then you use this "secret key" to access
    the "secret data" that's stored on your server. Then you can
    transfer the "secret data" directly into EncryptedLocalStore.
    As for the whole thread:
    Points 1-5 -> Those points do not concern AIR apps only. If you are developing an application in any language, you should follow these rules, meaning:
    - Code installed on a user's computer is easily accessible.
    - Data stored locally is easily accessible, even if it is encrypted using any symmetric-key encryption, because the encryption algorithm and encryption key are in your source code (you could probably write a book on using public-key encryption, so let's just leave it for now ;)
    Point 6 -> Is a valid one. All your app's security should rely on the EncryptedLocalStore. But it is your job to get the data securely into the ELS, because there is no point in encrypting data that can be intercepted.
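    As a hedged Java sketch of that "derive, don't embed" idea (PBKDF2 is my illustration here, and the parameters are illustrative rather than a vetted configuration): derive the encryption key from the user-supplied password at runtime, so nothing secret ever ships inside the app.

        import java.security.SecureRandom;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;

        public class DeriveKey {
            public static void main(String[] args) throws Exception {
                char[] password = "user-supplied-secret".toCharArray(); // never hard-coded
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt); // store the salt next to the ciphertext
                PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
                byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                        .generateSecret(spec).getEncoded();
                System.out.println("derived key bytes: " + key.length); // 32
            }
        }

    The point is the shape of the flow: the secret enters from the user, is stretched into a key, and only then touches stored data.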

  • I updated my iPad 1 through iTunes from 4.2.1 to 4.3.5 and am now having issues, especially playing videos. Anything I should know? I have not jailbroken my iPad and don't want to.

    I updated my iPad 1 through iTunes from 4.2.1 to 4.3.5 and am now having issues, especially playing videos. Anything I should know? I have not jailbroken my iPad and don't want to.

    You update it by connecting it to a computer running iTunes, starting iTunes on the computer, clicking the device under Devices, and clicking the Update button in the content pane.
    If it says you already have the latest version, that's because you don't own an iPhone 3GS but an iPhone 3G, for which version 4.2.1 was the last release.
    In that case you can't ever update it.
    http://en.wikipedia.org/wiki/IOS_version_history#Current_versions

  • Anyone know about the new SOi 6 update?

    Anyone know about the new SOi 6 update?

    I think you should use nor again. And thanks, but I figured it out.

  • How do I know about the whole tables, sequences, triggers of the specific D

    Hello,
    I can list all the tables of the DB user with select * from tab; but can I get the full metadata of the tables, sequences, triggers, and procedures of a specific user?
    Best regards

    Raakh wrote:
    Hello,
    I can list all the tables of the DB user with select * from tab; but can I get the full metadata of the tables, sequences, triggers, and procedures of a specific user?
    Oracle doesn't expose the metadata just like that; it shows it in the various columns of various views based on the object type - for example, for tables it would be in user_tables, and so on. If you are interested in the metadata of a specific object, you should use the dbms_metadata package.
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_metada.htm#ARPLS641
    HTH
    Aman....
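    For instance, a minimal JDBC sketch (the connection string, credentials, and object names are placeholders; an Oracle JDBC driver is assumed on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MetadataDemo {
            public static void main(String[] args) throws Exception {
                // dbms_metadata.get_ddl returns the complete CREATE statement
                // for the named object in the named schema.
                String sql = "SELECT dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') FROM dual";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }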

  • I would like to know about the subtitles of movies...

    Hi!
    I'm from Greece and I would like to know about the subtitles of movies.
    Specifically, how can I know whether a movie I would like to buy or rent has Greek subtitles?
    Thank you in advance

    I don't think there's a lot of technical information in those specs.  And most of these questions aren't straightforward.
    Easiest first: the color temperature is around 5500-6000K, a little cooler than sunlight, and will vary depending on power.
    The flash duration is a bit more vague and depends on where you draw the line (the point at which you consider the flash to be off); that can change the specs significantly. The full-power spec from Canon puts it at about 1/800 for the 580EX II, but anyone who has tried to freeze motion at full power will tell you that's false. I find it to be more like 1/200. You can loosely extrapolate the lower-power times, as flashes simply double the duration to double the power.
    Actually, I found this website that has some info on the 580 II durations, which should get you in the ballpark for the 600.
    http://speedlights.net/2011/04/18/canon-speedlite-580ex-ii-flash-review/
    And David Hobby had a great article on it at Strobist:
    http://strobist.blogspot.com/2010/06/rise-and-fall-of-machines-understanding.html
    There's even more technical discussions you can dig up if interested.

  • I would like to know about the iPhone 5 unlocked

    I would like to know about the iPhone 5 unlocked.
    Does it support LTE on 1800 MHz GSM?
    Because I am moving to Kuwait and their LTE provider uses GSM 1800.
    Thank you.

    You should be fine based on this...
    http://www.apple.com/iphone/LTE/

  • 2 iPod minis and 1 XP PC - is there anything I should know?

    Hi, I already have an iPod mini (working fine at the moment) and my son has just purchased one; I will be connecting it to the PC shortly. Is there anything I should know to make life easier?

    I just joined, since my wife and daughter have had two iPods working well with an XP machine - until today. My daughter's is an iPod mini and my wife's is a 20GB iPod. We have been using method two from the link that was provided for the past three months. But today, it appears that iTunes will only recognize my daughter's mini, and it displays her name - for both the mini and my wife's iPod ;-(
    I'm trying to figure out what might have happened. That was with iTunes 4.8 - I installed iTunes 5 and retried it. Same. Then we reset my wife's iPod, and iTunes still detected it as my daughter's. It seems like a data file locked in iTunes is causing the naming display to get confused.
    The music is OK on both and is still partitioned by playlists and updated that way.
    Any thoughts?

  • How do we let Apple know about the Apple ID Issue?

    Hi, there seems to be an issue (on Apple's end) with signing in to the iTunes Store and the Mac App Store. How do we let them know about the issue? Their service status page shows all green lights, so it seems they are not aware of the problem... I don't know about you, but I'm getting really frustrated!! PS: It seems that my warranty has expired, so I don't have direct access to their support?! HEEEEEEEEELP! THAAAAAANX!!!!!
    Milan, Italy, March 11

    Hi valelorandi,
    I understand that you are seeing an issue with your connection to the iTunes Store and Mac App Store. I have an article for you that will help you address this issue, and it can be found below:
    Can't connect to the iTunes Store - Apple Support
    https://support.apple.com/en-us/HT201400
    Thanks for using the Apple Support Communities. Have a good one!
    -Braden

  • Hi. I purchased my iPhone 4S from a guy; my problem is that when I update any app from the App Store and click Update, an e-mail address appears whose password I don't know. How can I remove that e-mail address so I can update my apps?

    Hi. I purchased my iPhone 4S from a guy; my problem is that when I update any app from the App Store and click Update, an e-mail address appears whose password I don't know. How can I remove that e-mail address so I can update my apps?

    Yes. Delete the apps that were not purchased using your Apple ID.
    But a Restore as New is the way to go.

Maybe you are looking for

  • iPad won't sync all my contacts from Address Book?

    I've synced my iPad a few times in an attempt to get all my contacts from Address Book, but it will only sync 35 of them and not the 130+ I have in Address Book. Any ideas on that one? Message was edited by: boxer570

  • Error Occurred While Processing Request

    Hi, I am getting this message when attempting to open my website: Error Occurred While Processing Request - Cannot find CFML template for custom tag header. ColdFusion attempted looking in the tree of installed custom tags but did not find a custom tag

  • QM Master Data templates & Initial study templates

    Hi Experts, I am on a new QM implementation project. Please send the master data templates, QM presentation & initial study templates to my email IDs: [email protected]   [email protected] Thanking you in advance Selvam.s

  • Using PHP scripts to authenticate - "unique_name_from_itunes" ???

    We are attempting to use the PHP scripts (version 1.1) designed by Aaron Axelsen of University of Wisconsin - Whitewater (http://omega1.uww.edu/itunesu) to help facilitate authentication to iTunes U. We're bouncing credentials off our AD via PHP and

  • Installed flash won't run on Mac OS 10.5.8?

    Flash Player 10.3 has stopped working on Safari on our Mac OS x 10.5.8; I have installed/uninstalled it several times but we still get the same message. I have researched this forum and thought I had found success on a question posed Sep 26, 2011, ho