Politically Safe Algorithm ???

I have been playing around with the javax.crypto and java.security classes and wanted to see a comparison of the pros and cons of the available algorithms when I found [this article|http://www.javamex.com/tutorials/cryptography/ciphers.shtml]. Does anyone know what the author might mean by a "politically safe" algorithm?
The author states about AES:
> It is the algorithm that most people will end up using unless they have a strong reason to use anything else

and then goes on to state:
> it is a politically safe decision
I'm not sure whether this is because AES is NIST's encryption standard and is reportedly endorsed by the US government.

cdc123 wrote:
> Does anyone know what the author might mean by politically safe algorithm? [...] I'm not sure if this is meant because AES is the encryption standard of the NIST and reportedly supported by US government.

It might mean that people are less likely to claim you chose the wrong algorithm. If you use an algorithm that people are familiar with and that is commonly used, you are less likely to be asked to defend your decision than if you choose an algorithm you found on a shady website (even if the shady algorithm is indeed better).
Just a guess, though.
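
For anyone experimenting along the same lines, picking AES in javax.crypto comes down to the transformation string passed to Cipher.getInstance(). A minimal sketch (it leans on the provider's default mode and padding; real code should name an explicit mode, padding and key size):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class AesDemo {
    public static void main(String[] args) throws Exception {
        // Generate a 128-bit AES key - the "politically safe" default.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // "AES" alone uses the provider's default mode/padding; prefer an
        // explicit transformation such as "AES/CBC/PKCS5Padding" in real code.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("hello".getBytes("UTF-8"));
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}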

Similar Messages

  • Throwing or handling the exception.

    Hi,
    Which is the better idea?
    *1 .* throwing the exception to the caller method.
    *2 .* handle the exception at the same place (with the help of try/catch).
    Any pointer will be highly appreciated.
    Regards,
    Alok

    Usually #1, though sometimes you'll wrap it in an exception more appropriate for that layer.
    To handle the exception, you have to truly handle it. You have to provide a correction for whatever went wrong. That might mean retrying, using some default value, or calling some other "safe" algorithm. If you can't actually fix it, you have to throw something to let the caller know what went wrong. It's then up to him to determine whether to handle it or throw it to his caller.
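
    As a minimal sketch of option 1 with wrapping - all the names here (OrderRepository, DataAccessException, the JDBC stub) are hypothetical; only the pattern is the point:

    import java.sql.SQLException;

    // A layer-appropriate unchecked exception that preserves the cause.
    class DataAccessException extends RuntimeException {
        DataAccessException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    class OrderRepository {
        Object load(long id) {
            try {
                return queryDatabase(id);          // may throw SQLException
            } catch (SQLException e) {
                // We can't truly handle it here (no retry, no default value),
                // so wrap it in an exception that fits this layer and rethrow.
                throw new DataAccessException("could not load order " + id, e);
            }
        }

        private Object queryDatabase(long id) throws SQLException {
            throw new SQLException("stub");        // stand-in for real JDBC code
        }
    }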

  • Image Processing Algorithms - From Matlab to Pixel Bender

    Hello.
    Got a few Image Processing (Mainly Image Enhancement) Algorithms I created in Matlab.
    I would like to make them run in Photoshop, i.e. create a plug-in out of them.
    Would Pixel Bender be the right way doing it?
    The Algorithms mainly use Convolutions and Fourier Domain operations.
    All I need is a simple Preview Window and a few Sliders, a Drop-down Box and Buttons.
    I'd appreciate your help.

    pixel vs float - I couldn't figure out what exactly the difference is, if there is one at all. I assume pixel always gets clipped into [0 1] and float won't until it gets shown on the screen as an output?
    There is no difference between them. At one stage of development we had some ideas about the way the pixel type should work that would make it different to float, but the ideas never came to anything, and by the time we realized that, it was too late to change. It's #1 on my list of "mistakes we made when developing Pixel Bender".
    Regions - Let me see if I get this straight. For the example, assume a Gaussian blur kernel of radius 5 (not the STD, but the radius - an 11x11 matrix). I should use "needed()" in order to define the support of each output pixel in the input image. I should do it to make sure no one changes those values before the output pixel is calculated.
    Now, in the documentation it goes needed(region outputRegion, imageRef inputIndex). Should I assume that by default the outputRegion is actually the sampled pixel in the input? Now I use outset(outputRegion, float2(x, y)) to enlarge the "safe zone". I don't get this float2 number. Let's say it's (4, 3) and the current pixel is (10, 10). Now the safe zone goes 4 pixels to the left, 4 to the right, 3 up and 3 down? I assume it actually creates a rectangular area, right? Back to our example, I should set outset(outputRegion, float2(5.0, 5.0)), right?
    Needed is the function the system calls to answer the question "what area of the input do I need in order to calculate a particular area of the output?".
    I should do it to make sure no one changes those values before the output pixel is calculated.
    No, you should do it to make sure the input pixel values needed to compute the output pixel values have been calculated and stored.
    Should I assume that at default the outputRegion is actually the sampled pixel in the input?
    No. When "the system" (i.e. After Effects, PB toolkit or the Photoshop plugin) decides it wants to display a particular area of the output, it will call the needed function with that area in the outputRegion parameter. The job of the needed function is to take whatever output region it is given and work out what input area is required to compute it correctly.
    Let's say it's (4, 3) and the current pixel is (10, 10).
    Don't think in terms of "current pixel" when you're looking at the needed function. The region functions are not called on a per-pixel basis, they are called once at the start of computing the frame, before we do the computation for each pixel.
    Back to our example I should set outset(outputRegion, float2(5.0, 5.0)) right?
    Yes - you're correct. Whatever size the output region is, you require an input region that has an additional border of 5 pixels all round to calculate it correctly.
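
    To make the region arithmetic concrete, here is a small illustration in Java rather than Pixel Bender (Rectangle stands in for the region type, and outset() is our own helper mirroring the built-in):

    import java.awt.Rectangle;

    class RegionMath {
        // Grow a region by dx pixels left and right, dy pixels up and down.
        static Rectangle outset(Rectangle r, int dx, int dy) {
            return new Rectangle(r.x - dx, r.y - dy, r.width + 2 * dx, r.height + 2 * dy);
        }

        public static void main(String[] args) {
            Rectangle outputRegion = new Rectangle(0, 0, 640, 480);
            // An 11x11 kernel (radius 5) needs a 5-pixel border all round.
            Rectangle neededInput = outset(outputRegion, 5, 5);
            System.out.println(neededInput); // x=-5, y=-5, width=650, height=490
        }
    }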

  • java.util.Locale not thread-safe!

    In multithreaded programming, we know that the double-checked locking idiom is broken. But lots of code, even in the Sun Java core libraries, is written using this idiom, for example the class "java.util.Locale".
    I have submitted this bug report just now,
    but I wanted to have your opinion about this.
    Don't you think a complete review of the source code of the core libraries is necessary?
    java.util.Locale does not appear to be thread safe, judging from the source code.
    The static method getDefault() is not synchronized.
    The code is as follows:
    public static Locale getDefault() {
        // do not synchronize this method - see 4071298
        // it's OK if more than one default locale happens to be created
        if (defaultLocale == null) {
            // ... do something ...
            defaultLocale = new Locale(language, country, variant);
        }
        return defaultLocale;
    }
    This method seems to have been synchronized in the past, but the bug report 4071298 removed the "synchronized" modifier.
    The problem is that on multiprocessor machines, each processor has its own cache, and the data in these caches is not necessarily synchronized with main memory.
    The lack of a memory barrier, which is normally provided by the "synchronized" modifier, can make a thread read an incompletely initialized Locale instance referenced by the static private variable "defaultLocale".
    This problem is well explained in http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html and other documents about multithreading.
    I think this method must just be synchronized again.

    Shankar, I understand that this is something books and articles about multithreading don't talk much about, because for marketing reasons multithreading is supposed to be very simple.
    That is absolutely not the case.
    Multithreading IS a most difficult topic.
    First, you must be aware that each processor has its own high-speed cache memory, much faster than the main memory.
    This cache is made of a mixture of registers and L1/L2/L3 caches.
    Suppose we have a program with a shared variable "public static int a = 0;".
    On a multiprocessor system, suppose that a thread TA running on processor P1 assigns a value to this variable: "a = 33;".
    The write is done to the cache of P1, but not in the main memory.
    Now, a second thread TB running on processor P2 reads this variable with "System.out.println(a);".
    The value of "a" is retrieved from main memory, and is 0!
    The value 33 is in the cache of P1, not in main memory where its value is still 0, because the cache of P1 has not been flushed.
    When you are using a BufferedOutputStream, you use the "flush()" method to flush the buffer, and the underlying FileDescriptor's "sync()" method to commit data to disk.
    With memory, it is the same thing.
    The Java "synchronized" keyword is not only a traffic light to regulate access, it is also a "memory barrier".
    The opening brace "{" of a synchronized block writes the data of the processor cache into the main memory.
    Then, the cache is emptied, so that stale values of other data don't remain there.
    Inside the "synchronized" block, the thread must thus retrieve fresh values from main memory.
    At the closing brace "}", data in the processor cache is written to main memory.
    The word "synchronized" has the same meaning as the "sync()" method of the FileDescriptor class, which writes data physically to disk.
    You see, it is really a cache communication problem, and synchronized blocks allow us to devise a kind of data transfer protocol between main memory and the multiple per-processor caches.
    The hardware does not do this memory reconciliation for you. You must do it yourself using a "synchronized" block.
    Besides, inside a synchronized block, the processor (or compiler) feels free to write data in any order it finds most appropriate.
    It can thus reorder assignments and instructions.
    It is like the elevator algorithm used when you store data onto a hard disk.
    Writes are reordered so that they can be retrieved more efficiently by one sweep of the magnetic head.
    This reordering, as well as the arbitrary moment at which the processor decides to reconcile parts of its cache with main memory (if you don't use synchronized), is the source of the problem.
    A thread TB on processor P2 can retrieve a non-null pointer, and retrieve this object from main memory, where it is not yet initialized.
    It has been initialized in the cache of P1 by TA, though, but TB doesn't see it.
    To summarize, use "synchronized" every time you access shared variables.
    There is no other way to be safe.
    Do you get the problem now?
    (Note that this problem has strictly nothing to do with the atomicity issue, but most people tend to mix the two topics.
    Besides, as each access to a shared variable must be done inside a synchronized block, the issue of atomicity is not important at all.
    Why would you care about atomicity if you can get a stale value?
    The only case where atomicity is important is when multiple threads access a single shared variable outside a synchronized block. In that case, the variable must be declared volatile, which in theory synchronizes main and cache memory and makes even long and double atomic, but as it is broken in lots of implementations, ...)
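
    To tie this back to the Locale example, here is a sketch of the two standard fixes for unsynchronized lazy initialization (Config is a hypothetical stand-in for Locale):

    class Config {
        private static Config instance;

        // Fix 1: the simple one - synchronize the getter so there is a
        // memory barrier on every access.
        static synchronized Config getDefaultSynchronized() {
            if (instance == null) {
                instance = new Config();
            }
            return instance;
        }

        // Fix 2: initialization-on-demand holder - the class loader
        // guarantees thread-safe initialization, with no locking on reads.
        private static class Holder {
            static final Config INSTANCE = new Config();
        }

        static Config getDefaultViaHolder() {
            return Holder.INSTANCE;
        }
    }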

  • Multdecl error in algorithm.cc /w SS C++ 5.12

    Howdy,
    I am running into trouble with multiple declaration errors in the libCstd algorithm.cc file while trying to compile a C++ file. I'm hoping somebody has a clue.
    Background - I am trying to port a library that was originally written using GNU C++ on Solaris to Sun Studio C++ on Solaris. To make a long story short, this library needs to be linked with a program that was already written for Sun Studio C++. Since I've been told you can't mix and match GNU and Sun C++ code together, I'm trying to recompile this library using Sun Studio C++. Recompiling the program with GNU C++ is not an option for various technical and political reasons. Recompiling the library is the only option at this time.
    Version - CC: Sun C++ 5.12 SunOS_sparc 2011/11/16
    ~/SolarisStudio12.3-solaris-sparc-bin/solarisstudio12.3/prod/bin/CC -DNDEBUG -KPIC -g -DNO_MEMBER_TEMPLATES -c -I../../include -I../base -I../parser -I../dc -errtags -c DcPeerManager.cpp
    "../parser/DiameterAVP.h", line 99: Warning, wvarhidemem: length hides DiameterAVP::length.
    "../base/ObjectQueue.h", line 91: Warning, incompletew: The type "DcEvent", used in delete, is incomplete.
    "../base/ObjectQueue.h", line 185: Warning, incompletew: The type "DcEvent", used in delete, is incomplete.
    "DcEvent.h", line 107: Warning (Anachronism), assumetemp: Using ObjectQueue as a template without a declaration.
    "/home/dmd/SolarisStudio12.3-solaris-sparc-bin/solarisstudio12.3/prod/include/CC/Cstd/algorithm.cc", line 908: Error, multdecl: Multiple declaration for __stl_threshold.
    "/home/dmd/SolarisStudio12.3-solaris-sparc-bin/solarisstudio12.3/prod/include/CC/Cstd/algorithm.cc", line 1082: Error, multdecl: Multiple declaration for __stl_chunk_size.
    I looked at algorithm.cc and could not find these alleged multiple declarations. The problem must lie elsewhere but I really have no clue what is causing it.
    Does anybody have any ideas of how to troubleshoot this?
    TIA

    There are several things that could be wrong.
    First, check to see if algorithm.cc is explicitly included anywhere. If so, change the code so that it only uses a standard include directive:
    #include <algorithm>
    One possible subtle problem involves the compiler's default behavior when including files containing template code. By default, if a template is declared in a header file and is used, the compiler looks for a corresponding .cc (or .cpp, .C, etc.) file and includes it automatically. If the source code was not written with automatic inclusion in mind, you can get duplicate definition errors due to multiple inclusion of the same file.
    You can read more about the "definitions-included" and "definitions-separate" template compilation model in the C++ Users Guide, chapter 5, "Program Organization".
    However, I don't see how this feature could result in multiple declarations from algorithm.cc, unless the project source code tries to be clever about standard headers, such as including a preprocessed standard header. That is, something like this:
    % cat foo.cc
    #include <algorithm>
    % CC -E foo.cc > algo.h  # don't do this!
    % cat myproject.cc
    #include "algo.h" // don't do this
    ...
    First, check for something odd like the above. If that's not the problem, try adding the option
    -template=no%extdef
    to disable the automatic inclusion of source code files.
    If still no joy, please post a small sample program that results in the same error messages.

  • Waterfox claims to be a 64 bit version of Firefox. Is it? Is it safe?

    I noticed IE provides a 64 bit version. In looking for a FF 64 bit version I found Waterfox. It claims it is a 64 bit version of FF. Is it? Is it safe? Are there plans for a 64 bit version of FF?

    I have used Waterfox under Windows 7 for about the past 4 months with no problems. I have not detected any virus or malware of any kind. I believe it is a good-faith project that does what it says, nothing more.
    The speed seems to be somewhat better. One of the main advantages is that if I have a few hundred tabs open (which I do more often than you'd imagine), the 64-bit browser can allocate the RAM, whereas the 32-bit browser cannot. In this situation the 64-bit build suffers much less system slowdown (for a while), but eventually both will begin to bog down the system when a few dozen tabs are open and the browser is left running for a month or more. Occasional app restarts are still necessary due to severe memory leaks (regardless of build).
    I notice absolutely no problems with any of my addons or plugins, which probably represents everything the average user could ever want.
    There's little or no excuse to spread fear, uncertainty or doubt about using 64-bit Firefox, whether compatibility- or performance-wise. If your plugins or addons are not listed, try them and see. They may work, have bugs, or not work; there's only one way to find out if it's right for you.
    If you want to see performance differences objectively, without any political bias, do your own tests, with a stop watch, or with software profilers that can give microsecond or nanosecond accuracy of the time it takes to perform certain functions like page renders. I would assume differences of about 5% or less would be fairly negligible.
    Plugins:
    Adobe Flash (64-bit),
    Adobe Shockwave Flash (64-bit),
    Java JDK and JRE (64-bit),
    Microsoft Silverlight,
    VLC Web plugin
    Addons:
    AdBlock Plus,
    Add-on Compatibility Reporter,
    CookieEx Filter,
    Download Helper,
    eQuake Alert (menu bug, always transparent, happens in 32-bit Firefox also),
    FlashGot,
    Forecastfox,
    Form History Control,
    Jökulsárlón Download Manager,
    Live HTTP Headers,
    NoScript,
    Preserve Download Modification Timestamp,
    Session Manager,
    Tab Mix Plus,
    UPromise Turbo Saver,
    User Agent Switcher
    There is a problem with "HP Smart Web Printing" addon, where by default it appears to be disabled with no option to enable it. It's a somewhat useless addon that is installed with the print driver without a choice. I can print just fine without the addon, even without the HP driver, as Windows 7 has a driver for my printer. But the HP driver does give a few extra controls at the OS level.
    Edit 1: Lists of items, one per line, were collapsed to a paragraph, so I added commas.

  • Adobe reader x: signature digest algorithm

    Hi,
    I have created a PDF with Adobe Acrobat, with the signing attribute on, and it has an empty signature field.
    If I sign this document using a brand new self-signed Windows digital ID, Adobe Reader uses SHA256 as the signing algorithm; if I use (via PKCS#11) a certificate contained in a smartcard, the hash algorithm used is SHA1. For the Italian signature law, the signing hash algorithm must be SHA256.
    My settings in Edit - Preferences - Security - Advanced Preferences - Creation is:
         Default signature Signing Format: CAdES Equivalent
    (but it acts the same with PKCS#7 Detached)
    Is there a way to always use SHA256 in signing?

    Could you please try turning off Reader X protected mode and check if you continue to see the problem? You can turn off protected mode using the following steps:
    1. Launch Reader X and select Edit->Preferences...
    2. Select General from left navigation
    3. Under application startup uncheck "Enable Protected Mode at Startup"
    Note: You should switch "ON" protected mode after verifying the problem. Purpose of Reader X Protected Mode is to keep your system safe from PDF based exploits and keeping protected mode OFF will leave your system vulnerable.

  • My Firefox changed to Babylon Firefox since I downloaded IMG Burner. What is Babylon Firefox and is it safe?

    My Firefox changed to Babylon Firefox since I downloaded IMG Burner. What is Babylon Firefox and is it safe?

    See:
    * https://developer.mozilla.org/en/The_Places_frecency_algorithm
    That is done via a frecency algorithm that gives bonus points to sites that you have visited.
    You can remove visited sites from the history to reset their bonus count.
    * http://kb.mozillazine.org/browser.urlbar.default.behavior
    * http://kb.mozillazine.org/Location_Bar_search
    * https://support.mozilla.com/kb/Location+bar+search

  • FIFO Thread Safe Variables

    Hi,
    I am currently trying to develop a Datalogger Application for a Fibre Optic link running at 250Mb/s. I have the code sorted that logs the data, but now have to try and store the data to disk.
    I need 3 Rx threads to keep up with the incoming data to ensure I don't get datalink errors on the card buffer, and I store the data into a linked list for processing by a write thread. To do this, I am trying to create a thread-safe linked list variable as below:
    typedef struct list List;
    struct list {
        int   length;
        Node  *head;
        Node  *current;
    };
    DefineThreadSafeVar(List, SafeList);
    List *SafeListPtr;
    Then in the main function:
    InitializeSafeList();
    I then have an initialise Linked List function as follows:
    ViUInt32 LL_Initialise(void)
    {
        SafeListPtr = GetPointerToSafeList();
        SafeListPtr = (List *)malloc(sizeof(List));
        SafeListPtr->length = 0;
        SafeListPtr->head = SafeListPtr->current = NULL;
        ReleasePointerToSafeList();
        return 0;
    }
    This appears to initialise the linked list correctly, setting the head and current pointers to NULL.
    But when I call the following function:
    ViUInt32 LL_GetLength(void)
    {
        ViUInt32 length;
        SafeListPtr = GetPointerToSafeList();     /* <- ****************** */
        length = SafeListPtr->length;
        ReleasePointerToSafeList();
        return (length);
    }
    The line I have marked causes the head and current pointers to become uninitialised again and then causes a run-time error when I try to use them.
    If you have followed that... any ideas?
    Thanks
    Stuart

    Well, I suppose you can create an intermediate buffer in the receiving function:
    create an array of your List structure
    read a record from the queue: if in the correct order, stream it to disk
    if out of order, fill a record of the array
    when you receive the next record in the correct order:
    put it to disk
    sort array content (e.g. with QuickSort passing an appropriate comparison function) and see if and how many records in the buffer can be put to disk
    You may want to design a different algorithm to decide when to sort, so that you don't keep too much data in the intermediate buffer: sorting on every record may cause too much overhead in your application and leave a growing buffer of records still to be written to disk.
    Please keep posting: this multi-producer-one-consumer framework is an interesting schema to study! ;-)
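
    For comparison, here is a sketch in Java of the same multi-producer, one-consumer shape using a built-in thread-safe queue (the Record class and the loop bounds are made up; the poster's code is LabWindows/CVI C):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MultiProducerDemo {

        static class Record {
            final int sequence;
            Record(int sequence) { this.sequence = sequence; }
        }

        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<Record> queue = new LinkedBlockingQueue<Record>();

            // Three producer threads, as in the original Rx design.
            for (int p = 0; p < 3; p++) {
                final int producerId = p;
                new Thread(new Runnable() {
                    public void run() {
                        for (int i = 0; i < 5; i++) {
                            try {
                                // put() is thread safe; no manual locking needed.
                                queue.put(new Record(producerId * 100 + i));
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                                return;
                            }
                        }
                    }
                }).start();
            }

            // The single consumer is the only code that would touch the file.
            for (int consumed = 0; consumed < 15; consumed++) {
                Record r = queue.take();
                System.out.println("writing record " + r.sequence);
            }
        }
    }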
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not give me kudos?

  • A Ring Partition Algorithm

    I'm looking for something about RPA and an implementation, e.g. in C++.

    I'm looking for something about RPA and an implementation, e.g. in C++
    http://www.google.com/search?hl=en&lr=&safe=off&q=Ring+Partition+Algorithm+c%2B%2B&btnG=Search

  • Would this sorting algorithm work?

    I'm writing a sorting algorithm for a class that implements the Comparable interface. The class (called Memo) has a field, 'entryDate' of type 'Date', which itself has int fields for day, month, and year. I want to have the class sorted by date, earliest to latest, and then (for entries on the same day) by 'priority' (a field of type int; can be 1, 2 or 3).
    The one I've written seems to work based on a few tests, but I just want to make sure it will work all the time:
    (BTW, I have to use the getEntryDate() method because entryDate is a private field in the Memo class's superclass.)
    public int compareTo(Memo memo) {
        int comparison = getEntryDate().getYear() - memo.getEntryDate().getYear();
        if (comparison != 0) {
            return comparison;
        }
        comparison = getEntryDate().getMonth() - memo.getEntryDate().getMonth();
        if (comparison != 0) {
            return comparison;
        }
        comparison = getEntryDate().getDay() - memo.getEntryDate().getDay();
        if (comparison != 0) {
            return comparison;
        }
        return priority - memo.priority;
    }

    Generally when you simply subtract one int value from another in the compareTo() method, you'll have to take care that you don't run into an overflow situation (because comparing Integer.MIN_VALUE and (Integer.MIN_VALUE-1) would result in unpleasant surprises otherwise). But in your case you can probably safely assume that all your values are well within the safe range for int.
    Also: if you have the chance to modify the Date class, I'd simply make Date implement Comparable<Date> and delegate that comparison to the Date class.
    Also: are two Memos at the same date with the same priority considered equal? Don't they have any other fields? It's important that objects that are not equals (i.e. a.equals(b) returns false) don't compare to be the same (i.e. a.compareTo(b) should only return 0 iff a.equals(b) returns true). Otherwise you might run into situations where objects are silently dropped because some Collection classes consider them to be equal (SortedSet implementations are tricky here).
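
    A sketch of the delegation suggested above (this Date is the poster's own class, not java.util.Date; Integer.compare() also sidesteps the subtraction-overflow caveat):

    class Date implements Comparable<Date> {
        final int year, month, day;

        Date(int year, int month, int day) {
            this.year = year; this.month = month; this.day = day;
        }

        public int compareTo(Date other) {
            if (year != other.year)   return Integer.compare(year, other.year);
            if (month != other.month) return Integer.compare(month, other.month);
            return Integer.compare(day, other.day);
        }
    }

    class Memo implements Comparable<Memo> {
        private final Date entryDate;
        private final int priority; // 1, 2 or 3

        Memo(Date entryDate, int priority) {
            this.entryDate = entryDate; this.priority = priority;
        }

        Date getEntryDate() { return entryDate; }

        public int compareTo(Memo memo) {
            // Delegate the date comparison, then fall back to priority.
            int comparison = getEntryDate().compareTo(memo.getEntryDate());
            return comparison != 0 ? comparison : Integer.compare(priority, memo.priority);
        }
    }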

  • My RSA Algorithm

    Hi there, hope someone can help me out a little here.
    I'm pretty new to encryption, but I'm trying to code my own RSA algorithm in Java, and I have a few questions about it.
    * I want the encryption level to be 1024-bit, so I should aim to get 2 primes at 512 bits?
    * What bit-level of 'e' (relatively prime to phi_n) should I aim for? If I aim for a 1024-bit 'e', will this increase security?
    * I have got a Java version of the MersenneTwister pseudorandom number generator, as I thought it would be better than the standard Java version (new Random()). Is it 'safe' to even use this to calculate the 2 random primes and 1 random 'e' using the BigInteger constructor, as so:
    new BigInteger(bit-size, primecertainty, new MersenneTwister())
    or do I need to come up with another way of randomly generating these 3 values? I.e., are some random number generators not suitable for secure encryption?
    I was thinking about having a JFrame and telling the user to move the mouse around while grabbing the mouse coordinates at certain intervals, and using these values in some way to generate the values.
    Thanks for any help you can give me

    I'm a bit of a noob when it comes to this so please correct anything I'm saying wrong.
    To my limited understanding, RSA is a form of cryptography using the public/private (asymmetric) key system. I'm just trying to implement it myself. The reason I'm doing this is two-fold.
    Firstly, to my understanding Java implementations of encryption are limited to 256-bit (is this correct?), and I don't want to be restricted to this limit.
    Secondly, this is mainly for self-education purposes, as I did a short course on cryptography in general: they teach you the basics of loads of things, and so you gain a decent understanding of none.
    Is there an algorithm written in Java already out there that is proven to be "air-tight"? As you say, designing my own would probably have weaknesses, and as I'm not a mathematician, checking it for vulnerabilities wouldn't be ideal for me.
    I basically am looking for a good way to generate 2 large primes, and then randomly generate 'e'.
    At first, my 'e' was the first value it found, incrementing up from 2 until it found one. I'm currently trying to get a reliable method of a randomly generated 'e'.
    Any help in these areas would be appreciated.
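
    For the prime and 'e' generation specifically, here is a sketch using the JDK's SecureRandom (a cryptographically strong RNG, which a Mersenne Twister is not) and BigInteger.probablePrime(); for anything beyond self-education, java.security.KeyPairGenerator is the standard route:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class RsaParams {
        public static void main(String[] args) {
            SecureRandom rnd = new SecureRandom();

            // Two 512-bit probable primes give a roughly 1024-bit modulus.
            BigInteger p = BigInteger.probablePrime(512, rnd);
            BigInteger q = BigInteger.probablePrime(512, rnd);
            BigInteger n = p.multiply(q);
            BigInteger phi = p.subtract(BigInteger.ONE)
                              .multiply(q.subtract(BigInteger.ONE));

            // 'e' does not need to be large or random, only coprime to phi(n);
            // the common fixed choice is 65537.
            BigInteger e = BigInteger.valueOf(65537);
            if (!phi.gcd(e).equals(BigInteger.ONE)) {
                throw new IllegalStateException("regenerate p and q"); // extremely rare
            }
            BigInteger d = e.modInverse(phi);
            System.out.println("modulus bits: " + n.bitLength());
        }
    }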

  • MSI 280x Gaming 3G Safe Sustained Temperature

    Hi,
    I'm new to overclocking and underclocking, but since I have 2 cards that are close together and am running them at 100% usage 24/7, I've been unable to use normal clock speeds and have the cards heavily underclocked (e.g., 800/740). My question is: how hot can I run these on a 24/7 basis without damaging the cards? I realize this may be a relative question, but these are new cards and I expect a 5-year lifespan. Based on other forums I've been optimizing with a maximum sustained temperature of 89C, but I am trying to attain maximum performance. To do that I'll need to know how hot I can let them run, non-stop, for 5 years.
    The crossfire bridge is not installed, and I'm using the Z87-GD65 board with an intel i5-4670 (1100 Watt power supply).
    Cards: R9 280X Gaming 3G BF4 s/n 602-V277-38SB1401027047
     

    Quote from: rritoch on 27-February-14, 16:18:40
    Thank you, that is a good answer to my original question. These 280x's, though, would overheat within minutes; to be exact, when I returned them the computer shop tested one and it took almost exactly 2 minutes to reach 90C at manufacturer settings. The 280x is rated at 1000MHz, and being able to run them non-stop at 800MHz would be reasonable, but I couldn't even run them non-stop at 600MHz without them overheating. As for their intended purpose of gaming: video and audio feeds with some voice recognition, object tracking, and object identification would probably be more than enough to create the same load that mining causes. I also intend to test neural networks and genetic algorithms on them, all things that could be included in a gaming engine for the purpose of adjusting difficulty to player capability. If the cards can't handle a full load, then it's false advertising, and the fully-loaded specs should be available both to programmers like myself and to consumers who are being misled.
    These cards are now in the hands of MSI, so I'll know more when I have normal 280x cards. With this information, knowing the safe temperatures and expected operating ranges, I should be able to utilize these cards, but they should stop lying to consumers. The specs on the box are clearly peak performance, not the performance the cards can actually sustain.
    Operating temp: 75C-79C
    Core Speed: 800mhz
    The one spec you left out is the memory speed. Can I expect this also needs to be toned down 20%?
    Memory Speed: 1200mhz ???
    The memory speed is fine at 1200MHz (4.8GHz effective, as it's 1200MHz x4), which is what the chips are rated for; running constantly, they should function 8-12 years without issue unless they're defective. Mining also will not push the memory chips very hard, only the GPU, as it's mainly plain data being crunched (I would expect no more than a 512MB load out of 2GB on each card, or below), so they are unstressed anyway. VRAM is only going to be stressed by games or benchmarks that have huge visual data sets of textures designed to create a heavy visual load. The VRAM frequency is nothing to worry about, as the chips are usually rated for that frequency 24/7 for 10+ years at a typical clock speed. But if you're still concerned, drop it to 1000MHz (4GHz effective); it will make no real difference overall.
    Quote from: rritoch on 27-February-14, 16:18:40
    I'm also curious whether these actual "safe" operational conditions are available via OpenCL, or if it is going to report 1000MHz. If so, then any application written to utilize the GPU (such as Matlab) is likely going to burn up the card if the user doesn't know that it needs to be underclocked.
    It will clock the GPU under load to its set high frequency (dependent on what you set the speed at). It's not the application doing so; it's the card's VBIOS detecting a load and ramping up the clocks to make the load be calculated faster, as that is what people expect to happen in a game. Like I said before, 24/7 operation over a long time frame is the realm of professional cards that have been validated for a heavy workload, not these, which are consumer-grade items that are many times cheaper and are generally disposable commodities, as gamers usually upgrade them quite regularly. So underclocking a consumer-grade item like your R9-280x is a good idea, as you're using it outside of its design standard, which assumes a much lower load than you're throwing at the GPU. Applications, unless they are designed to alter the VBIOS frequency limit, will just use the default setting held in the driver table. If you're doing 24/7 calculation loading on a card of this level and you do not know to underclock and undervolt it, then you probably aren't meant to be doing so, as you have too low an understanding of what it's doing; people who do that and kill a card are on their own, as they're doing something they have so little knowledge about that they're asking for trouble. (You yourself seem to be fairly knowledgeable about this, which is why I'm not being very harsh with you: you do understand that if it's overheating and getting near the limits, you should back it off to try to stop it breaking!)
    TIP: when you get your new cards back from RMA, let them sit under a gaming load for a while (30-100% for 3-4 hours, with 5 minutes between each 30-minute maximum load, maybe using a benchmark) to cure the TIM on the card and help its long-term performance. You may have baked the TIM too fast last time (dried it out too much), leading to the high temps, by throwing it into ultra-high-load mining constantly before it had had a chance to bed in and reach its correct thermal conduction rate, ruining it!

  • Is the finalizer guardian idiom safe??

    Is the finalizer guardian idiom safe? It is the one described in 'Effective Java'.
    In the VM specification, there is no specific guide to how GC works or what kind of algorithm it uses.
    So it is uncertain which finalize() method is invoked first.
    For the finalizer guardian to work correctly, the subclass's finalize() method must be invoked first, and then the finalizer guardian's finalize() method must be invoked.
    But as I said, the guardian's finalize() method could be invoked first.
    Effective Java is a best seller, but I have not heard that the finalizer guardian is dangerous. Is what I think wrong? If it is, please tell me why the idiom is correct. If you think I am right, please tell me that too.

    This thread has a good discussion. Finalize does have some problems associated with its use.
    http://forum.java.sun.com/thread.jsp?forum=31&thread=273275
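
    For reference, the idiom under discussion looks roughly like this (a minimal sketch; Foo and its cleanup are hypothetical):

    public class Foo {
        // The guardian becomes unreachable exactly when the enclosing Foo
        // does, so its finalizer runs even if a subclass overrides finalize()
        // and forgets to call super.finalize().
        private final Object finalizerGuardian = new Object() {
            protected void finalize() throws Throwable {
                // clean up the outer Foo's resources here
            }
        };
    }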

  • I tried to install the latest OS update and received an error installing. IT tried to load the upgrade in Safe Mode. Now the system will not boot up. IT says I have to do a clean install, which loses all data.

    After receiving an error when I tried to install the latest update of OS X, I took my laptop to our school's IT dept. They tried to run the update in safe mode, and then they got an error saying the system could not find the hard drive. Now they want to run a clean install, which loses thousands of docs and pictures. I have Block I exams in med school next Monday, and all of my notes and slides are on that hard drive.
    The IT dept took the hard drive out and tried to back it up, and they said the other computer could not connect to or find my hard drive.
    Major panic....
    I have Norton antivirus and have never found any viruses on the system. I also have a program called ExamSoft, and some other students here have noted they had similar crashes. Not sure if that is the issue...

    Several take home messages here:
    always back up before you update.
    always back up anything you don't want to lose. Kind of like flossing. Only floss the teeth you don't want to lose.
    Norton is very, very bad software, and you should get rid of it.
    You don't need antivirus software on your mac. Antivirus software on macs causes more trouble than it is worth.
    Your hard drive may be failing and thus may not be salvageable.
    You could try booting into the recovery partition and seeing if you can repair your hard drive.
    If that doesn't work, then there is a small chance that a third party utility (like Disk Warrior or TechTool) might be able to fix your drive (if it is not failing).
    If that doesn't work, you may be able to send your drive and a lot of cash to a drive recovery service and see if they can get your files off it.
    In the meantime, get notes and slides from a (smart) friend.
