db_upgrade - Berkeley DB

Just ran a pacman -Syu when I meant to run pacman -SyU, and received this message:
upgrading db... done.
ATTENTION DB PACKAGE:
Please consider to run db_upgrade on Berkeley DB databases with a major db version number update.
Looks like I upgraded from 4.3.29-2 to 4.4.20-3.  Does anybody know if I should actually run db_upgrade?  And if so, is there a certain way to do it?  The sleepycat page on db_upgrade is a little confusing, and the part about the upgrade being 'potentially destructive' is a little scary.
http://www.sleepycat.com/docs/utility/db_upgrade.html
FWIW, I had just upgraded another server using pacman -SyU and I did not get this message.  Both servers were last upgraded around April 21, so I guess they were halfway between Noodle and Gimmick, both running kernel 2.6, and both are using bdb for openldap.
Thanks for any help

silvik@morgana:/etc$ file vsftpd_login.db
vsftpd_login.db: Berkeley DB (Hash, version 8, native byte-order)
"file tests each argument in an attempt to classify it.  There are three sets of tests, performed in this order: filesystem tests, magic number tests, and language tests.  The first test that succeeds causes the file type to be printed."
this is what you want:
find / -name '*.db' > temp
file -f temp | grep Berkeley
rm -f temp
umm... interesting... 28 files to upgrade

Similar Messages

  • I want to use Berkeley DB for my dissertation

    Hello everyone,
    I have some doubts regarding Berkeley DB, which I am using for my project. The scenario of the project is this: I have developed some business rules based on the concept of Ripple Down Rules and represented the rules using conceptual graphs. Now I want to store these business rules in Berkeley DB, so that my application can use them from the database.
    That is the context of my project, but I am very new to Berkeley DB. I have downloaded it from the Oracle website and installed it; the OS I am using is Windows XP SP2. Can anyone explain to me how to store conceptual graphs in a Berkeley DB database?
    I am really a beginner with Berkeley DB. I don't even know what to do after installing it. Please advise me about this.
    I would be very thankful to you.
    Cheers,
    Phani.

    Hi Phani,
    The simple answer is however you want. Berkeley DB doesn't put any constraints on how you store data in it. Its main purpose is to provide an efficient and scalable key/value storage and retrieval system, upon which you can build your own data storage system.
    More specifically, if you're representing graph data, the simplest way to do so is with the "edge list" paradigm. You could, for example, set up a pair of databases, one to store all your graph nodes (with associated metadata) and one for directed edges. I'm not sure what particular C structures you'll want to use for these, but I'd suggest using record numbers on both the nodes and the edges DBs, unless all your nodes have unique and meaningful names. I'd also allow duplicate records in your edges DB, and give it a very simple record structure that simply maps from a source node id to a destination node id (the duplicates allow multiple edges from a given node).
    If you have more specific constraints on your graph and algorithms you'd like to run on it, there are potentially more schemes you could use, such as the nested set representation for trees, and so on.
    Hope this helps,
    Daniel
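Daniel's two-database edge-list layout can be sketched in plain Java. The TreeMaps below merely stand in for the two Berkeley DB databases (the actual DB open/put calls are omitted, so nothing here is the real DB API), but the record-number keys and the duplicate edge records work the same way:

```java
import java.util.*;

// A plain-Java sketch of the edge-list layout: one "database" keyed by
// record number for nodes, one for directed edges with duplicates allowed.
public class EdgeListSketch {
    final Map<Integer, String> nodes = new TreeMap<>();        // recno -> node metadata
    final Map<Integer, List<Integer>> edges = new TreeMap<>(); // source recno -> destination recnos

    int addNode(String metadata) {
        int recno = nodes.size() + 1;  // record numbers start at 1, as in recno databases
        nodes.put(recno, metadata);
        return recno;
    }

    // Duplicates are allowed: several edges may leave the same source node.
    void addEdge(int src, int dst) {
        edges.computeIfAbsent(src, k -> new ArrayList<>()).add(dst);
    }

    List<Integer> successors(int src) {
        return edges.getOrDefault(src, List.of());
    }

    public static void main(String[] args) {
        EdgeListSketch g = new EdgeListSketch();
        int a = g.addNode("A"), b = g.addNode("B"), c = g.addNode("C");
        g.addEdge(a, b);
        g.addEdge(a, c);  // second edge from the same source node
        System.out.println(g.successors(a));  // [2, 3]
    }
}
```

In the real thing, each map entry would become one key/value record in its database, with duplicate support enabled on the edges database.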

  • How to have JMF talk to a C++ (Berkeley) socket? What is the parameter?

    I am trying to send audio/video from Java to C++ using sockets. I am using Berkeley sockets, but I am not sure what receive parameter I should use in the socket's recv() function when the Java program is sending a DataSource. Does anyone know? Thanks!

    Dear andreyvk ,
    I've read your post about how to use a single RTP session for both media reception and transmission (I'm referring to your modified RTPSocketAdapter version), but at the moment I receive a BIND error.
    I think that your post is an EXCELLENT solution. I modified AVReceive3 and AVTransmit3 to accept all parameters (local IP & port, remote IP & port).
    Can you please give me a simple scenario so I can understand my mistake?
    I use AVTransmit3 and AVReceive3 from different prompts. If I run these 2 classes at the same time on 2 different PCs (172.17.32.27 and 172.17.32.30) I can transmit the media (vfw://0, for example) using AVTransmit3, but I can't receive anything if I also run AVReceive3 on the same PC.
    What's the problem? Furthermore, if I first run AVReceive3 from an MS-DOS prompt and subsequently run AVTransmit3 from another prompt, I see a BIND error (port already in use).
    How can I use your modified RTPSocketAdapter to send and receive a single media stream from the same port (e.g. 7500)?
    I've used this scenario: PC1 with IP 172.17.32.30 and local port 5000, and PC2 with IP 172.17.32.27 and local port 10000.
    So in the PC1 I run:
    AVTransmit3 vfw://0 <Local IP 172.17.32.30> <5000> <Remote IP 172.17.32.27> <10000>
    AVReceive3 <Local IP 172.17.32.30/5000> <Remote IP 172.17.32.27/10000>
    and in PC2:
    AVTransmit3 vfw://0 <Local IP 172.17.32.27> <10000> <Remote IP 172.17.32.30> <5000>
    AVReceive3 <Local IP 172.17.32.27/10000> <Remote IP 172.17.32.30/5000>
    I'd like to use the same port 5000 (on PC1) and 10000 (on PC2) to both transmit and receive RTP packets. How can I do that without getting a bind error? How can I receive packets (and play the media, if audio and/or video) from the same port used to send the stream over the network?
    How can I obtain a symmetric RTP transmission/reception solution?
    Please give me a hint. If you can't post here, this is my email: [email protected]

  • Berkeley DB 4.6.19

    For migration purposes I need the old Berkeley DB 4.6.19.
    It is quite sad that a customer is forced to submit a lot of personal and business details to Oracle first, just to be led to believe he can then download the desired product, which does not work at all.
    This is just an inconvenient exercise in data gathering.
    I cannot download any Berkeley DB release even though Oracle forced me to set up a new account and I have logged in.
    Thanks for any assistance.
    DE

    OK.
    It seems that the host
    download.oracle.com
    has no DNS record in Oracle's DNS zone.

  • db_verify: Suspiciously high nelem error from Berkeley DB

    I have an application which uses Berkeley DB (version 3.2.9). My application runs on Solaris. Sometimes I get the following error message
    thrown by the application:
    "db_verify: Suspiciously high nelem of 4294967287 on page 0
    DB_VERIFY_BAD: Database verification failed."
    There is no problem with my database and all records are intact. I came to know that this is a known problem in Berkeley DB and there is a patch available for it.
    Can anyone please let me know what patch is available for this problem and where I can get the details of this patch ?
    Regards
    Lalit.

    Hi Bogdan,
    Thanks for your reply.
    I came to know about this problem from a Linux discussion forum. Here is the link, which talks about a similar problem in Berkeley DB:
    https://www.redhat.com/archives/rpm-list/2002-June/msg00118.html
    It talks about a Berkeley DB patch #4491, but I am unable to find any information about this patch.
    As I mentioned, my DB is not corrupt. If I ignore this error, the application works fine and all the records in the DB are intact.
    Regards
    Lalit.
    Hi Lalit,
    "I came to know that this is a known problem in Berkeley DB and there is some patch available for this."
    Where did you come to know that from?
    I think that this corruption can happen if you don't close the library properly.
    What you can do is:
    1. Upgrade;
    2. Salvage the database and re-load it when corruption occurs, using the db_dump utility and the -r or -R options;
    3. Transactionally protect your application and run recovery in the case of application or system failure.
    Regards,
    Bogdan Coman

  • Upgrade Berkeley DB Java Edition 3.x to 4.x

    I am using Berkeley DB Java Edition version 3.2.23. I am looking for the steps to upgrade it to the latest version, 4.x. Are the APIs compatible? I.e., is changing my build to point to the latest jar sufficient, or do I have to make more changes? I would also like to know whether I have to rebuild my database if I upgrade to the latest version.

    The upgrade procedure (if any) is described at the top of the change log for each release.
    --mark

  • Newbie to Berkeley DB

    Hi All,
    I am extremely new to Berkeley DB. I am going through its documentation.
    I was trying out the Berkeley DB examples in Eclipse.
    I am getting this error:
    ExampleDatabasePut: com.sleepycat.je.log.LogException: (JE 3.2.23) Environment home \tmp\JEDB doesn't exist
    com.sleepycat.je.log.LogException: (JE 3.2.23) Environment home \tmp\JEDB doesn't exist
         at com.sleepycat.je.log.FileManager.<init>(FileManager.java:182)
         at com.sleepycat.je.dbi.EnvironmentImpl.<init>(EnvironmentImpl.java:294)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:102)
         at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:54)
         at com.sleepycat.je.Environment.<init>(Environment.java:100)
         at je.gettingStarted.MyDbEnv.setup(MyDbEnv.java:64)
         at je.gettingStarted.ExampleDatabasePut.run(ExampleDatabasePut.java:65)
         at je.gettingStarted.ExampleDatabasePut.main(ExampleDatabasePut.java:45)
    I am not sure where I have to create the tmp folder, or am I missing something else?
    Please help.

    Hi,
    Glad to hear that. This forum is for Berkeley DB Core (DB).
    This was a Berkeley DB Java Edition (JE) issue, so for the future, the JE forum is here:
    Berkeley DB Java Edition
    Regards,
    Andrei
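For reference, the LogException above simply means the environment home directory (\tmp\JEDB, resolved relative to the current drive on Windows) does not exist: JE will not create it for you. A minimal sketch of the usual fix, with the JE open call left as a comment since it needs je.jar on the classpath:

```java
import java.io.File;

// Ensure the JE environment home directory exists before opening the
// Environment; JE fails with "Environment home ... doesn't exist" otherwise.
public class EnvHomeFix {
    static File ensureEnvHome(String path) {
        File home = new File(path);
        home.mkdirs();  // creates the directory tree; no-op if it already exists
        return home;
    }

    public static void main(String[] args) {
        File home = ensureEnvHome(
                System.getProperty("java.io.tmpdir") + File.separator + "JEDB");
        System.out.println(home.isDirectory());  // true
        // With je.jar on the classpath (not shown here), you would then open:
        // EnvironmentConfig cfg = new EnvironmentConfig();
        // cfg.setAllowCreate(true);
        // Environment env = new Environment(home, cfg);
    }
}
```

Using a platform-neutral path like java.io.tmpdir avoids the Windows-vs-Unix \tmp ambiguity in the original example.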

  • Error while opening a Berkeley DB database in Java 4.7.25 for a DB created in C

    Hi,
    I have created a DB in C. This is the only database in the environment and it does not have any secondary DBs.
    The flags used are as follows:
    #define DB_FLAGS DB_CREATE
    #define ENV_FLAGS DB_CREATE|DB_INIT_MPOOL|DB_DIRECT_DB
    #define DB_DEF_INDEX DB_BTREE
    cachesize 512M
    pagesize 16K
    minKey in Btree 64
    When I try to open the DB from my Java application, I get the following exception:
    /home/htsftp/REGNMS/DATA/20081110/MASTER/MARKETDATA_MASTER.db: multiple databases specified but not supported by file
    java.lang.IllegalArgumentException: Invalid argument: /home/xyz/MASTER/MARKETDATA_MASTER.db: multiple databases specified but not supported by file
    at com.sleepycat.db.internal.db_javaJNI.Db_open(Native Method)
    at com.sleepycat.db.internal.Db.open(Db.java:404)
    at com.sleepycat.db.DatabaseConfig.openDatabase(DatabaseConfig.java:1990)
    at com.sleepycat.db.Environment.openDatabase(Environment.java:314)
    The following is the configuration used when opening the environment, DB and cursor in Java:
    <BeanAttributes config-class='com.sleepycat.db.EnvironmentConfig'
    allowCreate='false'
    lockTimeout='1000'
    transactional='false'
    initializeCache='true'
    cacheSize='536870912'
    maxOpenFiles='1'
    threaded='false'
    />
    </EnvironmentConfig>
    <DatabaseConfig name='masterDb' >
    <BeanAttributes config-class='com.sleepycat.db.DatabaseConfig'
    allowCreate='false'
    readOnly='true'
    pageSize='16384'
    btreeMinKey='64'
    />
    </DatabaseConfig>
    <CursorConfig name='masterDb' >
    <BeanAttributes config-class='com.sleepycat.db.CursorConfig'
    readUncommitted='false'
    readCommitted='false'
    />
    </CursorConfig>
    <DbAttributes name='masterDb'>
    <BeanAttributes config-class='com.berkeley.jni.dbaccess.DbAttributes'
    dbFilePath='/home/xyz/MASTER'
    dbFileName='MARKETDATA_MASTER.db'
    envFileName='MASTER'
    envFilePath='/home/xyz/MASTER'
    />
    </DbAttributes>
    Can you please let me know what I am missing that causes this error?
    Thanks,
    BS

    Hello,
    Is it possible that the database is opened with different types from each application, like BTREE in C and HASH in Java? If not, can you post the full code snippet for opening the database in each application?
    Thanks,
    Sandra

  • Usage of Berkeley DB

    Hi,
    I am working on the architecture phase of a project for one of our clients, where the solution should work in both online and offline mode. At a high level, the technology solution is the following:
    Offline: Java Swing libraries or ADF Java Swing
    Manager classes (Java classes), Java Beans (worker classes)
    Protocols: JDBC-ODBC
    I went through the Oracle documentation and am trying to understand the fit of Berkeley DB JE for the offline database. Can you please confirm the following:
    1. Can I use Berkeley DB as the offline database?
    2. I understand only a JVM is required to run the database; no additional software components (like an RDBMS, app server, etc.) are required?
    3. What is the basic system configuration required to deploy the DB?
    4. How do I achieve synchronization of the database with the online side (to upload any new data from online to offline)?
    Your replies will help me decide on the usage of Berkeley DB for the solution.

    JE is a general-purpose, transaction-protected, embedded standalone database written in 100% Java (You use JE through a series of Java APIs just like you use java.util.*. JDBC/ODBC don't relate to JE.). It sits in the same process (JVM) as the application program. Any 1.5 or later JVM is fine.
    There is no online or offline since it's an embedded database. Any synchronization between it and another database is the application's business.
    Please refer to JE Installation Notes (http://www.oracle.com/technology/documentation/berkeley-db/je/installation.html) for more details on how to install and use JE as well as examples.
    Best regards,
    Chao Huang
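To illustrate Chao's point that an embedded database is just a library inside your JVM persisting to local files, here is a deliberately tiny stand-in built on java.util.Properties. This is NOT the JE API (JE is used via the com.sleepycat.je classes); it only demonstrates the in-process, no-server model:

```java
import java.io.*;
import java.util.Properties;

// A toy "embedded" store: the database is a library in your process and the
// data lives in a local file -- there is no server to install or connect to.
public class EmbeddedToyStore {
    private final Properties data = new Properties();
    private final File file;

    EmbeddedToyStore(File file) {
        this.file = file;
        if (file.exists()) {
            try (InputStream in = new FileInputStream(file)) {
                data.load(in);  // re-read whatever a previous instance persisted
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    void put(String key, String value) {
        data.setProperty(key, value);
        try (OutputStream out = new FileOutputStream(file)) {
            data.store(out, null);  // persist to the local file on every put
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    String get(String key) { return data.getProperty(key); }

    public static void main(String[] args) {
        File f = new File(System.getProperty("java.io.tmpdir"), "toystore-demo.properties");
        EmbeddedToyStore store = new EmbeddedToyStore(f);
        store.put("greeting", "hello");
        // A second instance re-reads the same file: the data is on disk,
        // not held by a server process.
        System.out.println(new EmbeddedToyStore(f).get("greeting"));  // hello
    }
}
```

Any synchronization with an online database would sit on top of an API like this, in application code, exactly as the reply says.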

  • ECCN number of the Berkeley DB package

    Does anybody know the ECCN number for the BDB package? This is required for export control of the BDB software in our product.

    Dear Sir,
    Thank you very much for your reply.
    Here is the information you asked for about the DB we are using in our product:
    We are using the BDB "db-4.5.20" package with AES encryption.
    We are not using a commercial license.
    Please let us know the ECCN number for this package.
    Thanking you,
    Regards,
    Raghu Shankar A.C
    TOSHIBA EMBEDDED SOFTWARE LTD
    Edited by: user9087106 on Feb 3, 2010 8:55 PM

  • Read-through scheme in a replicated cache with Berkeley DB

    Hi, I have 20 GB of data, and while restarting the server I need to populate all of it into the Coherence cache. If I create a pre-load Java class, it will take 30 minutes to an hour to load the data into the cache; while it is loading, how can I serve any requests that come in? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to implement read-through + replicated cache + Berkeley DB? If yes, please post sample code with full references. Thanks in advance.
    Edited by: 875786 on Dec 5, 2011 8:10 PM

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
    "To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching."
    So it would appear that you cannot do read-through with a replicated cache, which makes sense really when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data, or do you - in which case you must have tuned GC very well. You say you are bothered about NFRs and network latency yet you are building a system that will require either very big heaps, and hence at some point there will be long GCs or you cannot hold all the data in memory so you have to configure expiry and then you have the latency of reading the data from the DB.
    If you are using read-through then presumably your use-case does not require all the data to be in the cache - i.e. all you data access is using a get by key and you do not do any filter queries. If this is the case then use a distributed cache where you can store all the data or use read-through. If all you access is using key based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs to hold the data and configure near-caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
    JK

  • Berkeley DB XML compiling error in Turbo C - need help

    Hi, I am compiling the program below in Turbo C and getting 26 errors like unable to open file "XMLPORTABILITY.HPP", "XMLCONTAINER.HPP" etc., which are referenced from "DbXml.hpp". I have placed all the files mentioned above in the TC directory and saved the path as well, but it is still not able to open those files while compiling the code below, giving 26 errors. Kindly suggest what I should do.
    #include "DbXml.hpp"
    using namespace DbXml;
    // exception handling omitted for clarity
    int main(void)
    {
        // Open an XmlManager.
        XmlManager myManager;
        return 0;
    }
    ----------------------------------------------------------------------

    Hi,
    I'm surprised that you are using DBXML with Turbo C. :)
    Because the Xml*.hpp headers are in the same directory as DbXml.hpp, I guess the library wasn't built/installed correctly, since Turbo C can only find DbXml.hpp.
    So how did you build and install the library? Did you build it with Visual Studio, or just install the libs with the DBXML Windows MSI installer?
    Best regards,
    Rucong Zhao
    Oracle Berkeley DB XML

  • Exception type when Berkeley DB is corrupt

    I have a persistent queue application and I want to bypass writing to the DBMS if it tells me it's corrupt. The question is: how can I tell the difference between a corrupt-database exception and a generic exception?
    Thanks Larry

    Hi Larry,
    RunRecoveryException: a human being should take a look at the stack trace and take appropriate action. The application can simply close and re-open the environment to run recovery, but the error condition may persist and therefore a human being should take a look.
    Depending on the underlying cause, which you'll see in the stack trace, you may be able to continue using the database by changing your configuration. For example, if the cause is OutOfMemoryError, you can correct the condition by increasing the heap size.
    But if you see DbChecksumException in the stack trace, then you may be experiencing data corruption because of a file system or disk error. In this case, you may need to dump and reload the data using the DbDump (with the -r or -R options) and DbLoad utilities, or you may need to restore from a backup. See:
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/util/DbDump.html
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/util/DbLoad.html
    http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/catastrophicrecovery.html
    Let us know if you have further questions.
    Ron
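The checksum idea behind DbChecksumException can be illustrated with a few lines of standard Java: store a checksum alongside each record when writing and verify it when reading, so corruption is detected instead of silently returned. (JE does this internally per log entry; this sketch is not its actual on-disk format.)

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Detecting record corruption with a stored checksum, the same principle
// that makes JE raise DbChecksumException on a damaged log entry.
public class ChecksumCheck {
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    // On read, recompute and compare against the checksum stored at write time.
    static boolean verify(byte[] data, long storedChecksum) {
        return checksum(data) == storedChecksum;
    }

    public static void main(String[] args) {
        byte[] record = "queue-entry-42".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(record);
        System.out.println(verify(record, stored));  // true: record intact
        record[0] ^= 0xFF;                           // simulate on-disk corruption
        System.out.println(verify(record, stored));  // false: treat as corrupt
    }
}
```

In an application like the persistent queue above, a failed verify is the signal to stop writing and fall back to dump/reload or backup restore, as Ron describes.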

  • To RAID or not to RAID, that is the question

    People often ask: Should I raid my disks?
    The question is simple; unfortunately the answer is not. So here I am going to give you another guide to help you decide when a RAID array is advantageous and how to go about it. Note that this guide also applies to SSDs, with the exception of the parts about mechanical failure.
     What is a RAID?
    RAID is the acronym for "Redundant Array of Inexpensive Disks". The concept originated at the University of California, Berkeley in 1987 and was intended to create large storage capacity from smaller disks, without the need for the very expensive and highly reliable disks of the time, which often cost tenfold what smaller disks did. Today prices of hard disks have fallen so much that it is often more attractive to buy a single 1 TB disk than two 500 GB disks. That is the reason that today RAID is often described as "Redundant Array of Independent Disks".
    The idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. Note that 'Spanning' is not in any way comparable to RAID, it is just a way, like inverse partitioning, to extend the base partition to use multiple disks, without changing the method of reading and writing to that extended partition.
     Why use a RAID?
    Now with today's lower disk prices, why would a video editor consider a RAID array? There are two reasons:
    1. Redundancy (or security)
    2. Performance
    Notice that it can be a combination of both reasons, it is not an 'either/or' reason.
     Does a video editor need RAID?
    No, if the above two reasons, redundancy and performance, are not relevant to you. Yes, if either or both are.
    Re 1. Redundancy
    Every mechanical disk will eventually fail, sometimes on the first day of use, sometimes only after several years of usage. When that happens, all data on that disk are lost and the only solution is to get a new disk and recreate the data from a backup (if you have one) or through tedious and time-consuming work. If that does not bother you and you can spare the time to recreate the data that were lost, then redundancy is not an issue for you. Keep in mind that disk failures often occur at inconvenient moments, on a weekend when the shops are closed and you can't get a replacement disk, or when you have a tight deadline.
    Re 2. Performance
    Opponents of RAID will often say that any modern disk is fast enough for video editing, and they are right, but only to a certain extent. As fill rates of disks go up, performance goes down, sometimes by 50%. As the number of activities on the disk goes up, like accessing (reading or writing) the pagefile, media cache, previews, media, project file and output file, performance goes down the drain. The more tracks you have in your project, the more strain is put on your disk; 10 tracks require 10 times the bandwidth of a single track. The more applications you have open, the more your pagefile is used. This is especially apparent on systems with limited memory.
    The following chart shows how fill rates on a single disk will impact performance:
    Remember that I said previously the idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. That means a RAID will not fill up as fast as a single disk and not experience the same performance degradation.
    RAID basics
     Now that we have established the reasons why people may consider RAID, let's have a look at some of the basics.
    Single or Multiple? 
    There are three methods to configure a RAID array: mirroring, striping and parity check. These are called levels and levels are subdivided in single or multiple levels, depending on the method used. A single level RAID0 is striping only and a multiple level RAID15 is a combination of mirroring (1) and parity check (5). Multiple levels are designated by combining two single levels, like a multiple RAID10, which is a combination of single level RAID0 with a single level RAID1.
    Hardware or Software? 
    The difference is quite simple: hardware RAID controllers have their own processor and usually their own cache. Software RAID controllers use the CPU and the RAM on the motherboard. Hardware controllers are faster but also more expensive. For RAID levels without parity check like Raid0, Raid1 and Raid10 software controllers are quite good with a fast PC.
    The common Promise and Highpoint cards are all software controllers that (mis)use the CPU and RAM memory. Real hardware RAID controllers all use their own IOP (I/O Processor) and cache (ever wondered why these hardware controllers are expensive?).
    There are two kinds of software RAID's. One is controlled by the BIOS/drivers (like Promise/Highpoint) and the other is solely OS dependent. The first kind can be booted from, the second one can only be accessed after the OS has started. In performance terms they do not differ significantly.
    For the technically inclined: Cluster size, Block size and Chunk size
     In short: Cluster size applies to the partition and Block or Stripe size applies to the array.
    With a cluster size of 4 KB, data are distributed across the partition in 4 KB parts. Suppose you have a 10 KB file: three clusters will be occupied, 4 KB + 4 KB + 2 KB. The remaining 2 KB in the last cluster is called slack space and cannot be used by other files. With a block size (stripe) of 64 KB, data are distributed across the array disks in 64 KB parts. Suppose you have a 200 KB file: the first 64 KB is located on disk A, the second 64 KB on disk B, the third 64 KB on disk C and the remaining 8 KB on disk D. Here there is no slack space, because the block size is subdivided into clusters. When working with audio/video material a large block size is faster than a small one; when working with smaller files, a smaller block size is preferred.
    Sometimes you have an option to set the 'chunk size', depending on the controller. It is the minimal size of a data request from the controller to a disk in the array, and is only relevant when striping is used. Suppose you have a block size of 16 KB and you want to read a 1 MB file: the controller needs to read 64 blocks of 16 KB. With a chunk size of 32 KB, the first two blocks will be read from the first disk, the next two blocks from the next disk, and so on. If the chunk size is 128 KB, the first 8 blocks will be read from the first disk, the next 8 blocks from the second disk, etcetera. Smaller chunks are advisable with smaller files; larger chunks are better for larger (audio/video) files.
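The cluster, block and chunk arithmetic above can be made concrete with two small helpers (illustrative only, not part of any RAID tool):

```java
// Worked arithmetic for the cluster/stripe examples in the text.
public class StripeMath {
    // Slack space: the unused tail of the last cluster a file occupies.
    static long slack(long fileSize, long clusterSize) {
        long r = fileSize % clusterSize;
        return r == 0 ? 0 : clusterSize - r;
    }

    // Number of stripe blocks a file occupies on an array (ceiling division).
    static long blocks(long fileSize, long blockSize) {
        return (fileSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        // 10 KB file in 4 KB clusters: 4 KB + 4 KB + 2 KB, leaving 2 KB slack.
        System.out.println(slack(10 * 1024, 4 * 1024));    // 2048
        // 1 MB file with a 16 KB block size needs 64 blocks.
        System.out.println(blocks(1024 * 1024, 16 * 1024)); // 64
    }
}
```

With a chunk size of 32 KB (two 16 KB blocks), those 64 blocks are handed to the disks two at a time, matching the example above.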
    RAID Levels
     For a full explanation of various RAID levels, look here: http://www.acnc.com/04_01_00/html
    What are the benefits of each RAID level for video editing and what are the risks and benefits of each level to help you achieve better redundancy and/or better performance? I will try to summarize them below.
    RAID0
    The Band-Aid of RAID. There is no redundancy! The risk of losing all data is multiplied by the number of disks in the array: a 2-disk array carries twice the risk of a single disk, and an X-disk array carries X times the risk of losing it all.
    A RAID0 is perfectly OK for data that you will not worry about losing, like the pagefile, media cache, previews or rendered files. It may be a hassle if you have media files on it, because losing them requires recapturing, but it is not the end of the world. It would be disastrous for project files.
    Performance wise a RAID0 is almost X times as fast as a single disk, X being the number of disks in the array.
    RAID1
    The RAID level for the paranoid. It gives no performance gain whatsoever. It gives you redundancy, at the cost of a disk. If you are not meticulous about making backups all the time, RAID1 may be a better solution, because you can never forget to make a backup and you can restore instantly. Remember that backups require a disk as well. IMO this RAID1 level can only be advised for the C drive, and only if you do not have any trust in the reliability of modern-day disks. It is of no use for video editing.
    RAID3
    The RAID level for video editors. There is redundancy! There is only a small performance hit when rebuilding an array after a disk failure, thanks to the dedicated parity disk. There is quite a performance gain achievable, but the drawback is that it requires a hardware controller from Areca. You could do worse, but apart from being the Rolls-Royce amongst hardware controllers, it is expensive like the car.
    Performance-wise it will achieve around 85% x (X-1) on reads and 60% x (X-1) on writes over a single disk, with X being the number of disks in the array. So with a 6-disk array in RAID3, you get around 0.85 x (6-1) = 425% of the performance of a single disk on reads, and 300% on writes.
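Those rules of thumb are easy to turn into a small calculator; the 85% and 60% factors are the estimates from the text, not measured values:

```java
// Rule-of-thumb RAID3 throughput relative to a single disk, per the text:
// reads ~85% per data disk, writes ~60% per data disk, with X-1 data disks.
public class Raid3Estimate {
    static double readGain(int disks)  { return 0.85 * (disks - 1); }
    static double writeGain(int disks) { return 0.60 * (disks - 1); }

    public static void main(String[] args) {
        // A 6-disk RAID3 array: ~425% of a single disk on reads, ~300% on writes.
        System.out.println(readGain(6));
        System.out.println(writeGain(6));
    }
}
```

The same shape of formula (a per-disk factor times the number of data disks) applies to the RAID5/6 estimates below, just with lower factors.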
    RAID5 & RAID6
    The RAID level for non-video applications, with distributed parity. This makes for a somewhat severe performance hit in case of a disk failure. The double parity in RAID6 makes it ideal for NAS applications.
    The performance gain is slightly lower than with a RAID3. RAID6 requires a dedicated hardware controller; RAID5 can be run on a software controller, but the CPU overhead negates the performance gain to a large extent.
    RAID10
    The RAID level for paranoids in a hurry. It delivers the same redundancy as RAID1, but since it is a multilevel RAID combined with a RAID0, it delivers twice the performance of a single disk at four times the cost, apart from the controller. The main advantage is that you can have two disk failures at the same time without losing data, but what are the chances of that happening?
    RAID30, 50 & 60
     Just striped arrays of RAID 3, 5 or 6 which doubles the speed while keeping redundancy at the same level.
    EXTRAS
    RAID level 0 is striping, RAID level 1 is mirroring and RAID levels 3, 5 & 6 are parity check methods. For parity check methods, dedicated controllers offer the possibility of defining a hot-spare disk. A hot-spare disk is an extra disk that does not belong to the array, but is instantly available to take over from a failed disk in the array. Suppose you have a 6-disk RAID3 array with a single hot-spare disk and assume one disk fails. What happens? The data on the failed disk can be reconstructed onto the hot-spare in the background, while you keep working with negligible impact on performance. In mere minutes your system is back at the performance level you were at before the disk failure. Some time later you take out the failed drive, replace it with a new drive and define that as the new hot-spare.
    As stated earlier, dedicated hardware controllers use their own IOP and their own cache instead of using the memory on the mobo. The larger the cache on the controller, the better the performance, but the main benefits of cache memory are when handling random R+W activities. For sequential activities, like with video editing it does not pay to use more than 2 GB of cache maximum.
    REDUNDANCY(or security)
    Not using RAID entails the risk of a drive failing and losing all data. The same applies to RAID0 (or, better said, AID0), only multiplied by the number of disks in the array.
    RAID1 or 10 overcomes that risk by offering a mirror, an instant backup in case of failure, at a high cost.
    RAID3, 5 or 6 offers protection against disk failure by reconstructing the lost data in the background (1 disk for RAID3 & 5, 2 disks for RAID6) while you continue your work. This is further enhanced by the use of hot-spares (a double assurance).
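The background reconstruction that RAID3 and RAID5 perform rests on simple XOR parity: the parity block is the XOR of all data blocks, so any single missing block can be recomputed from the survivors. A minimal sketch (the block contents are made up for the demo):

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together (single-parity RAID math)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three data disks and one parity disk:
data = [b"disk-A..", b"disk-B..", b"disk-C.."]
parity = xor_blocks(data)

# "Disk B" fails; rebuild it from the survivors plus the parity block:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

RAID6 survives two simultaneous failures by adding a second, differently computed parity (Reed-Solomon style), which plain XOR alone cannot provide.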
    PERFORMANCE
     RAID0 offers the best performance increase over a single disk, followed by RAID3, then RAID5 and finally RAID6. RAID1 does not offer any performance increase.
    Hardware RAID controllers offer the best performance and the best options (like adjustable block/stripe size and hot-spares), but they are costly.
     SUMMARY
     If you only have 3 or 4 disks in total, forget about RAID. Set them up as individual disks, or, the better alternative, get more disks for better redundancy and better performance. What does it cost today to buy an extra disk, compared to the downtime you have when a single disk fails?
    If you have room for at least 4 or more disks, apart from the OS disk, consider a RAID3 if you have an Areca controller, otherwise consider a RAID5.
    If you have even more disks, consider a multilevel array by striping a parity check array to form a RAID30, 50 or 60.
    If you can afford the investment, get an Areca controller with a battery backup module (BBM) and 2 GB of cache. Avoid software RAID as much as possible, especially under Windows.
    RAID, if properly configured, will give you added redundancy (or security) to protect you from disk failure while you can continue working, and will give you increased performance.
    Look carefully at this chart to see what a properly configured RAID can do for performance, and compare it to the earlier single-disk chart to see the difference, while taking into consideration that you can have one disk (in each array) fail at the same time without data loss:
    Hope this helps in deciding whether RAID is worthwhile for you.
    WARNING: If you have a power outage without a UPS, all bets are off.
    A power outage can destroy the contents of all your disks if you don't have a proper UPS. A BBM may not be sufficient to help in that case.

    Harm,
    thanks for your comment.
    Your understanding was absolutely right.
    Sorry, my mistake: it's a QNAP 639 PRO, populated with five 1 TB drives; one bay is empty.
    So for my understanding, in my configuration you suggest NOT to use RAID0. I'm not willing to have more drives in my workstation, because when my projects are finished I archive them on the QNAP or on another external drive.
    My only intention is to have as much speed and performance as possible while developing a project.
    BTW, I also use the QNAP as a media center in combination with a Sony PS3 to play the encoded files.
    For my final understanding:
    C: I understand
    D: I understand
    E and F: does it mean that when I create a project on E, all my captured and project-used MPEG files should be located on F? Or which media on F do you mean?
    Following your suggestions I want to rebuild Harm's Best Vista64-Benchmark comp to reach maximum speed and performance. Can I in general use those hardware components (except so many HD drives and except the Areca RAID controller) in my drive configuration C to F? Or would you suggest some changes in my situation?

  • SWF games and Firefox: can not run them, only download them

    I downloaded some Flash games, all of them SWF files, and I open them with Firefox, but Firefox cannot play these games.
    What is the problem?

    Ok, I found a "fix". kurt was right:
    kurt wrote:Downgrade to shared-mime-info 1.1-1
    First of all, let's get something good to test.
    http://www.homestarrunner.com/sbemail35.swf
    In the file "/usr/share/mime/packages/freedesktop.org.xml", find the section for "application/vnd.adobe.flash.movie". Delete it all (from the opening "<mime-type" tag to the closing "</mime-type>" tag) and replace it with this:
    <mime-type type="application/x-shockwave-flash">
    <comment>Shockwave Flash file</comment>
    <comment xml:lang="ar">ملف Shockwave Flash</comment>
    <comment xml:lang="be@latin">Fajł Shockwave Flash</comment>
    <comment xml:lang="bg">Файл — Shockwave Flash</comment>
    <comment xml:lang="ca">fitxer Shockwave Flash</comment>
    <comment xml:lang="cs">Soubor Shockwave Flash</comment>
    <comment xml:lang="da">Shockwave Flash-fil</comment>
    <comment xml:lang="de">Shockwave-Flash-Datei</comment>
    <comment xml:lang="el">αρχείο Shockwave Flash</comment>
    <comment xml:lang="en_GB">Shockwave Flash file</comment>
    <comment xml:lang="eo">dosiero de Shockwave Flash</comment>
    <comment xml:lang="es">archivo Shockwave Flash</comment>
    <comment xml:lang="eu">Shockwave Flash fitxategia</comment>
    <comment xml:lang="fi">Shockwave Flash -tiedosto</comment>
    <comment xml:lang="fo">Shockwave Flash fíla</comment>
    <comment xml:lang="fr">fichier Shockwave Flash</comment>
    <comment xml:lang="ga">comhad Shockwave Flash</comment>
    <comment xml:lang="gl">ficheiro sockwave Flash</comment>
    <comment xml:lang="he">קובץ של Shockwave Flash</comment>
    <comment xml:lang="hr">Shockwave Flash datoteka</comment>
    <comment xml:lang="hu">Shockwave Flash-fájl</comment>
    <comment xml:lang="id">Berkas Shockwave Flash</comment>
    <comment xml:lang="it">File Shockwave Flash</comment>
    <comment xml:lang="ja">Shockwave Flash ファイル</comment>
    <comment xml:lang="kk">Shockwave Flash файлы</comment>
    <comment xml:lang="ko">Shockwave 플래시 파일</comment>
    <comment xml:lang="lt">Shockwave Flash failas</comment>
    <comment xml:lang="lv">Shockwave Flash datne</comment>
    <comment xml:lang="ms">Fail Shockwave Flash</comment>
    <comment xml:lang="nb">Shockwave Flash-fil</comment>
    <comment xml:lang="nl">Shockwave Flash-bestand</comment>
    <comment xml:lang="nn">Shockwave Flash-fil</comment>
    <comment xml:lang="pl">Plik Shockwave Flash</comment>
    <comment xml:lang="pt">ficheiro Shockwave Flash</comment>
    <comment xml:lang="pt_BR">Arquivo Shockwave Flash</comment>
    <comment xml:lang="ro">Fișier Shockwave Flash</comment>
    <comment xml:lang="ru">файл Shockwave Flash</comment>
    <comment xml:lang="sk">Súbor Shockwave Flash</comment>
    <comment xml:lang="sl">Datoteka Shockwave Flash</comment>
    <comment xml:lang="sq">File Flash Shockwave</comment>
    <comment xml:lang="sr">Шоквејв Флеш датотека</comment>
    <comment xml:lang="sv">Shockwave Flash-fil</comment>
    <comment xml:lang="uk">файл Shockwave Flash</comment>
    <comment xml:lang="vi">Tập tin Flash Shockwave</comment>
    <comment xml:lang="zh_CN">Shockwave Flash 文件</comment>
    <comment xml:lang="zh_TW">Shockwave Flash 檔</comment>
    <alias type="application/futuresplash"/>
    <generic-icon name="video-x-generic"/>
    <magic priority="50">
    <match value="FWS" type="string" offset="0"/>
    <match value="CWS" type="string" offset="0"/>
    </magic>
    <glob pattern="*.swf"/>
    <glob pattern="*.spl"/>
    </mime-type>
    And now Firefox will play local SWF files properly. I'm sure there's a better fix for this. Maybe someone smarter than me can find it.
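For what it's worth, the `<magic>` rules in the snippet above just test the first three bytes of the file: "FWS" for uncompressed SWF and "CWS" for zlib-compressed SWF. Here is a quick way to check a downloaded file against the same signatures (the function names are mine, not part of the fix):

```python
def is_swf_header(header):
    """True if the bytes start with an SWF magic signature (FWS or CWS)."""
    return header[:3] in (b"FWS", b"CWS")

def looks_like_swf(path):
    """Check a file on disk against the same magic the MIME rule uses."""
    with open(path, "rb") as f:
        return is_swf_header(f.read(3))
```

Note that newer LZMA-compressed SWF files start with "ZWS" instead, which neither the XML match rules above nor this sketch will recognize.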
