3 Terabytes of data?

I'm working with a small firm to put together a 3 TB system.  Any recommendations?
I've been thinking about getting a server case that can hold as many drives as possible (they don't have to be hot swappable). Any suggestions for a good case that can hold lots of drives?
I was then thinking of putting enough 500GB SATA drives in a RAID 5 configuration, running Arch/Samba. The files are primarily going to be CD/DVD images that will be accessed a few times a day to move files from the server to a production burner. So I was thinking just about any processor (P4 or better) would work. And then I've been reading that two network connections on a gigabit network (which is what they are running on) would work best.
Any thoughts? This size of hardware is new to me and there are lots of options out there...
Thanks,
Chris....

We are going to build a 1.5 TB storage array with the Intel RAID controller SRCS28X (SATA II-300 x 6, RAID 0, 1, 5, 10, 50, 128 MB cache, PCI-X (PCI-X is extended PCI, not PCI Express)) and six 300 GB Seagate Barracuda 7200.9 drives (SATA 2.5 + NCQ + 16 MB buffer). We will probably use XFS as the filesystem for this RAID 5 array.
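For rough sizing: usable RAID 5 capacity is (number of drives - 1) x drive size, since one drive's worth of space goes to parity. A quick back-of-the-envelope check of both builds (sketched in PowerShell purely as a calculator; capacities are nominal, before filesystem overhead):

# 3 TB usable from 500 GB drives in RAID 5: data drives plus one drive's worth of parity
$dataDrives  = [math]::Ceiling(3000 / 500)   # 6 drives' worth of data
$totalDrives = $dataDrives + 1               # 7 drives in the array

# The 1.5 TB build above: six 300 GB drives in RAID 5
$usableGB = (6 - 1) * 300                    # 1500 GB, i.e. about 1.5 TB usable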

Similar Messages

  • Oracle vs MS SQL Server to handle around 6-7 terabytes of data

    I am considering Oracle 11g to handle around 6-7 terabytes of data. This data will accumulate over a period of 8 months; any data older than 8 months is purged from the database.
    Some of my MS SQL DBAs are saying SQL Server offers similar performance at a very low cost. Is this really true? On what basis or parameters can I compare Oracle with MS SQL Server to nullify the price advantage?
    I am not looking for ease of development; I am purely looking at handling the huge data volume while still being able to provide good performance. Apart from performance, if there is any parameter which makes a difference, then please highlight it.

    As with any database, requirements tend to drive the best technology to utilize.
    I have run Oracle and MS SQL Server as well as MySQL and PostgreSQL.
    If you are looking to support a fairly large user base on a database that is 6-7 TB in size, where a good portion of the data is subject to access, then there really is no comparison. Oracle does a much better job of handling large user bases with large data sets than MS SQL Server or MySQL, at least based on my experience with them.
    MySQL works very well for average user bases and smaller databases where the amount of data accessed is small. In other words, you can have 6 TB of data and if you only frequently access 5 GB of it then MySQL will work pretty well. If a large portion of the data is accessed all the time by many users, you will struggle with MySQL. Most MySQL databases I ran had issues at user bases of a few hundred with a database size of around 100-200 GB. Now, my main expertise is in Oracle, so there may be ways in MySQL to address this, but not easily, as the normal research and experiment channels did not yield good results for me.
    I have had an MS SQL Server database with thousands of users fall on its face with a database only 20 GB in size; it was completely CPU bound with 8 CPUs. I then moved the database to Oracle with minor code changes to accommodate the differences between Oracle and SQL Server, using the same hardware specs, and CPU was a steady 50% for the same workload and basically the same application.
    I have found the capabilities of Oracle for large data sets and large user bases provide advantages that the other databases just did not offer, or did not do well enough to overcome. That does not mean that Oracle is perfect or that the other databases are bad, but there are a lot of options and capabilities in Oracle that I can take advantage of, not to mention that if I need to I can run on high-end Unix hardware and go to 64 CPUs, dozens of I/O channels, etc. I cannot do that with MS SQL Server, as Windows is my only option and the Intel platform just does not hit the capabilities of higher-end Unix hardware.
    When deciding on the database to use, cost should be the last factor. If the database will not do what you need it to do, then low cost does not help you. Determine the needs and requirements and then examine the database features and capabilities to determine what might work best for your situation. If it is a critical business function and you need the Cadillac, then your business will pay for the Cadillac; if it needs a Malibu then pay for the Malibu, and if it needs the Yugo then pay for the Yugo. Implement the right technologies where you need them. However, there is something of an advantage to limiting the number of technologies you need to support, as a jack of all trades never becomes an expert, and that in and of itself can be costly when trying to overcome issues caused by the limitations of a database technology.

  • Any effective storage recommendation(s) for 2 terabytes of data?

    I am responsible for data storage and backup for a mid-size ad agency. We are currently utilizing two external terabyte drives (storage capacity totalling two terabytes).
    These drives are burning out regularly which is leading me to believe it is time to begin researching better options for storage and backup. (They tend to die on us just outside of warranty!)
    There are 6 people in our art department and our files are backed up nightly. We are running on all the latest and greatest Apple machines, software and operating systems.
    Can anyone recommend a better, more efficient and possibly even a more cost-effective solution?
    We have done some basic research into off-site storage, which I thought would be our best solution, but this has proven to be more costly than we had expected due to the amount we are storing.
    Can anyone assist or make a helpful recommendation based on their own experience with storage problems?
    Macbook Pro   Mac OS X (10.4.8)  

    Welcome to Apple Discussions!
    If everyone is Using Mac OS X Tiger, I'd post this question in:
    Using Mac OS X Tiger
    If there is a mixture of Mac OS X versions being used, but Tiger is among them, then I'd still post there.
    If there is a mixture of Mac OS X and Windows, you might find the Windows Compatibility forum works better for such a question.
    Regardless, I'll say this much. I like the cases and drives by http://www.macsales.com/ http://www.granitedigital.com/ (Relax Technologies), and internal SATA and IDE drives by Western Digital and Seagate.
    In some cases having an Xserve RAID (http://www.apple.com/xserveraid/)
    works better for some people than just using standard external drives. I'm not sure which cases those are, but the fact that there is redundancy I'm sure helps a lot. There is a separate forum just for Xserve RAID; if you posted there, I'm sure people could tell you when it is wise to have one or not.
    Try only to use the Discussions Forum Feedback forum for suggestions on how to improve Discussions, or if you want to alert the moderators about some posts you don't like in the forum.

  • Putting a massive amount of data into a single variable

    I'm running a command to find the amount of space .xar files (it's a Citrix temp file of sorts) are using on my organization's main file server. The server itself presents quite a few terabytes of data and all our users' profile folders are there, so we're
    talking about many thousands of folders with a whole mess of files. The command I was using is the following:
    (gci w: -r -force -include *.xar | measure -sum -property Length).Sum
    It took a long time to run, at least 5 hours, but it was finished when I got back to my desk this morning. Turns out we have 63 gigabytes of those .xar files.
    I have a few questions:
    1) Is this the best way to get that info?
    2) If I need to put that command into a variable {$size = (gci w: -r -force ... Length).Sum} to get other info, am I going to kill my poor server's memory?
    Thank you

    I may need to correlate size across department as sorted by user, or get total number of instances.
    Is that department / user information all contained in the path to the file, or do you need to do something like query the Owner from NTFS security on the file?
    Here's a quick C# function to search for files and output the full path and size of each to a CSV file.  If this still isn't fast enough, maybe speed can be improved more by using the Win32 API's FindFirstFile / FindNextFile / FindClose functions,
    but I'm not certain how much of a benefit that would give.
    Note:  This code requires at least .NET Framework 4.0, which means PowerShell 3.0.
    Add-Type -TypeDefinition @'
    using System;
    using System.IO;
    using System.Text;
    using System.Collections.Generic;

    public static class FileSearcher
    {
        public static void DumpFileInfoToCsv(string directory, string filter, bool recurse, string csvFile)
        {
            SearchOption searchOption = recurse ? SearchOption.AllDirectories : SearchOption.TopDirectoryOnly;

            if (directory == null)
                throw new ArgumentNullException("directory");
            if (csvFile == null)
                throw new ArgumentNullException("csvFile");

            DirectoryInfo dirInfo = new DirectoryInfo(directory);
            if (!dirInfo.Exists)
                throw new DirectoryNotFoundException(directory);

            using (StreamWriter csv = new StreamWriter(csvFile, false, Encoding.UTF8))
            {
                // CSV header, then one row per file: full path and size in bytes.
                csv.WriteLine("\"Path\",\"Size\"");

                if (string.IsNullOrEmpty(filter))
                    filter = "*";

                foreach (FileInfo fileInfo in dirInfo.EnumerateFiles(filter, searchOption))
                {
                    csv.WriteLine("\"{0}\",\"{1}\"", fileInfo.FullName, fileInfo.Length);
                }
            }
        }
    }
'@

    [FileSearcher]::DumpFileInfoToCsv('W:\', '*.xar', $true, "$home\XarFiles.csv")
    You could then examine the CSV file at leisure to group and total up sizes however you like. (Remember to convert the CSV's "Size" column from a string to a numeric type first.)
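    As a rough sketch of that post-processing step (assuming the CSV produced above at $home\XarFiles.csv, and guessing that the first folder level under W:\ corresponds to a department or user - adjust the grouping key to your actual layout):

    $report = Import-Csv "$home\XarFiles.csv" |
        ForEach-Object {
            # Convert the Size column from string to a number, and take the first
            # folder level under W:\ as a rough grouping key.
            [pscustomobject]@{
                TopFolder = ($_.Path -split '\\')[1]
                SizeBytes = [int64]$_.Size
            }
        } |
        Group-Object TopFolder |
        ForEach-Object {
            [pscustomobject]@{
                TopFolder = $_.Name
                Files     = $_.Count
                SizeGB    = [math]::Round(($_.Group | Measure-Object SizeBytes -Sum).Sum / 1GB, 2)
            }
        } |
        Sort-Object SizeGB -Descending

    $report | Format-Table -AutoSize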

  • WDMyCloud (NAS) not mapping - 16tb of data not accessible via Windows!

    I have a 16TB NAS which was showing up just fine in 8.1.  10 will not allow access to the NAS via Explorer at all... It sees it but then tells me I don't have access to
    the network resource.  I see lots of posts about this and no solution...  THIS IS A DEAL BREAKER FOR ME. 
    Many terabytes of data to access on-demand and the only way I have to get to it in Windows is by using Western Digital’s horrid app????  
    PLEASE HELP.

    Hi,
    If this issue only happens in the Windows 10 Technical Preview, then I would suggest you temporarily turn off the firewall and AV installed on this system, because they might block the connection. Have you tried to manually connect to the NAS via IP or UNC path (see the sketch below)?
    Please make sure NetBIOS over TCP/IP is enabled: from Network and Sharing Center, open the network connection > Properties > Internet Protocol Version 4 (TCP/IPv4) > Advanced > WINS.
    Meanwhile, please update your drive to the latest available firmware to ensure optimum compatibility.
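    For the manual connection suggested above, a quick sketch from PowerShell (the IP address, share name and account below are placeholders for your own NAS details):

    # Map the NAS share by IP to rule out name-resolution problems;
    # prompts for the NAS credentials instead of using the Windows logon account.
    New-PSDrive -Name Z -PSProvider FileSystem -Root '\\192.168.1.50\Public' -Persist -Credential (Get-Credential)

    # Or the classic way:
    net use Z: \\192.168.1.50\Public * /user:nasuser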
    It's also recommended to contact the WD support
    http://support.wdc.com/contact/index.asp?lang=en
    NOTE: This response contains a reference to a third party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control these sites and has not tested any software or information found on these sites.
    Yolanda Zhu
    TechNet Community Support

  • Suggested methods for full backup of XServe RAID data

    I know this is only peripherally related to the discussion topic, but since every other suggestion posted here is followed by the disclaimer that you should make a full backup of your data before proceeding with any major operations on your RAID arrays, I'd like to know what more experienced admins do in order to create a full backup for reasonably fast recovery in case of substantial data loss during maintenance/repair.
    Our current "backup" availability is incremental optical disc archival (our data is mostly "write-once"), but this isn't entirely practical for recovery since it's over a terabyte of data. Since the connected server has a free hot-swappable SCSI drive bay as well as an interface for external SCSI devices, not to mention the fiber channel and ethernet interfaces, the options that I'd consider in order would be:
    1. A handful of 150-500 GB SCSI hard drives, rotated out of the hot-swappable bay
    2. An external tape drive attached to the SCSI interface (with appropriate tape size, maybe LTO-2 with 200 GB native capacity?)
    3. Some other external SCSI storage device
    4. Larger optical disc archival (I hear there are technologies arriving in the near future)
    5. Network-based option; remote seems impractical due to sheer size, but perhaps local?
    The idea is to make a full backup (long-term solutions are superior of course) of 1-2 TB of data on the Xserve RAID before attempting major surgery. Suggestions for common, accepted, tested, efficient methods for accomplishing this would be greatly appreciated. I apologize if this thread isn't on-topic enough for some of you.
    -Brian

    Brian,
    Tape IMO is kinda yucky (to steal a term from your average 3 year old). It's fairly slow to back up to, it's very slow to restore, and it's actually not that reliable by itself (I worked with a large enterprise customer who said their backups were successful about 70% of the time (!!!)).
    That said, tape has the advantage that you can offsite it and archive it very cheaply, and the media are fairly cheap, so you can make lots of backups, so if one fails, you probably can restore the data from another tape.
    Disks are more expensive initially, but end up being pretty reliable, and you get a lot more flexibility (plus, they're fast).
    An emerging "best of both worlds" backup strategy is what's called disk to disk to tape, where you typically back up to another large "disk," for example a second Xserve RAID. Data is then backed up from the second disk to tape, which is taken offsite... thus tape is used for what it's best at (offsite archival). Restore can be from disk in most cases, which is 10-20x faster than restoring from tape. People use software packages like Netvault's Bakbone or Atempo's Time Navigator, which can handle the whole process, and it works quite well. The backup disks (e.g. the RAID) can be onsite, or can be at a backup site a couple KM away, attached via optical (this is preferable, for DR reasons).
    For cases where a second Xserve RAID is prohibitively expensive, cheaper (and slower) RAID 5 enclosures like Wiebetech's RAIDtech can provide a large (say, 1.6 TB) RAID 5 volume, accessible over FW800 or SATA (not sure if they have a SATA-based one yet).

  • Missing Creation Date Causing Problems

    Where to begin. Well, we've moved several terabytes of data to a new Xserve RAID. Somewhere in the process of setting up permissions, etc. we've lost some creation dates of files, mostly .NEF files as far as I can tell. The problem comes in with Bridge CS3. When you click on a folder with these files, you get a double popup window with "The Operation Could Not Be Completed". Once you click through the popup for every file in the folder, everything seems to be peachy. Any way to fix this so the users don't get the error message?
    All Mac users are using Bridge 2.1.0.100, some PowerPC G5s, some Intel Mac Pros. Lots and lots of RAM in each.

    AFP has a lot of problems. On our network, using an Xserve, we were forced to use single authentication because of this exact issue. Security-wise it's pretty poor, but the server isn't accessible off our physical subnet, so it was acceptable. Unfortunately we are now using a mixed environment (PC/Mac) and the server will shortly be moved to a central location, so a move to Active Directory authentication will be needed.
    To get around these problems we're using a Samba implementation called DAVE (the licensing is pretty reasonable). It allows full AD authentication/workgroups etc. and fixes a lot of issues with Apple's SMB. Our shared storage system for video editing (EditShare) also uses DAVE, because Final Cut Pro and QuickTime (both Apple products) can't reliably write QuickTime files across a network using Apple's AFP, but are 100% reliable with DAVE.
    Glad to see some of these issues being sorted
    BRETT

  • Z640 about to order, any dates for shipping with Windows 10 ?

    As I heard rumours that Win 10 may be shipping circa 24th-26th July, I thought I might delay the order so it arrives (built to order) with Windows 10.
    A) Anybody know when the new OS will start shipping?
    B) Would experienced users suggest asking for a downgrade to Win7 Pro, or does Windows 10 have a good pre-launch rep (unlike Windows 8 did)?
    When formulating your advice please note I am crap with computers and the only thing I want to do is add more disks later. No interest in RAID etc. - just want it simple.
    I was going to order with 2 x 16 GB - appreciating that 4 x 8 would be better, but I don't want to have to discard RAM later if 32 GB is not enough.
    Will 32 GB be enough - interested in any users real world experience please.....

    I knew you had the knowledge to move the data; but I've seen a lot of folks get into trouble doing it.
    And most have an inflated idea of how much data their systems write.
    What kind of write load do you actually have?
    Virtually all drives provide SMART monitoring of how much data has been written, as well as projected lifespan, and information on expected program/erase cycles is available online. 
    The Samsung 850 Pro, for example, should last 6,000 program/erase cycles.  Being conservative and assuming a really bad 3:1 write amplification (which is an overestimate), imagine being able to write the entire SSD and erase it 2,000 times.
    For a single 512 GB SSD, that would be 1 terabyte of write load per day for almost 3 years.  Remember, you can read it as many times as you like; NAND only wears on erase cycles, which happen during re-writes.
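    Spelled out, the arithmetic behind that estimate (using the numbers above: 6,000 rated P/E cycles, a conservative 3:1 write amplification, and a 512 GB drive):

    $peCycles   = 6000     # rated program/erase cycles (Samsung 850 Pro, per above)
    $writeAmp   = 3        # conservative host-to-NAND write amplification
    $capacityGB = 512      # drive size

    $fullDriveWrites  = $peCycles / $writeAmp                    # 2000 effective full-drive writes
    $totalHostWriteTB = $fullDriveWrites * $capacityGB / 1000    # ~1024 TB of host writes
    $yearsAtOneTBaDay = $totalHostWriteTB / 365                  # ~2.8 years at 1 TB per day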
    Most people do not write anywhere near 1 terabyte of data per day.  I use my system heavily (it is, among other things, an SVN server) and I don't even reach 20% of that.  I'm spreading the write load across 4 drives with the RAID
    array so the load on any one device is divided further.
    But you don't have to guess at what you're writing - what does your SMART data say?
    See this article for some real-world tests/results re: endurance...
    http://techreport.com/review/26523/the-ssd-endurance-experiment-casualties-on-the-way-to-a-petabyte
    With larger SSDs the numbers generally work out so well that people shouldn't even give wearout a second thought.  And everyone should do backups to mitigate potential data loss regardless.
    -Noel
    Detailed how-to in my eBooks:  
    Configure The Windows 7 "To Work" Options
    Configure The Windows 8 "To Work" Options

  • Loading 1 TB of Data in MSSQL Server

    Hi guys,
    Please, I need a little assistance from you. I am trying to load over 1 TB of data from an SQL table (SQL Server 2008) to another table on another server (MS SQL Server 2012) using SSIS.
    I started this load 5 days ago and so far I have not been able to load even 10% of the data.
    Please, is there any faster approach I can use to load the data in the shortest possible time?
    Thanks
    me

    Have you verified that network latency isn't an issue and the problem is at the destination server? (try directly copying a large file from one server to the other)
    As others have mentioned above, drop all indexes except the clustered index.
    Enable trace flag 610 at the destination server if possible (this enables minimally logged inserts into populated b-trees; its absence is probably the main reason why your load is so slow).
    Use a query to select the data even if you're selecting all the rows (this will help keep the buffer to the smallest size required).
    Add an ORDER BY on the clustered key to the SELECT query.
    Use the OLE DB destination task.
    Choose "table or view - fast load" as the data access mode.
    Untick "check constraints".
    Make sure "table lock" is selected.
    Whether you select "keep identity" and "keep nulls" is up to you - you haven't told us enough about your data to be able to advise either way.
    Leave "rows per batch" blank.
    If you're able to enable TF 610 on the destination server, then leave the maximum insert commit size at the default. If you're unable to do this, then set it to 0 (be aware that this will prevent you from being able to monitor the progress on the database side,
    e.g. using sp_spaceused 'tablename').
    The only thing left is to add a custom hint to the OLE DB destination task. Select it and look at the Properties window. Find the "FASTLOADOPTIONS" property. It should say TABLOCK if you've got keep identity and keep nulls disabled.
    Add the following to it: ,ORDER(your destination clustered key, which should be identical to the ORDER BY in your source query)
    The purpose of this is that you're telling the destination server you're giving it the data in the same order as the clustered key, so it can pump it directly into the index. It doesn't need to do a sort at the end to re-order it.
    so if your table has a clustered key on TranID,TranDate,CustID your FASTLOADOPTIONS should look like
    TABLOCK,ORDER(TranID,TranDate,CustID)
    In my opinion (and many years' experience doing very many data migrations and data warehouse ETLs that move many terabytes of data), this is the fastest and easiest way to move data using SSIS from one SQL Server to another.
    I've blogged about minimally logged inserts using SSIS before here:
    http://jakubka.blogspot.com.au/2014/06/ssis-and-minimally-logged-inserts.html
    Edit: two more hints -
    Set the packet size of your connections to 32767.
    Execute the package from one of the servers, not from a workstation (running it on the destination server will probably be faster - you might be able to use the SQL Server destination connector in that scenario). The data flows through the machine that's executing the package.
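    One more aside: if you are able to commit in batches (the TF 610 scenario above), you can keep an eye on progress from the destination side by polling sp_spaceused while the package runs. A rough sketch, assuming the SqlServer module's Invoke-Sqlcmd is available; the server, database and table names are placeholders:

    # Poll the destination table size every 5 minutes while the SSIS load runs.
    while ($true) {
        $stats = Invoke-Sqlcmd -ServerInstance 'DestServer' -Database 'DestDb' -Query "EXEC sp_spaceused 'dbo.DestTable'"
        "{0:u}  rows: {1}  reserved: {2}" -f (Get-Date), $stats.rows, $stats.reserved
        Start-Sleep -Seconds 300
    }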
    Jakub @ Adelaide, Australia Blog

  • Determine number of processors for a 2 terabyte database

    I want to select the Oracle server for 10g that will store 2-2.5 terabytes of data. Does anybody know how to calculate the number of processors for a 64-bit machine for the above configuration?
    Thanks

    It depends heavily on what sort of processing you are going to be doing -- data volume is only one piece of the puzzle. If you have 100 users connecting to do ad-hoc queries, you will need a lot more CPUs than you would if you have a few reports generated in a batch every evening. It also depends on your platform; different CPUs have vastly different performance characteristics.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Update2

    Some issues are fixed in the optional update. Hope this helps some of you with those issues. Again, the update went smoothly and cleanly for me.
    Issues that this update fixes
    This update package fixes the issues that are documented in the following Microsoft Knowledge Base (KB) articles.
    KB list
    2979880 (http://support.microsoft.com/kb/2979880/) - Blank icon in the Jump List after the application is updated in Windows 8.1 or Windows Server 2012 R2
    2976996 (http://support.microsoft.com/kb/2976996/) - Expired certificates cannot be removed when automatic certificate rollover is disabled in Windows Server 2012 R2
    2976344 (http://support.microsoft.com/kb/2976344/) - Update to add pronunciation Pinyin strings when you input Chinese characters in Windows RT 8.1 or Windows 8.1
    2975620 (http://support.microsoft.com/kb/2975620/) - Private bytes keep increasing when you encrypt data on SQL Server on Windows 8.1 or Windows Server 2012 R2
    2975080 (http://support.microsoft.com/kb/2975080/) - Computer opens a blank browser window when RMS client sends WIF request to a Windows Server 2012 R2-based AD FS server
    2975078 (http://support.microsoft.com/kb/2975078/) - Error occurs when you access exclusion policies in an AD RMS server that is running Windows Server 2012 R2
    2975066 (http://support.microsoft.com/kb/2975066/) - You cannot sign in to a web application when you use certificate authentication method in Windows Server 2012 R2
    2974308 (http://support.microsoft.com/kb/2974308/) - Windows Server Essentials integration with Office 365 or Windows Azure Active Directory is blocked
    2973055 (http://support.microsoft.com/kb/2973055/) - Error 58 when an application calls BackupRead function to back up files that are shared by using SMB in Windows
    2972257 (http://support.microsoft.com/kb/2972257/) - Windows Server 2012 R2-based cluster freezes on multiple nodes when you add, rename, or remove disks
    2972254 (http://support.microsoft.com/kb/2972254/) - Hyper-V virtual machines cannot be connected to sometimes when TCP connections reconnect in Windows Server 2012 R2
    2972251 (http://support.microsoft.com/kb/2972251/) - Event 1002 when system file information folder is shared in Windows 8.1 or Windows Server 2012 R2
    2972112 (http://support.microsoft.com/kb/2972112/) - Applications lose local data when you deploy non-English Windows RT 8.1 or Windows 8.1 by using Configuration Manager
    2971171 (http://support.microsoft.com/kb/2971171/) - ADFS authentication issue for Active Directory users when extranet lockout is enabled
    2969039 (http://support.microsoft.com/kb/2969039/) - 0x7E Stop error occurs randomly when you close SMB sessions in Windows 8.1 or Windows Server 2012 R2
    2967456 (http://support.microsoft.com/kb/2967456/) - You cannot sort the group lists by principal name in ADUC in Windows
    2966087 (http://support.microsoft.com/kb/2966087/) - You intermittently cannot connect to the DirectAccess server by using the IP-HTTPS adapter in Windows 8.1 and Windows Server 2012 R2
    2965904 (http://support.microsoft.com/kb/2965904/) - Task name is deleted after you link a storage report to a scheduled task in Windows
    2964835 (http://support.microsoft.com/kb/2964835/) - Too few tiles are displayed during image customization in Windows
    2962774 (http://support.microsoft.com/kb/2962774/) - "An extended error occured" when you add a group Managed Service Account in Windows
    2962142 (http://support.microsoft.com/kb/2962142/) - Can't disable security settings in Group Policy Preferences for Internet Explorer 10
    2959144 (http://support.microsoft.com/kb/2959144/) - USB device remove and install repeatedly when it is connected to a Windows 8.1 or Windows Server 2012 R2-based computer
    2957984 (http://support.microsoft.com/kb/2957984/) - The rdgiskdcproxy:i:1 property cannot be set for the remote apps that are hosted by RD Web Access on Windows Server
    2956344 (http://support.microsoft.com/kb/2956344/) - Clients randomly lose connection to a Windows Server 2012 R2 or Windows Server 2008 R2 SP1 Network Policy Server
    2954031 (http://support.microsoft.com/kb/2954031/) - The Group Policy Management Console does not generate a status report for a domain
    2953997 (http://support.microsoft.com/kb/2953997/) - Windows MultiPoint Server occasionally crashes when you start the WMS shell in Windows Server 2012 R2 or Windows Server 2012
    2953972 (http://support.microsoft.com/kb/2953972/) - Clicking "Find more" under mobile broadband causes Searching status to freeze in Windows 8.1 and Windows RT 8.1
    2936943 (http://support.microsoft.com/kb/2936943/) - Registry.pol is corrupted after an abnormal termination during the writing process in Windows Server
    2934797 (http://support.microsoft.com/kb/2934797/) - USB device appears with a yellow exclamation mark in Windows Device Manager in Windows 8.1 or Windows Server 2012 R2
    2866693 (http://support.microsoft.com/kb/2866693/) - Wired or Wireless Network GPO setting is not displayed in GPMC report in Windows
    2990795 (http://support.microsoft.com/kb/2990795/) - "Failed to fetch servers" when you try to find available IP addresses in IP Address Management in Windows Server 2012 R2
    2985822 (http://support.microsoft.com/kb/2985822/) - SystemSettings.exe process crashes when you add a VPN connection on a Windows RT 8.1 or Windows 8.1-based computer
    2984374 (http://support.microsoft.com/kb/2984374/) - Connect to incorrect entry point when Windows 8.1 or Windows Server 2012 R2-based computer wakes from sleep or hibernate
    2983590 (http://support.microsoft.com/kb/2983590/) - File download freezes and CPU usage high when WFP callout driver is installed in Windows 8.1 or Windows Server 2012 R2
    2983477 (http://support.microsoft.com/kb/2983477/) - Account picture shifts to right side after you unlock a computer from an application in Windows 8.1 or Windows RT 8.1
    2983139 (http://support.microsoft.com/kb/2983139/) - "0x0000009F" Stop error occurs on a Windows 8.1 or Windows Server 2012 R2-based computer
    2982498 (http://support.microsoft.com/kb/2982498/) - Touch input does not work in Windows Journal after Windows 8.1 wakes from sleep mode while the program is running
    2980433 (http://support.microsoft.com/kb/2980433/) - Properties of an item opens randomly when you double-click the item in Windows 8.1 or Windows Server 2012 R2
    2980415 (http://support.microsoft.com/kb/2980415/) - Playback stops when you open a file that the Xbox Music application is playing in Windows RT 8.1 or Windows 8.1
    2979923 (http://support.microsoft.com/kb/2979923/) - "Processing error occurred" when you detect status of Active Directory in Windows Server 2012 R2-based domain controller
    2979877 (http://support.microsoft.com/kb/2979877/) - Folder redirection stops working when multiple users log on to a Windows 8.1 or Windows Server 2012 R2-based computer
    2979097 (http://support.microsoft.com/kb/2979097/) - Computer does not wake when you disconnect AC power from the Windows 8.1-based or Windows Server 2012 R2-based computer
    2979070 (http://support.microsoft.com/kb/2979070/) - Update to improve battery life during low-power audio playback in Windows 8.1 and Windows RT 8.1
    2979053 (http://support.microsoft.com/kb/2979053/) - "Set as metered connection" setting for Wi-Fi and MBB connections remains in Off state in Windows RT 8.1 or Windows 8.1
    2979052 (http://support.microsoft.com/kb/2979052/) - "App can't set print settings" error or DBCS characters cannot be printed in Windows 8.1 or Windows RT 8.1
    2979051 (http://support.microsoft.com/kb/2979051/) - Windows Store app still reacts to search text when you delete the text in Search charm in Windows 8.1 and Windows RT 8.1
    2979050 (http://support.microsoft.com/kb/2979050/) - "Insert a SIM" text is still shown in VAN UI after you insert a SIM card in Windows 8.1 or Windows RT 8.1
    2978391 (http://support.microsoft.com/kb/2978391/) - Virtual machines perform slowly or crash in Windows 8.1 and Windows Server 2012 R2
    2978368 (http://support.microsoft.com/kb/2978368/) - 0x9F Stop error occurs randomly when a computer enters hibernation from InstantGo state in Windows 8.1 or Windows RT 8.1
    2978367 (http://support.microsoft.com/kb/2978367/) - Remote Desktop session freezes when you run an application in the session in Windows 8.1 or Windows Server 2012 R2
    2978104 (http://support.microsoft.com/kb/2978104/) - File or folder is not found or an error occurs when you configure SMB share properties in Windows RT 8.1 or Windows 8.1
    2978102 (http://support.microsoft.com/kb/2978102/) - Cluster nodes stop when multiple physical disks in storage space are disconnected in Windows Server 2012 R2
    2978101 (http://support.microsoft.com/kb/2978101/) - Windows 2012 R2-based Hyper-V host cluster freezes when virtual machines use shared virtual hard disks
    2978100 (http://support.microsoft.com/kb/2978100/) - URL of a web application is broken when you access the web application by using WAP in Windows Server 2012 R2
    2978096 (http://support.microsoft.com/kb/2978096/) - ExtendedProtectionTokenCheck setting keeps being disabled in AD FS 3.0 in Windows Server 2012 R2
    2976995 (http://support.microsoft.com/kb/2976995/) - You cannot access an SMB share that is located on a Windows 8.1 or Windows Server 2012 R2-based file server
    2976994 (http://support.microsoft.com/kb/2976994/) - Shared folder in Windows Server 2012 R2 or Windows 8.1 cannot be accessed by using SMB version 1 protocol
    2976965 (http://support.microsoft.com/kb/2976965/) - "Inaccessible, Empty or Disabled" message in Group Policy Results report for a remote computer in Windows Server 2012 R2
    2976946 (http://support.microsoft.com/kb/2976946/) - FindFirstPrinterChangeNotification can now request 3D printers in Windows 8.1 and Windows Server 2012 R2
    2976918 (http://support.microsoft.com/kb/2976918/) - You are prompted to re-enter credentials frequently when using Work Folders by using ADFS authentication in Windows 8.1
    2990532 (http://support.microsoft.com/kb/2990532/) - Browser icons cannot be unpinned from taskbar when you change the default browser in Windows RT 8.1 or Windows 8.1
    2983142 (http://support.microsoft.com/kb/2983142/) - Delay in turning on display when you reopen lid of a Windows 8.1 or Windows Server 2012 R2-based portable computer
    2982727 (http://support.microsoft.com/kb/2982727/) - PDF file pages are blank when you open the file by using a Windows Store app in Windows 8.1 or Windows Server 2012 R2
    2981650 (http://support.microsoft.com/kb/2981650/) - August 2014 OneDrive reliability update for Windows RT 8.1 and Windows 8.1
    2980756 (http://support.microsoft.com/kb/2980756/) - You cannot log on to an AD FS server when you use an alternative UPN suffix account in Windows Server 2012 R2
    2980665 (http://support.microsoft.com/kb/2980665/) - Windows crashes with Stop error when filter driver perform FltWriteFileEx function in Windows RT 8.1 or Windows 8.1
    2980661 (http://support.microsoft.com/kb/2980661/) - Update to add a new performance counter feature for Tiered Storage Spaces in Windows Server 2012 R2
    2980659 (http://support.microsoft.com/kb/2980659/) - 0x0000007E Stop error when you enable ETW and storage spaces debug logs on a Windows Server 2012 R2-based computer
    2980345 (http://support.microsoft.com/kb/2980345/) - Desktop isn't shown correctly when network icon in notification area is changed repeatedly in 64-bit version of Windows

    Interestingly, Microsoft backpedaled on several of the updates involving font handling.
    See the "Known issues" section of this page:
    http://support.microsoft.com/kb/2982791
    I've no .otf fonts installed anywhere but in C:\Fonts but on the 18th I went ahead and uninstalled KB2982791 and KB2975719 anyway as Microsoft advised.  The other two, KB2970228, and KB2975331 were apparently never installed here.  This leaves
    13 of the August updates still intact.
    I can verify that the content of the font cache file that Microsoft suggests deleting is now different than before the deletion. 
    My system, which wasn't unstable with all the August updates in, apparently isn't unstable without KB2982791 and KB2975719 either (it's already crunched through many terabytes of data and it's run for nearly 7 days without a glitch), so for me personally
    this is all a bit academic.  However, in the grander sense, it's disturbing that Windows Updates now must be regarded with greater suspicion.  Perhaps when the next set comes out I should run them for a week or more in a VM before deciding
    to update my workstation.
    -Noel
    Detailed how-to in my eBooks:  
    Configure The Windows 7 "To Work" Options
    Configure The Windows 8 "To Work" Options

  • How do I back up one hard drive onto another hard drive regularly

    I have two Western Digital Passport hard drives.  I am using a MacBook Pro. One has about 400 GB of media on it, and I want an exact copy of it on the other. For the first run, I am just drag/dropping the files, but I don't want to do this every time I need an updated backup.  Is there a solution for hard drive 2 to copy exactly what has changed on hard drive 1, without deleting 2 and dragging all the files from 1 back over? I am looking for a solution without the obvious "just drag the files you changed over".
    Thanks in advance

    Use cloning software - see below for all options.
    However, "dragging over changed files" is what I do, and I have an ENORMOUS file collection.
    The easy way is to make fresh rotating HD CLONES (see below).
    Methodology to protect your data. Backups vs. Archives. Long-term data protection
    Data Storage Platforms; their Drawbacks & Advantages
    #1. Time Machine / Time Capsule
    Drawbacks:
    1. Time Machine is not bootable, if your internal drive fails, you cannot access files or boot from TM directly from the dead computer.
    OS X Lion, Mountain Lion, and Mavericks include OS X Recovery. This feature includes all of the tools you need to reinstall OS X, repair your disk, and even restore from a Time Machine backup, but:
    "you can't boot directly from your Time Machine backups"
    2. Time machine is controlled by complex software, and while you can delve into the TM backup database for specific file(s) extraction, this is not ideal or desirable.
    3. Time machine can and does have the potential for many error codes in which data corruption can occur and your important backup files may not be saved correctly, at all, or even damaged. This extra link of failure in placing software between your data and its recovery is a point of risk and failure. A HD clone is not subject to these errors.
    4. Time Machine mirrors your internal HD; in cases of data corruption, this corruption can immediately spread to the backup as the two are linked. TM is perpetually (or often) connected to your computer, and because it usually lacks isolation, migrating errors or corruption to the backup is either automatic or extremely easy to do unwittingly.
    5. Time Machine does not keep endless copies of changed or deleted data, and you are often not notified when it deletes them; likewise you may accidentally delete files off your computer and this accident is mirrored on TM.
    6. Restoring from TM is quite time intensive.
    7. TM is a backup and not a data archive, and therefore by definition a low-level security of vital/important data.
    8. TM's working premise is a "black box" backup of OS, apps, settings, and vital data that nearly 100% of users never verify until an emergency hits or their computer's internal SSD or HD is corrupt or dead, and this is an extremely bad working premise for vital data.
    9. Given that data created and stored is growing exponentially, the fact that TM operates as a "store-it-all" backup nexus makes TM inherently incapable of easily backing up massive amounts of data, nor is doing so a good idea.
    10. TM's working premise is a backup of a user's system and active working data, and NOT massive amounts of static data, yet most users never take this into consideration, making TM a high-risk locus of data "bloat".
    11. In the case of Time Capsule, wifi data storage is a less than ideal premise given possible wireless data corruption.
    12. TM like all HD-based data is subject to ferromagnetic and mechanical failure.
    13. *Level-1 security of your vital data.
    Advantages:
    1. TM is very easy to use either in automatic mode or in 1-click backups.
    2. TM is a perfect novice level simplex backup single-layer security save against internal HD failure or corruption.
    3. TM can easily provide a seamless no-gap policy for active data, something that is often not easily achieved with HD clones or HD archives (the gap only appears if the user is lazy about making data saves).
    #2. HD archives
    Drawbacks:
    1. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    2. Unless the user ritually copies working active data to HD external archives, then there is a time-gap of potential missing data; as such users must be proactive in archiving data that is being worked on or recently saved or created.
    Advantages:
    1. Fills the gap left in a week or 2-week-old HD clone, as an example.
    2. Simplex no-software data storage that is isolated and autonomous from the computer (in most cases).
    3. HD archives are the best idealized storage source for storing huge and multi-terabytes of data.
    4. Best-idealized 1st platform redundancy for data protection.
    5. *Perfect primary tier and level-2 security of your vital data.
    #3. HD clones (see below for full advantages / drawbacks)
    Drawbacks:
    1. HD clones can be incrementally updated to hourly or daily, however this is time consuming and HD clones are, often, a week or more old, in which case data between today and the most fresh HD clone can and would be lost (however this gap is filled by use of HD archives listed above or by a TM backup).
    2. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    Advantages:
    1. HD clones are the best, quickest way to get back to 100% full operation in mere seconds.
    2. Once a HD clone is created, the creation software (Carbon Copy Cloner or SuperDuper) is no longer needed whatsoever, and unlike TM, which requires complex software for its operational transference of data, a HD clone is its own bootable entity.
    3. HD clones are unconnected and isolated from recent corruption.
    4. HD clones allow a “portable copy” of your computer that you can likewise connect to another same Mac and have all your APPS and data at hand, which is extremely useful.
    5. Rather than, as many users do, thinking of a HD clone as a "complementary backup" to the use of TM, a HD clone is superior to TM both in ease of returning to 100% quickly and in its autonomous nature; while each has its place, TM can and does fill the gap in, say, a 2-week-old clone. As an analogy, the HD clone itself is the brick wall of protection, whereas TM can be thought of as the mortar, which will fill any cracks in data on a week, 2-week, or 1-month-old HD clone.
    6. Best-idealized 2nd platform redundancy for data protection, and 1st level for system restore of your computers internal HD. (Time machine being 2nd level for system restore of the computer’s internal HD).
    7. *Level-2 security of your vital data.
    HD cloning software options:
    1. SuperDuper HD cloning software APP (free)
    2. Carbon Copy Cloner APP (will copy the recovery partition as well)
    3. Disk utility HD bootable clone.
    #4. Online archives
    Drawbacks:
    1. Subject to server failure, or suspension due to non-payment of your hosting account.
    2. Subject, due to lack of security on your part, to being attacked and hacked/erased.
    Advantages:
    1. In case of house fire, etc. your data is safe.
    2. In travels, and propagating files to friends and likewise, a mere link by email is all that is needed and no large media needs to be sent across the net.
    3. Online archives are the perfect and best-idealized 3rd platform redundancy for data protection.
    4. Supremely useful in data isolation from backups and local archives in being online and offsite for long-distance security in isolation.
    5. *Level-1.5 security of your vital data.
    #5. DVD professional archival media
    Drawbacks:
    1. DVD single-layer disks are limited to 4.7Gigabytes of data.
    2. DVD media are, given rough handling, prone to scratches and light-degradation if not stored correctly.
    Advantages:
    1. Archival DVD professional blank media is rated for in excess of 100+ years.
    2. DVD is not subject to mechanical breakdown.
    3. DVD archival media is not subject to ferromagnetic degradation.
    4. DVD archival media correctly sleeved and stored is currently a supreme storage method of archiving vital data.
    5. DVD media is once written and therefore free of data corruption if the write is correct.
    6. DVD media is the perfect ideal for “freezing” and isolating old copies of data for reference in case newer generations of data become corrupted and an older copy is needed to revert to.
    7. Best-idealized 4th platform redundancy for data protection.
    8. *Level-3 (highest) security of your vital data. 
    [*Level-4 data security under development as once-written metallic plates and synthetic sapphire and likewise ultra-long-term data storage]
    #6. Cloud based storage
    Drawbacks:
    1. Cloud storage can only be quasi-possessed.
    2. No genuine true security and privacy of data.
    3. Should never be considered for vital data storage or especially long-term.
    4. *Level-0 security of your vital data. 
    Advantages:
    1. Quick, easy and cheap storage location for simplex files for transfer to keep on hand and yet off the computer.
    2. Easy source for small-file data sharing.
    #7. Network attached storage (NAS) and JBOD storage
    Drawbacks:
    1. Subject to RAID failure and mass data corruption.
    2. Expensive to set up initially.
    3. Can be slower than USB, especially over WiFi.
    4. Mechanically identical to USB HD backup in failure potential, higher failure however due to RAID and proprietary NAS enclosure failure.
    Advantages:
    1. Multiple computer access.
    2. Always on and available.
    3. Often has extensive media and application server functionality.
    4. Massive capacity (also its drawback) with multi-bay NAS, perfect for full system backups on a larger scale.
    5. *Level-2 security of your vital data.
    JBOD (just a bunch of disks / drives) storage
    Identical to NAS in form factor except drives are not networked or in any RAID array, rather best thought of as a single USB feed to multiple independent drives in a single powered large enclosure. Generally meaning a non-RAID architecture.
    Drawbacks:
    1. Subject to HD failure but not RAID failure and mass data corruption.
    Advantages:
    1. Simplex multi-drive independent setup for mass data storage.
    2. Very inexpensive dual purpose HD storage / access point.
    3. *Level-2 security of your vital data.

  • HT201250 How can I restore files using Time Machine to a new external drive?

    I need to restore all the files on a failed hard drive—upwards of a terabyte of data. How can I instruct Time Machine to copy the files to a new external hard drive? Thanks!

    Hold down the option key and select
              Browse Other Backup Disks...
    from the Time Machine menu in the menu bar. The menu icon looks like a clock running backwards. If you don't have that menu, open the Time Machine preference pane and check the box marked
              Show Time Machine in menu bar

  • My thoughts on testing DocumentDB

    Despite knowing DocumentDB won't be an option yet for my needs because of the lack of OrderBy and other known limitations in the Preview, I wanted to try it out and run some basic query tests against it to see what's already possible, how it performs, where
    it lacks features and if it would make sense to consider DocumentDB as a future replacement for my current combined database Azure (SQL Server + Table Storage) solution.
    I want to share my findings as a feedback on this preview.
    My scenario
    While the big picture is much more complex, for this post and my DocumentDB test, I reduced my app functionality to its very basic requirement: users can subscribe to news channels and have all articles of their subscriptions shown in a combined list. There
    are thousands of news channels available and users may subscribe to 1 to 100s or even 1000s of them, while 1-100 is the common range of subscriptions. The app has tagging for read/unread articles, starred articles and everything can also be organized in folders
    and users can filter their article lists by these tags - but I left all of these complexities out for now.
    DocumentDB architecture
    One collection for News Channels, one collection for Articles. I decided to split the channels from the articles as there are some similarities in column names and this would have caused issues in index design. I imported around 2,000 NewsChannel rows from my
    SQL DB and around 3 million articles, filling up the Articles collection to nearly 10 GB of data.
    A NewsChannel document looks like this:
    id - I took the int value from my SQL database for this
    Name
    Uri
    An Article document looks like this:
    id - I also took the int value from my SQL database for this
    NewsChannelId - in SQL DB, the foreign key from the NewsChannel table
    Title
    Content
    Published - DateTime converted to Epoch
    I put range indexes on id and Published as most of the time, I'd query for ranges of documents (greater than an id, newer than a specific published date, ...). I also excluded some columns from indexing, like Content in the articles document.
    Test 1 - Get newest 50 articles for a single news channel
    SELECT TOP 50 * FROM Articles WHERE NewsChannelId == [50] ORDER BY id DESC
    I knew this would fail due to the lack of OrderBy. I tried to find a solution with custom indexes, but there is no way to define an index to be organized in descending order so that newest entries would always be returned first. This would be enough, as I don't really
    need ascending order for articles, so it would have made up for the lack of OrderBy. But it does not seem to be possible.
    Result: Impossible
    Test 2 - Get newest 50 articles for all subscribed news channels of a user
    SELECT TOP 50 * FROM Articles WHERE NewsChannelId IN ([1, 6, 100, 125, 210, ...]) ORDER BY id DESC
    This would be the most used query and it would have been very interesting to see how it could perform, but due to its similarity to Test 1, it's also not possible. A variant of it will be described in the next test (3).
    Result: Impossible
    Test 3 - Get any articles newer than a given article from all subscribed news channels of a user
    This was the first test where I hoped to get some results. Each article document has a range index on id and its Published date, so this should be fast and nice. I seem to have failed to create the range index for id correctly, as DocumentDB complained
    about id not being a range index - that sucked, because to fix it I would have to recreate the whole database and re-import all data. But luckily, the index on Published was created correctly and it would do for testing this kind of query just as well as
    the id.
    SELECT * FROM Articles WHERE NewsChannelId IN [1, 6, 100, 125, 210, ...] AND Published > [someDate]
    Unfortunately, I found out there is no "contains" query supported in DocumentDB that would work like a WHERE IN query in SQL. But if I want to query articles for all subscribed channels, I will have to pass a list of NewsChannel IDs to the
    query. That was really a surprise to me, as something like this seems just as much a piece of base functionality as OrderBy.
    Result: Impossible
    Test 4 - Get any articles newer than a given article for a single news channel
    Just as Test 3, but only for 1 news channel - so finally here, DocumentDB will support my needs.
    SELECT * FROM Articles WHERE NewsChannelId == [id] AND Published > [someDate]
    And yes, this works, and the performance seems OK. But to my surprise, even if this just returns 5 documents and the query is well supported by the range index on Published, it has extremely high RU costs - depending on the query, somewhere between 2,500
    and 6,000 in my tests - which would mean that with 1 CU, I would already be throttled for such a simple query.
    Result: Possible, quite fast but insanely high RU costs
    Test 5 - Get a single article from a News Channel
    As expected, this works like a charm. Fast and with 1 RU cost per query.
    Result: Works great.
    Other stuff I noticed:
    For my scenario, I see no way to scale my DocumentDB. I already reached the limit of a single collection with only a fragment of all my data. I would need to do the partitioning myself, for example by having a single collection for each NewsChannel, like
    I did in Table Storage where the NewsChannelId is the partition key - but due to the collection number limitations, and even more due to the limited query capabilities INSIDE a single collection, I see no way I could do performant queries if I would need
    to query multiple, maybe even hundreds of, different collections in one query.
    Even if the space limit of a single collection would be raised to terabytes of data, I see the issue that I will run into serious performance problems as, as I understand, a collection can always be only on a single node. To support more load, I will be
    required to split my data over multiple collections to have multiple nodes, but then again, this would not support my query needs. Please correct me if I'm seeing something wrong here.
    Wrap up
    Seeing that even my most basic query needs cannot be supported by DocumentDB right now, I'm a bit disappointed. OrderBy is coming, but it won't help without WHERE IN queries, and even with them, I still don't know if this is something that will perform well
    in combination, and what, in such cases, the RU costs will look like if simple range queries with a small number of documents returned already cost that much.
    I'm looking forward what's happening next with DocumentDB, and I really hope I can replace my current solution with DocumentDB at some point, but currently, I don't see it. A good fit for me would be MongoDB, but it's not PAAS and it's hard and resource-intensive
    to host ... so DocumentDB looked very nice at first sight. I really hope those issues will be resolved, and they will be resolved soon.
    b.

    Hi Aravind,
    thank you very much for your detailed response.
    Test 1: That's a good idea for a workaround, although it would get complicated when I want the top 50 documents from all subscribed news channels, which can be 100 or more (Test 2). The index documents can also get pretty large, which might bring me to the
    limit of a single document, requiring me to split them across multiple documents for a single news channel. However, for a proof-of-concept implementation with DocumentDB, this will do fine. I might try that :)
    Test 2: Yes, but the ORs are limited to a maximum of 5 (?) currently, so not really an option as I need more most of the time.
    Test 3: I will have a look at this and see how that performs!
    Test 4: I used a classic UNIX epoch timestamp (seconds since 1970) and I also used a precision of 7 for the index. See below the code I used to create the index. So I think this should be OK. However, I'm glad to share the details of my account and
    a sample query so you can have a look for yourself. I will contact you by Mail with details
    articleCollection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
    {
        IndexType = IndexType.Range,
        Path = "/\"InsertedEpoch\"/?",
        NumericPrecision = 7
    });
    As for partitioning - thanks for the article. For me, a fan-out-on-read strategy would be required if I did my partitioning by News Channel ID ... but that's what's giving me headaches. Given that it is not uncommon for a user of my app to have 100 or more
    subscriptions, I would need to issue 100 parallel queries. I tried something like that with Azure Table Storage and found it to be a performance nightmare. That's why I currently use Table Storage as a pure document store but still do all computations of the
    requested articles in SQL Server. But yes, I might have to put more thought into that and see how I can squeeze out the performance I need. SQL Server is fine and I can do a lot with indexes and indexed views to support my query scenarios - but
    it also has its scalability limits, and the reason it still works well is that my app is in a testing/beta state and does not yet have the amount of data and users it will have when it is finally live. That's the reason I am searching for a NoSQL solution like
    DocumentDB that should support my needs, mainly for scale-out, better.
    Thanks again for your response and your suggestions, with that, I might be able to do a basic proof of concept implementation that supports my core features for some testing with DocumentDB and see how it's doing.
    I will contact you per mail for the RU test data
    Happy new year! :)
    Bernhard

  • HT5550 Locating frequently used audio in a single Event in FCPX 10.1?

    I have a bunch of audio files that I regularly use to score my podcasts. I am just starting to figure out this new 10.1 update. Instead of copying them into each new "Event" (or "Project" now that it appears the project files are contained within the event files), it seems like it might be easier to create an "Event" that houses all of those audio files and then just link to them.
    I understand that doing this means that if I ever move the audio "Event" that EVERY other event or project that references it will now have to be redirecting/relinked to the updated location, but honestly, I have not ever moved the source folder for the audio. I have organized my system so that all of the audio is contained in an easily accessible folder on my external hard drive. It seems like a good idea to have all of the audio contained in an easily accessible "Event" in the new FCPX 10.1.
    Am I right? Is there a flaw in my thinking? Let me know if you need more info to answer the question.

    That's what I'm doing here for quite a while (before 10.1):
    I have some 'standards' for my weekly sports reports, all collected in an extra Event 'Standards'.
    Now with FCPX 10.1, I'm looking forward to creating such an Event in each 'Season' Library, but without importing the files = keeping them where they are now, on some ext. HDD. When editing each project, I can switch inside the Lib to that Event and apply clips as needed.
    When the season is over, I'll close that Lib, create the next, create again a 'Standards' Event, ... - and no extra space on my 'working' hard drive...
    I haven't done that yet - the whole process of 'updating' my terabytes of data is some nice & cozy holiday work
    For my needs, these new 'Libraries' are perfect:
    one season = 1 lib, 1 game = 1 Event with 3 - 6 cameras, 1 report = 1 project
    one fast harddrive = actual game, one slow harddrive = 'standards', another slow one = final export - easy!
    (plus a couple of extra drives for back-ups)
