Server storage sizing for PI 7.1

Hello,
What is the recommended HDD storage for a development / production PI 7.1 server? And is it possible to control the logs that PI 7.1 creates on the Oracle database server?
Regards,
Vandana.

> What is the recommended HDD storage for a development / production PI 7.1 server?
Did you do a sizing exercise with your hardware vendor?
> Is it possible to control the logs that PI 7.1 creates on the Oracle database server?
What do you mean by "control the logs"?
Markus

Similar Messages

  • DMS Server storage sizing

    Hi All,
    We configured a DMS server two years ago for test purposes, and it works fine.
    The free hard disk space on this server is only 4 GB; it is a Windows-based system with 9 GB of RAM.
    We are now planning full-fledged DMS usage, and I need your recommendation on system sizing.
    Please share your experiences, as I am not sure about the data growth.
    Regards,
    Regards,

    Hi Ashutosh,
    Download the Content Server installation guide and read the section "Points to Consider Before Installation".
    It will give you an idea of the required DB space.
    Regards,
    Regards,

  • Exchange Server 2013 - Sizing

    Can someone please provide Exchange Server 2013 sizing for 180 mailbox users with a 10 GB mailbox profile size and one DAG with two mailbox servers?
    Please provide sizing for Exchange Server in a virtual environment.
    I have already created a test setup and am running it through test mode.
    Existing environment:
    2 x CAS servers - 2 cores - 4 GB memory - 500 GB hard disk space
    2 x Mailbox servers - 2 cores - 12 GB memory - 1.5 TB disk space
    These servers are built on VMware, hosted on an ESX server.
    Please provide your feedback on best practices and a solution to support 180 mailbox profiles with a 10 GB mailbox size limit.
    Thanks in advance,
    Benhur

    Hi Benhur,
    we have over one thousand mailboxes ( inc. room,shared,calendar and so on).We only have 2 servers With Mailbox and CAS role.No need to seperate those roles if you really dont have to.2013 CAs only Proxy/redirect requests and nothing is stored (other that
    daily performance log and iis log).C: drive for Exchange install i recomend 70gb on each server.
    When it comes to mailbox database,are you planning to use Public folders?In 2013 Public folders can be stored on seperate databases.
    Also it is not recomended to have Exchange and mailbox database on same physical disk,so you should seperate those out and expand over time without taking Down the server.
    In easy sample the server can look like this:
    C: Install Exchange  70GB
    D: Database files 1GB ( here you map multiple databases in different folders,so it look like D:\DB01,D:\DB02,D:\DB03 and so on.And each folder is an own 300 gb disk that you Mount into one of these folders at time.Path and name have to be identical on both
    servers when setting up DAG.)
    E: Log files 1GB ( here you map multiple databases in different folders,so it look like E:\LOG01,E:\LOG02,E:\LOG03 and so on.And each folder is its own 70 gb disk that you Mount into one of these folders at time.Path and name have to be identical on
    both servers when setting up DAG.)
    Approx 300GB each database and 70 gb each log(for each database) should be fine.
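    As a rough sanity check of that layout against the 180-user requirement, here is a minimal Python sketch; the 20% overhead factor and the one-disk-per-database mapping are assumptions on my part, not official Exchange sizing guidance:
    # Back-of-the-envelope capacity check for the layout above.
    # The 20% overhead factor (indexes, whitespace, retention) is an assumption.
    import math
    users = 180
    quota_gb = 10
    overhead = 1.2        # assumed headroom factor
    db_disk_gb = 300      # one 300 GB disk per database, as suggested above
    raw_gb = users * quota_gb * overhead
    databases = math.ceil(raw_gb / db_disk_gb)
    print(f"~{raw_gb:.0f} GB of data across {databases} databases")
    print(f"plus {databases} x 70 GB log disks")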
    A 10 GB mailbox per user can cause performance issues if users work online (Citrix or other apps); with local caching, the initial sync may take a while when they are outside the work network.
    Don't forget that you should also have a dedicated replication network, or replication will consume bandwidth when you move multiple users over to different databases.
    Hope this helps!
    Off2work

  • Storage issues for Proxies using Final Cut Server

    Hi there,
    we have a fairly large amount of material that is put into Final Cut Server only to be archived again.
    I don't mind the Xserve being busy creating these proxy files, but they tie up too much space!
    (Maths: 500 GB / 40 h of DVCAM footage results in more than 100 GB of proxy files.)
    We have those 40 h coming through more than once a month, plus a whole LOT of material from the last 2 years - and our Xsan is only 7 TB in size.
    Although we could theoretically buy another fibre RAID, that solution is not really future-proof - it just pushes the moment we have to buy the next one a couple of months out. On top of that, I cannot afford to have expensive, fast Fibre Channel storage used for proxies of files that are long archived and have only very limited use (and IF we need them, we stick in the archive device and we're done).
    Any ideas how to get rid of proxy files from archived assets?
    I don't really want to take pen and paper and delete the proxies by hand from the bundle... I don't think FCSvr would like that either.
    thanks for any advice
    tobi

    So I'm not sure how your math arrives at 100 GB of proxy files.
    Are you creating Versions and/or Edit Proxies of everything?
    I ask because using the default Clip Proxy setting gives you file sizes similar to the ones below. These numbers aren't exact, because the default transcode setting uses variable bit rate (VBR) encoding for both video and audio, but assuming a relatively constant 800 kbps stream, here's roughly how large your Proxies.bundle file should be:
    800 kbps ≈ 100 KB/s, so:
    800 kbps * 30 secs ≈ 3 MB
    800 kbps * 60 secs ≈ 6 MB
    800 kbps * 60 secs * 60 min ≈ 360 MB per hour
    360 MB per hour * 40 ≈ 14.4 GB
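    The same estimate in a few lines of Python (a sketch; the ~800 kbps average bit rate is just the assumption from above):
    # Estimate the Proxies.bundle size for VBR clip proxies.
    # Assumes the ~800 kbps average bit rate discussed above.
    bitrate_kbps = 800
    hours = 40
    bytes_per_hour = bitrate_kbps * 1000 / 8 * 3600   # kbps -> bytes per hour
    total_gb = bytes_per_hour * hours / 1e9
    print(f"~{bytes_per_hour / 1e6:.0f} MB per hour, ~{total_gb:.1f} GB for {hours} h")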
    Also note that deleting an asset from FCSvr doesn't delete its proxy files, so you could have a lot of proxies left over from historical scans.

  • Server Sizing for OCS

    Does anyone (or Oracle) have a server sizing document for OCS and/or its individual modules [Files, Email, Calendar, etc.]?
    Looking at sizing for 100 users, 1,000 users, and 3,000 users.
    Thanks
    Hemant

    Contact your local Oracle Collaboration Suite Product Manager; they have a really cool online (internal-only) sizing tool for each component of OCS. This question has been asked many times by various members.
    Cheers
    Justin Miles

  • Server Sizing For Oracle Database

    Hi All,
    I need server sizing for the architecture below.
    The application is for a logistics company, and we are planning to host it centrally with two servers: one application server and one Oracle database server, along with a DR site (at another location). There are four locations, and each location will have 20 users accessing the application (20 x 4 = 80 users). We are using an MPLS network with 35 Mbps of bandwidth.
    1. Application server: Windows Server 2008 R2
    2. Database server: Windows Server 2008 R2, Oracle 11g R2
    I need server sizing documents.
    Thanks.

    EdStevens wrote:
    I'd seriously reconsider hosting an Oracle DB on Windows. Obviously there are many, many shops that do, and it is often because they do not have (and choose not to acquire) expertise in Linux. But I've been in IT for 30+ years and have worked on the IBM S/370 and its variants and descendants, Windows since v3, DEC VMS, IBM OS/2, Solaris, AIX, HP-UX, and Oracle Linux. The first Oracle database I ever created was on Windows 3.11, and at that point I had never seen *nix. Now I am in a position to state that Windows is the worst excuse for an operating system I have ever used. I am constantly amazed/amused by how often (at least once a month on schedule, plus unplanned times) our Windows SA has to send out a notice that he is rebooting his servers. I can't remember the last time we had to reboot a Linux server (I have four of them). Yes, I'm biased away from Windows, but that bias comes from experience. Hardly a day goes by that I don't see something that causes me to say to whoever is in earshot, "Have I told you how much I hate Windows?"
    Justin Mungal wrote:
    I was going to refrain from commenting on that, as I assumed they're a Windows shop and aren't open to any other OS (but my assumption could be incorrect). I haven't been working in IT for as long as many of the folks around here, only about 10 years. I'm a former sysadmin who maintained both Linux and Windows servers, but my focus was on Windows. In the right hands, Windows can be rock solid. If a sysadmin has to reboot Windows servers often, he is most likely doing something wrong, or is rebooting for security updates. It's never as simple as "Windows sucks" or "Linux sucks"; it all depends on who's running the system (again, in my opinion).
    EdStevens wrote:
    I have seen some Windows servers run uninterrupted for so long that no one could remember the admin password. But more often, memory leaks and the "weekly update" (replacing last week's bugs with this week's) are the culprit.
    Justin Mungal wrote:
    Yes, it really is sad how often you have to reboot for updates if you want to keep your system current. Mind you, it's better to have the fixes than not to have them (maybe). I rebooted my servers about once a month at my old place, which is not that bad. With that said, in my experience, Oracle on Windows is a major pain. It takes me much longer to do anything. Once you get proficient with a CLI like the bash shell, the Windows GUI can't compare.
    EdStevens wrote:
    Agreed. One of my many complaints about Windows is the poor excuse of a shell processor. I'm pretty proficient in command-line scripting, but I still cringe when I have to do it. Practically every line of code I write for a command script is accompanied by the remark "this is so lame compared to what I could do with a shell script". Same for vi vs. Notepad. But my real problem is the memory leaks and the registry. I'm fairly comfortable hacking certain areas of the registry, but the need to do so, and the arcane linkages between different areas of the registry and how they influence the process environment, remain a mystery to all but a tiny minority of admins. Compare that to *nix, where everything is well documented and "knowable".
    One (of many) anecdotal experiences, this one with my personal Win7 laptop: one time it crashed and refused to reboot. A bit of a Google search turned up some arcane keystroke sequence to put it into a recovery mode on boot-up, similar to getting into the BIOS, but much more complex; it may have involved standing on one foot while entering the sequence. Anyway, it entered a recovery process I've never seen before or since and repaired everything. My first thought was "hey, that was pretty cool". My second thought was "but only Windows would need such a facility".
    Bottom line? To paraphrase a famous Tom Hanks character: "My momma always said Windows was like a box of chocolates. You never know just what you'll get."
    Justin Mungal wrote:
    Haha... I like that one. Yes, the registry is definitely horrible. It's amazing to me that a single point of failure was Microsoft's answer to INI files. I think Windows and *nix both have their places. Server work definitely seems more productive to me in a *nix environment, but I think I'd jump off a cliff if I had to use it as my desktop environment day in, day out. The other problem is application lock-in; I can't blame the OS for that, but it's a reality... and using virtualization to run those applications seems to defeat the point to me.

  • Proper server sizing for SPARC/Solaris to Intel/R migrations

    Is there any collateral available concerning proper server sizing for customers migrating from SPARC/Solaris to Intel/RHEL? I was just asked by our account team, which is working with a very large customer in the pharma space.
    My initial knee-jerk reaction was to refer them to the OEM, but I thought I'd look around in these forums first.
    My searches have not returned any good results yet.

    Hi Mike
    the public SAP SD benchmark homepage ([http://www.sap.com/benchmark/|http://www.sap.com/benchmark/] -> SD two-tier results) gives interesting, though very indirect, sizing hints. There are several SAP SD benchmarks on SPARC/Solaris; I picked some of them:
    [2008075|http://download.sap.com/download.epd?context=B1FEF26EB0CC3466320983FEDBEB2A47615DC8F24866B0B6870E95AA3589B515566F7A57D98961BB]: 24650 SD-User w SAP ERP 6.0 (2005 Non-Unicode) on SPARC Enterprise Server M9000 (32CPUs)
    [2008062|http://download.sap.com/download.epd?context=40E2D9D5E00EEF7C974D9FC4D21DA80E794CB437BC196262799B27A2BB01B9D6]: 825 SD-User w SAP ERP 6.0 (2005 Non-Unicode) on SPARC Enterprise Server M3000 (1CPU)
    [2008058|http://download.sap.com/download.epd?context=40E2D9D5E00EEF7CB82813BD3F95797BAEB80527B36EC026E03386D659E4DE48]: 7520 SD-User w SAP ERP 6.0 (2005 Non-Unicode) on SPARC Enterprise T5440 (4CPUs)
    You probably have your customer's hardware setup together with its system utilization. Check it against the Solaris/SPARC benchmarks on the public page, and check the Intel benchmarks as well. Make sure the SAP release is the same (from 2009 on, a new SAP release is mandatory for benchmarks, so the numbers are not 100% comparable).
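    To turn those benchmark figures into a rough capacity estimate, you can scale by SD users per CPU. A minimal Python sketch; the T5440 figure is from the list above, while the 60% utilization and the Intel per-CPU figure are placeholders to be read off your customer's monitoring and a real Intel benchmark:
    # Rough cross-platform scaling from SD benchmark results.
    sparc_sd_users_per_cpu = 7520 / 4   # T5440: 7520 SD users on 4 CPUs
    current_cpus = 4
    current_utilization = 0.60          # placeholder: measured average load
    # Express the current workload in SD-user equivalents.
    workload = sparc_sd_users_per_cpu * current_cpus * current_utilization
    intel_sd_users_per_cpu = 2500       # placeholder: take from a real benchmark
    cpus_needed = workload / intel_sd_users_per_cpu
    print(f"~{workload:.0f} SD users -> ~{cpus_needed:.1f} Intel CPUs")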
    Thanks,
      Hannes

  • SQL Server Sizing for Lync Server 2013

    Hi,
    We are planning to install Lync Server 2013 for 800 users, but I can't find any information about sizing the SQL Server instance.
    Can anyone give me an estimate, or share prior experience with similar installations? I need to know the disk size, memory, and CPU count.
    Thanks in advance.

    There are a few variables to consider. Are you turning on monitoring and archiving, and is Enterprise Voice enabled? Is Persistent Chat enabled? Is it virtualized? Is this Lync Enterprise Edition, or Standard with a portion of the databases hosted on full SQL? Are you required to retain any information or logs for a specific amount of time? Are you keeping SQL backups local with maintenance plans, and for how long?
    I would suggest not going below 16 GB of memory for a SQL Server, but you may not need much more. If it's virtual, I'd give it at least 4 cores; disk space will vary based on what you're doing and how long you want to keep things. Thick-provision those drives, and make sure the logs and databases are on separate partitions.
    SWC Unified Communications

  • Importing a pkg with rely on server storage and roles for access control

    Hi, we run Standard 2008 R2. I'm reading the documentation on protection levels during package import to the catalog at
    https://msdn.microsoft.com/en-us/library/ms141747(v=sql.105).aspx but unfortunately the definition of the protection level "rely on server storage and roles for access control" isn't clear. They used the protection level's name to define it, which didn't help me.
    This option looks appealing, but it isn't clear why I need to enter a password when choosing it. Will my peers need to know that password when they export? Will the SQL Agent job need to present that password when running? If I just keep the current protection level, "encrypt with user key", will the Agent job be able to run the package? I'm sure the Agent isn't running with my credentials now. Also, how can I tell what protection level a package was deployed with last? I right-clicked the package in the catalog and don't see anything obvious about that. I already understand that on export the protection level is changed to "encrypt with user key".
    I'm going to look at the SQL Agent job right now to see what credentials it runs with.

    The first thing to understand is that the protection level determines how the package (.dtsx) file itself is protected. Once a package is deployed to the server and executed from the Agent, the conventional way is to use configurations (or parameters in 2012) to supply the required connection values and execute with those; it never uses the values that were set at design time. So the protection level matters little there, since execution is driven by the configuration.
    However, if you're planning to export an existing package to your system and modify it, that's where the protection level comes into play. If it is set to one of the EncryptSensitive... values, you'll have to provide the key (either a password or your user key, which is taken automatically from your login info) to see the sensitive information (connection info, passwords, etc.). The package will still open, and as long as you manually type in the missing values, you will be able to execute it. If the protection level is set to one of the EncryptAll... values, you will have no way to open the package at all unless you provide the password or have the correct user key.
    The "rely on server storage and roles for access control" option uses the SQL Server security context itself; it does no encryption within the package, but relies on SQL Server security. It applies when you store the package in SQL Server itself (msdb).
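    If you just want to see which protection level a package file currently carries, the value is stored in the package XML itself. A minimal Python sketch for the 2008-style .dtsx format (the XML layout differs in 2012+, the file path is a placeholder, and note that the property may be omitted when it still has its default value):
    # Read the ProtectionLevel property from a 2008-style .dtsx file.
    # 0=DontSaveSensitive, 1=EncryptSensitiveWithUserKey,
    # 2=EncryptSensitiveWithPassword, 3=EncryptAllWithPassword,
    # 4=EncryptAllWithUserKey, 5=ServerStorage
    import xml.etree.ElementTree as ET
    NS = "{www.microsoft.com/SqlServer/Dts}"
    root = ET.parse("MyPackage.dtsx").getroot()   # placeholder path
    for prop in root.iter(NS + "Property"):
        if prop.get(NS + "Name") == "ProtectionLevel":
            print("ProtectionLevel =", prop.text)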
    Visakh

  • SQL Server 2012 AlwaysOn for Multi-subnet geographical HA solution steps -- NON-Shared storage,standalone servers

    1. Can anyone provide detailed steps for multi-subnet HA with AlwaysOn Availability Groups?
    (SQL Server 2012 AlwaysOn for a multi-subnet geographical HA solution.)
    2. Do we need a VLAN for SQL Server 2012 on Windows 2012? Please provide details on whether a VLAN is required.
    (I have read MS links saying that for SQL Server 2012 and above, a VLAN is not required.)
    Environment:
    SQL Server 2012
    Windows 2012 R2 (2 servers in different locations)
    Non-shared storage (stand-alone servers)
    AlwaysOn Availability Group
    I have seen white papers, but they did not have detailed step-by-step instructions.
    Thanks

    Hi SQLDBA321,
    As you noted, SQL Server 2012 and higher versions have removed the requirement for a virtual local area network (VLAN). For more details, please review this blog:
    What you need for a Multi Subnet Configuration for AlwaysOn FCI in SQL Server 2012.
    You can then perform the steps in the following blog to set up an AlwaysOn Availability Group with multiple subnets:
    http://www.patrickkeisler.com/2013/07/setup-availability-group-with-multiple.html
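    One client-side detail worth adding for multi-subnet listeners: have applications connect with MultiSubnetFailover enabled, so the driver tries all of the listener's IP addresses in parallel instead of timing out after a failover. A minimal sketch using Python with pyodbc; the listener and database names are placeholders:
    # Connect to an AG listener in a multi-subnet configuration.
    # MultiSubnetFailover=Yes tries all listener IPs in parallel.
    # Server and database names below are placeholders.
    import pyodbc
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=aglistener.contoso.local;"
        "DATABASE=AppDB;"
        "Trusted_Connection=Yes;"
        "MultiSubnetFailover=Yes;"
    )
    print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])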
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Recommended sizing for ZLM server/infrastructure

    Hi,
    I hope this is the right category to place this question.
    I was wondering whether any of you have recommendations on the correct sizing for a ZLM infrastructure, i.e., how large a typical management zone can be when using the Postgres DB shipped with ZLM.
    What type of hardware are you running your ZLM server software on?
    Field reports welcome :)
    Best regards
    --Daniel

    Daniel,
    we have just one management zone, with a primary and a secondary at the backup data center. We manage SLES 9/10 (soon 11) and RHEL 4, plus a bunch of third-party software.
    The package repository is currently at 35 GB and the database at 3 GB. The object store is only about 100 MB, so no need to think about that; it will not grow very much when adding new systems or patches.
    What I would take care of is that the database and package repository sit on a logical volume that you can resize online in case you need to expand it. We use ReiserFS for that. The database, as you can see, is also not that big. But when you add new distributions (SLES 11 or so) you might need another 3 GB for the 32-bit and 3 GB for the 64-bit version. Then, as the number of updates grows, you need even more. Later on, SP1 will again take about 5-6 GB for both architectures.
    Rainer

  • STORAGE SIZING - running R/3 ECC 9 yrs - how to estimate NW2004s 7.0 need

    Does anyone have metrics they use for determining storage requirements when planning to install almost all of the NW2004s product suite?
    Example:
    BACKGROUND DATA
    We've been running ECC 5.0 (previously 3.1H, 4.6C) for 9 years.
    Total "PRD" storage usage for ECC = 1.5 TB
    Total storage available in our environment = 8.6 TB
    We traditionally keep an additional 2 full copies of PRD data for QA/TST etc.
    % of PRD storage used against total available = 17.4% (1.5 TB / 8.6 TB)
    Our ECC 5.0 grows approximately 350 GB per year (with archiving active)
    FUTURE STATE
    We plan to implement BI, CRM, SCM, SRM, MDM, KM, XI, and APO.
    We need to plan out 3 years of storage requirements for these additional components.
    Yes, we realize "IT DEPENDS", but we need to make some assumptions and put a stake in the ground.
    Does anyone have storage metrics where
    CRM storage = x% of R/3 storage
    BI storage = x% of R/3 storage
    SCM = x% of R/3 storage
    SRM = x% of R/3 storage
    etc.
    Any feedback is greatly appreciated......
    Regards

    Hello Doreen!
    Yes, it DOES depend...
    CRM etc.: no idea, we don't have them.
    BI 7.0 starts with some 30 GB (new system) and grows toward infinity, depending on the creativity and detail requirements of your BI people. It can become much larger than the production ERP data, we hear. My tip: always keep HUGE room for the BI database to grow; the increments are not hundreds of MB but dozens of GB, if you are lucky... And our guy never told us when they began to load a new cube...
    Buying storage for 3 years seems a bit long to me, as disk sizes double so often... Our colleagues are now migrating off the 36 GB drives they put in back in 2002, and the 72 GB drives are next to go. And the 600s are ante portas...
    What storage system do you have?
    Server sizing: don't ask me. We were always lucky taking it one size larger than "they" sized. But that depends on your workload growth rates.
    HTH, Rudi
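    One way to put that stake in the ground despite the "it depends": a simple projection with assumed ratios. A minimal Python sketch; the 1.5 TB, 350 GB/year, and two-extra-copies figures come from the post above, while every component percentage is a placeholder to replace with your own assumptions:
    # 3-year projection; component sizes as assumed fractions of ECC PRD.
    ecc_prd_tb = 1.5           # current ECC production size (from the post)
    growth_tb_per_year = 0.35  # ~350 GB/year with archiving (from the post)
    years = 3
    copies = 3                 # PRD + 2 full copies for QA/TST (from the post)
    ratios = {"BI": 1.0, "CRM": 0.4, "SCM": 0.3, "SRM": 0.2, "XI": 0.1}  # placeholders
    ecc_future = ecc_prd_tb + growth_tb_per_year * years
    total = ecc_future
    for name, r in ratios.items():
        size = ecc_future * r
        total += size
        print(f"{name}: ~{size:.2f} TB")
    print(f"PRD total in {years} yrs: ~{total:.1f} TB; incl. QA/TST copies: ~{total * copies:.1f} TB")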

  • What's the best storage solution for a large iLife? RAID? NAS?

    I'm looking for an affordable RAID storage solution for my Time Machine backups, iTunes library, iMovie videos, and iPhoto library. Up to this point I've been using a hodgepodge of external hard drives without the safety of redundancy, and I've finally been bitten by HD failures. So I'm trying to determine the best recommendation for my scenario: a small home office for my wife's business (just her), and me with all our media. I currently have a mid-2010 Mac Mini (no Thunderbolt); she has an aging 2007 iMac and a 2006 MacBook Pro (funny that they're all about the same benchmark speed). We have an AppleTV (original), an iPad 2, and two iPhone 4Ss.
    1st question: Is it better to get a RAID enclosure and connect it to my AirPort Extreme Base Station USB port as a shared disk? Or to connect it directly to my Mac Mini and share through Home Sharing? Or should I go with a NAS RAID?
    2nd question: Simple is better. Should I go with a Mac Mini Server and connect drives to it (converting my Mac Mini into a server), or should I just get one of those nice all-in-one 4-bay RAID enclosures that I can expand?
    Requirements:
    1. Expandable and upgradeable. I don't want something limited to 2 TB drives; as drives get bigger and cheaper I want to easily throw one in without concerns.
    2. Simple integration with Time Machine and my iLife apps: iTunes, iMovie, iPhoto. If iTunes' Home Sharing feature is currently the best way of using my media across multiple devices, then why mess with it? I see "DLNA certified" on some devices and wonder if that would just add another layer of complexity I don't need; one more piece to keep compatible.
    3. Inexpensive. I totally believe in the "you get what you pay for" concept, but I also realize sometimes I'm buying marketing, not product. I imagine that to start, I'll want a diskless system (because of $$$) to throw all my existing drives into, and then upgrade to bigger drives as my data and funds grow.
    4. Security. I don't know if it's practical, but I like the idea of being able to pop two drives out, put them in my safe, and pop them back in once a week for the backup/mirroring. I like this idea because I'm concerned that on-site backup is not always the safest. Unfortunately, those cloud-based services aren't designed for terabytes of raw family video, or for an entire media library that isn't wholly from the iTunes Store. I can't be the only one facing this challenge; surely there's an affordable way for the average Joe to keep a safe backup. But what is it?
    5. Not WD. I've had bad experiences with Western Digital drives, and I loathe the consumer backup software that comes preloaded on their external drives. They are what I mean when I say you get what you pay for: prettily packaged garbage.
    6. Relatively fast. I have put all my media on an external drive before (back when it fit on one drive), and there's noticeable spool-up hang time. Thunderbolt is nice and all, but so new that it's not widely available across devices, nor is it cheap. eSATA is not really an option. I love FireWire, but I'm getting the feeling that Apple has made it the red-headed stepchild of connections. USB 3.0 looks decent, but like eSATA, Apple barely acknowledges it exists. Where does that leave us? Considering this dilemma, I really liked Seagate's GoFlex external drives, because they meant I could always buy a new base and stay compatible; but that only works with single drives. And as impressive as Seagate is, we can't expect them to keep doubling drive sizes every two years like they have been, cool as that may be.
    So help me out without getting too technical. What's the best setup? Is it Drobo? Thecus? ReadyNAS? Seagate's BlackArmor? Or something else entirely?
    All comments are appreciated. Thanks in advance.

    I am currently using a WD 2 TB Thunderbolt hard drive for my iTunes library, which I love and which works great. It is connected directly to my MacBook Pro. I am running low on space and thinking of buying a bigger hard drive. My question is: should I buy a 6 TB Thunderbolt HD, or a 6 TB NAS drive, to work solely for iTunes? I have Home Sharing enabled for my Apple TV.
    I also have my Time Capsule connected, as backup only.

  • Can I assign several physical storage locations for each virtual machine when using the replication-feature from Hyper-V 2012 R2?

    Hi everyone,
    I have 2x physical servers running Hyper-V 2012 R2. Each hosts several virtual machines. The VHDs of the VMs are stored on several dedicated physical disks for a performance boost. For example, if VM A has two VHDs attached, I made sure the VHDs are on different physical disks so they do not slow each other down during intensive disk access.
    So far so good. I was looking forward to the replication feature. The idea is to have the two physical servers replicate their primary running VMs to each other. I was hoping to be able to choose, for each individual VM, where the replicated VHD would be stored. But instead, I can only see the one location/path that is configured in Hyper-V Manager when I activate the replication feature on the server.
    Is there by any chance a way to select the storage location for each VHD/VM when using the replication feature of Hyper-V 2012 R2?
    Thanks in advance.
    Cheers,
    Sebastian

    Secondly, you could replicate different VMs to different storage locations to achieve some of the disk balancing you are after. Lastly, you could copy the VHD file to a different location before starting the VM.
    .:|:.:|:. tim
    Hi Tim,
    thanks for the reply. Sorry, I had some other tasks to take care of, so I wasn't paying enough attention to this thread.
    The part I quoted from your reply sounds exactly like the action I'd like to perform, but as you pointed out before, this should not be possible.
    How can I perform the action you mentioned secondly (replicating each VM to its own storage location)? To sum it up again:
    2x physical machines carrying several HDDs
    8+ VMs spread across the 2x servers
    When setting up replication, I can only set the storage location from server A on B, and vice versa, B on A.
    Thanks again for your reply.
    Cheers,
    Sebastian

  • Best storage solution for collaborative editing between two editors

    Hello!
    I was wondering what the recommended hardware storage solution would be for collaboratively editing Final Cut Pro X libraries. Our current process revolves around copying all the video footage onto a portable drive and carrying it over to each other's machines. This is time-consuming, and the process breaks down if one of us isn't in the office. We are working only with 1080p (and lower) footage, and both machines are MacBook Pros (both with Thunderbolt connections).
    I've dug around the internet and found network solutions (setting up another machine as a server with the storage drive attached), but that setup seems like a lot of work.
    Is there another way to do it, with a single shared drive and no additional computer setups, that would allow us to edit videos concurrently? I'm not looking for a scalable solution, just something that lets us edit from the same pool of footage at the same time.
    Thanks,
    Aaron

    If you want to edit concurrently, you are looking for a networked solution: either a drive on a server, like you mentioned, or a NAS (a drive connected directly to the network) or a SAN (storage area network, which is a more sophisticated, professional setup - that would likely be overkill in this case).
    In any case, you need a fast network - at the very least gigabit Ethernet. NAS options exist at a decent price.
    In this kind of setup, you would keep all your media on the NAS, work on your libraries locally, and use "external media": the media stays in its designated places on the NAS, and the libraries point to it. The libraries will stay fairly small, and you can pass them around.
    In any case, you would share the media concurrently, NOT the libraries.
