Real world experience: Is FCE 3.5 usable on a MacBook?

I've got a college freshman looking for a MacBook. He uses FCE HD and will be upgrading to FCE HD 3.5, but is it usable on a MacBook? Has anyone actually used it?
The graphics hardware does not meet the minimums specified by Apple, if I read them correctly. The Apple Store definitely made me aware of it.
Any first- or second-hand knowledge appreciated. Thanks, Mark

Apple support is still maintaining that Apple does not support FCE on the MacBook. They also stated that they can't get it to run. I personally don't think they're telling the truth about not being able to get it to run. If they are, then everyone who says it loads and runs great is not telling the truth.
I'm going to check the Intel specs, but other than memory I thought the 950 behaves the same as an AGP card. Several people have stated that System Profiler does show it's compatible with Quartz Extreme. I plan to check this out myself (see the sketch below).
Apple should make an open statement that they don't support it and also tell us why. No BS, Apple, just justify why you will not support it.
All of that said, has anyone used FCE 3.5 on a MacBook with an external display?
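For what it's worth, here is a minimal sketch of the check I'm describing, assuming macOS's built-in system_profiler tool; the exact wording of the Quartz Extreme line varies between OS X versions, so treat it as an illustration of what to look for rather than a definitive test.

    # Hypothetical check: ask System Profiler about the graphics hardware and
    # report the Quartz Extreme line, if any. Output wording varies by OS version.
    import subprocess

    def quartz_extreme_status():
        out = subprocess.run(
            ["system_profiler", "SPDisplaysDataType"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            if "Quartz Extreme" in line:
                return line.strip()
        return "No Quartz Extreme entry found -- check System Profiler manually"

    if __name__ == "__main__":
        print(quartz_extreme_status())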

Similar Messages

  • Anyone have some real-world experience with clustering and RMI?

    We are getting ready to do some stress testing on a clustered WLS RMI environment. Does anyone have any real-world experience with this type of environment? I'm looking for information related to performance (transactions per second), how it scales as you add new instances (does it get to a point where another instance actually degrades performance?), and how easy it is to administer the cluster.
    Any input would be greatly appreciated,
    thanks,
    Edwin

    Edwin:
    You might also want to refer to my earlier posting in this newsgroup re: setting an optimal value for executeThreadCount and percentSocketReaders on the client side in a cluster setup. Look for the subject "variable performance in a cluster".
    Srikant.

  • Real-world experience with Exchange 2010 SP3 RU5+ and Powershell 4?

    The support-ability matrix for Exchange (http://technet.microsoft.com/en-us/library/ff728623(v=exchg.150).aspx) says Exchange
    2010 SP3 RU5+ and Powershell 4 are compatible.  But, there is very little actual discussion about how well that works. 
    I use Powershell extensively for mission critical and somewhat complex processes, with Exchange 2010 on 2008 R2 and AD access/reads/updates. 
    Can I get a summary of the caveats and benefits from someone who has actually done this in a
    real-world/production scenario (more than one server, managing from a separate non-Exchange server), and who has scripting experience with this configuration?  
    Also, how has this affected EMC operations?  
    As always thank you in advance!  

    I believe the matrix states that it's supported to install Exchange into an environment where __ version of WMF is present.  Exchange 2010, launched from a Win 2012 server, reports version 2.0 when you call $host.  For example, calling the ActiveDirectory module from EMS on a Win 2012 server (PS 3.0) fails.
    I'll double check the extent of this scenario and get back to you.
    Mike Crowley | MVP
    My Blog --
    Planet Technologies

  • Real World Experience - TIPS - PAL to NTSC and Authoring DVDs on both

    I have just completed a DVD project.
    It was shot and edited in PAL and then authored to both PAL and NTSC DVDs (PAL for Europe, NTSC for Japan and the USA).
    Standards Conversion:
    I used both Compressor and Graham Nattress' excellent G (for Graham?) Standards Converter.
    Compressor was a fast workaround for a quick PAL to NTSC conversion at OK quality. Since my footage is very demanding (lots of handheld heli shots, water-housing footage), I imagine it was giving the converters a real workout.
    The Nattress G converter was much slower, but the results were as close to perfection as I would imagine a software solution can get. Obviously the extra render time was working extra hard to get the conversion looking good.
    For anyone going down this path: for your master you just have to use the Nattress converter. Thanks Graham - best $100 I have spent this year.
    MPEG-2 encoding
    I did this using Compressor and it worked well.
    Here is the rub: I THOUGHT Compressor was not doing my MPEG-2 conversion at acceptable quality - it was actually the lack of quality out of Compressor's fast-and-easy PAL to NTSC solution that was giving me banding on moving objects and color issues.
    I suspect this may be where some people think they are coming unstuck with Compressor. (Go back and check your NTSC conversion relative to your original PAL quality.)
    I hope this saves anyone else in my situation some time.
    In summary: when working in PAL (i.e. you are shooting and editing in Europe or Australia), your workflow should be to shoot and edit in PAL, convert with Nattress, and encode with Compressor (Compressor 2.1 - I use 7.7 Mbps max, 6.0 Mbps average, 2-pass variable, with AC3 for audio) - which has actually been working well for me (despite various threads about issues with Compressor).
    Hope this comes up nicely for anyone doing a search. Thank you to everyone who has helped me on the way, and good night!

    Rory85 wrote:
    Ok cool, thanks Stan.
    Re-doing the Encore stuff, as much as that would/will suck, it's not my worst fear... the thing that scares me the most is the thought of re-doing all of the Premiere work again. If an NTSC Encore project can convert a PAL Premiere working file to an NTSC DVD, I can live with re-working the Encore stuff. And yeah, Dynamic Link takes care of all of the chapter markers, so that wouldn't be a big issue - it'd just be a matter of re-building the menus.
    But yeah, if anyone has a definite yes or no answer on whether or not Encore can turn a PAL Premiere project into an NTSC DVD, I'd love to know. The person I'm doing the work for doesn't want to spend thousands of dollars getting PAL DVDs printed etc. unless we absolutely know it's going to be a complete re-start to get it to NTSC.
    Thanks again,
    Rory
    Hi Rory.
    It's not possible in Encore or Premiere.
    The conversion process is a complex one & consists of changing the actual resolution from 720x576 to 720x480, which gives a very different shaped pixel as well.
    Then it also changes the frame rate of the footage from 25 to 29.97.
    The easiest way to do this with Adobe tools is to use After Effects, making sure that the AE render is locked to the duration of the composition - this is critical, or you will end up with footage at the wrong speed. All menus will need to be rebuilt.
    Sorry I cannot give you better news.
    As a general rule of thumb, if in doubt, create & author in NTSC, as there are almost no PAL setups that cannot output either pure NTSC or PAL-60, yet there are very few that can go the other way.
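    To put rough numbers on that explanation, here is a small illustrative sketch (mine, not from the thread) of how the pixel shape and frame timing change between PAL DV and NTSC DV. It uses the simple 4:3 frame geometry; the exact ITU-R BT.601 sampling figures differ slightly.

        # Rough arithmetic behind a PAL -> NTSC standards conversion (illustrative only).
        # Pixel aspect ratio here is derived from simple 4:3 display geometry;
        # broadcast specs (ITU-R BT.601) use slightly different sampling values.

        def pixel_aspect(width, height, display_aspect=4 / 3):
            """Pixel aspect ratio needed for width x height to fill a 4:3 display."""
            return display_aspect * height / width

        standards = {
            "PAL":  {"size": (720, 576), "fps": 25.0},
            "NTSC": {"size": (720, 480), "fps": 30000 / 1001},  # ~29.97
        }

        for name, std in standards.items():
            w, h = std["size"]
            print(f"{name}: {w}x{h}, pixel aspect ~{pixel_aspect(w, h):.3f}, "
                  f"{std['fps']:.3f} fps, frame duration {1000 / std['fps']:.2f} ms")

        # The converter has to resample 576 lines down to 480 (a differently shaped
        # pixel) and synthesise roughly 5 extra frames per second -- which is why
        # render times balloon and why a quick conversion shows banding on fast motion.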

  • Check Point 620 Wired Appliance: Real World Throughput and Experiences

    I am considering a purchase of the Check Point 620 wired appliance.   If you have used the 620, I would like to request some help:
    1. What real-world throughput have you seen with all blades active?  The internet speed at our site is 120 Mbps
    2. What has your experience been -- good and bad?  Anything notable?
    Thanks!
    This topic first appeared in the Spiceworks Community


  • PI Implementation Examples - Real World Scenarios

    Hello friends,
    In the near future, I'm going to give a presentation to our customers on SAP PI. To convince them to use this product, I need examples from the real world that have already been implemented successfully.
    I have made a basic search but still don't have enough material on the topic; I don't know where to look, actually. Could you post any examples you have at hand? Thanks a lot.
    Regards,
    Gökhan

    Hi,
    Please find the links below:
    SAP NetWeaver Exchange Infrastructure Business to Business and Industry Standards Support (2004)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90052f25-bc11-2a10-ad97-8f73c999068e
    SAP Exchange Infrastructure 3.0: Simple Use Cases
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20800429-d311-2a10-0da2-d1ee9f5ffd4f
    Exchange Infrastructure - Integrating Heterogeneous Systems with Ease
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1ebea490-0201-0010-faad-a32dd753d009
    SAP Network Blog: Re-Usable frame work in XI
    /people/sravya.talanki2/blog/2006/01/10/re-usable-frame-work-in-xi
    SAP NetWeaver in the Real World, Part 1 - Overview
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20456b29-bb11-2a10-b481-d283a0fce2d7
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3172d290-0201-0010-2b80-c59c8292dcc9
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9ae9d490-0201-0010-108b-d20a71998852
    SAP NetWeaver in the Real World, Part 4 - SAP Business Intelligence
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1f42d490-0201-0010-6d98-b18a00b57551
    Real World Composites: Working With Enterprise Services
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0988960-b1c1-2a10-18bd-dafad1412a10
    Thanks
    Swarup

  • New Enterprise Manager Book on Advanced EM Techniques for the Real World

    Dear Friends,
    I am pleased to say my first EM book can be ordered now.
    Oracle Enterprise Manager Grid Control: Advanced Techniques for the Real World
    [http://www.rampant-books.com/book_1001_advanced_techniques_oem_grid_control.htm]
    Please let your colleagues, friends, and clients know – it is the first book in the world to include EM 11g Grid Control. It is a great way for people to understand the capabilities of EM.
    Oracle’s Enterprise Manager Grid Control is recognized as the IT Industry’s leading Oracle database administration and management tool. It is unrivalled in its ability to monitor, manage, maintain and report on entire enterprise grids that comprise hundreds (if not thousands) of Oracle databases and servers following an approach that is consistent and repeatable.
    However, Enterprise Manager Grid Control may seem daunting even to the most advanced Oracle Administrator. The problem is you know about the power of Enterprise Manager but how do you unleash that power amongst what initially appears to be a maze of GUI-based screens that feature a myriad of links to reports and management tasks that in turn lead you to even more reports and management tasks?
    This book shows you how to unleash that power.
    Based on the author's considerable and practical Oracle database and Enterprise Manager Grid Control experience, you will learn through illustrated examples how to create and schedule RMAN backups, generate Data Guard standbys, clone databases and Oracle Homes, and patch databases across hundreds and thousands of databases. You will learn how you can unlock the power of the Enterprise Manager Grid Control Packs, Plug-ins and Connectors to simplify your database administration across your company's database network, as well as the management and monitoring of important Service Level Agreements (SLAs), and the nuances of all-important real-time change control using Enterprise Manager.
    There are other books on the market that describe how to install and configure Enterprise Manager but until now they haven’t explained using a simple and illustrated approach how to get the most out of your Enterprise Manager. This book does just that.
    Covers the NEW Enterprise Manager Grid Control 11g.
    Regards,
    Porus.


  • Real World Item Level Permission Performance?

    I am considering implementing item-level permissions on a list we use. I've seen all the articles online cautioning not to do this with lists of more than 1000 items, but the articles seem to have little detailed information about the actual impact and what causes the performance issues. Additionally, they seem to refer to document libraries more than lists. I'd like some feedback about what might occur if we were to use item-level security in our situation.
    Our situation is this: a list of currently ~700 items in a SharePoint list, expected to grow by around 700 items per year. The list has about 75 fields on it. We have 8 Active Directory groups that have access to the list, based upon company department. Each item in the list can apply to one or more departments. The groups represent around 100-150 different unique users.
    We would like to use item-level security, set via workflow, to enable particular groups to access the item based upon their group membership. For example, if the list item is for the HR department, then the HR group has access. If the item is for IT, then the IT group has access (and HR wouldn't).
    That's it. There would be no nesting of items with multiple permission levels, no use of user-level ACLs on the items, etc.
    Thoughts about this configuration and expected performance issues?  Thanks for any feedback!

    Just an update for anyone who finds this thread:
    I converted our data into a test SharePoint list with 1500 rows. I then enabled full item-level security, with restrictions to hide data not created by the person.
    I then set individual permissions for each item that included 2-3 AD groups with different permissions (contribute, full ownership, etc.) and 2-3 individuals with varying permissions. The individuals represented around 50 total people.
    After the permissions were set, I did a comparison of loading individual views and the full data set in Standard and Datasheet views, both as an administrator with full list access and as several of the individuals who only had access to their designated items (typically 75-100 of the total list).
    The result was that I found no discernible difference in system performance at the user-interface level while loading list views after the item-level security was configured this way. I understand this will vary based upon hardware configuration and exact permission configuration, but in our situation the impact of item-level security on a list of 1500 items had very little, if any, negative performance impact. Note that I didn't check performance at the database server level, but I'm assuming the impact there was minimal since the front-end user experience was unaffected.
    I expect we'll put this solution into place, and if we do I'll update this post when we have additional real-world usage information.

  • LAP to WLC - Maximum RTT Delay Limits in the real World!

    Hi,
    I've seen the design-related info that says the maximum RTT between the LAP and its WLC has to be less than 300 ms.
    I'm working on a solution where a number of the APs will be across WAN links where the RTT is from 250 ms to 323 ms. Does anyone have experience of the types of problems that occur when the 300 ms is exceeded, or indeed is there a real-world RTT value that we should work to rather than the stated values? Has anyone installed APs where the RTT is greater than 300 ms, and is it reliable? I ask as I can find no references to this on Cisco.com or in the CAPWAP RFC, and our competitors (in the bid) using non-Cisco products are very clear that this is not a problem for them.
    Any guidance much appreciated as usual.
    Thanks. Phil.C

    IMO
    Voice issues would be the biggest concern.  Part of the reason we keep the 300 ms RTT limit is that you need 150 ms RTT for voice to work cleanly.  While the AP may stay connected at 323 ms, you will most likely have some voice issues.
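    If you want a quick sanity check of a candidate WAN path against those numbers before deploying APs, timing a TCP handshake toward the controller gives a ballpark round-trip figure (one connect is roughly one RTT). A minimal sketch; the hostname and port below are placeholders, not anything from this thread:

        # Rough RTT probe for an AP-to-WLC path (illustrative only; hostname and
        # port are placeholders). A TCP connect takes about one round trip, so
        # timing it gives a ballpark figure to compare against the 300 ms CAPWAP
        # guidance and the ~150 ms budget usually quoted for voice.
        import socket
        import time

        def tcp_rtt_ms(host, port=443, timeout=2.0):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=timeout):
                pass
            return (time.monotonic() - start) * 1000

        if __name__ == "__main__":
            samples = [tcp_rtt_ms("wlc.example.com") for _ in range(5)]
            print("RTT samples (ms):", [round(s, 1) for s in samples])
            print(f"worst case: {max(samples):.1f} ms (design guidance: < 300 ms)")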

  • Flex and large real world b2b applications

    Hi all.
    I am new to Flex. I am acting in an architect capacity to review the potential in Flex to become the client presentation layer for a classic ASP/SQL application. I am seeking a cross-browser, cross-platform, zero-installation, just-in-time-delivery, rich user experience web application client. I think I'm in the right place.
    My aim in creating this post is to solicit feedback on approaches and techniques on how to plan and execute a major-system re-write into Flex, what is in scope and what is not, etc. With the Flex team putting the final touches into release 1.0, this might be considered a bit too soon to ask these questions, but worthy of response if Flex is to be taken as anything more than ‘something to do with that flash games thing that my kids play on’.
    One of the key issues for my client company, which I believe will be typical for many in the same situation, is to retain current investment in the system by re-use of the business logic and DB access layers. Basically the back-end is not broken and is well prepared for generation of XML instead of HTML. What is considered weak, by nature of the web browser's poor user interface abilities, is the client side of the system.
    The company has a small, loyal and technically able workforce who are very familiar with the current system, which is written using classic ASP and SQL Server, HTML and JavaScript. The company is risk and runaway-cost averse. It has held back from jumping into .Net for fear of getting into a costly 5-year revision cycle as .Net matures. The AJAX approach is another potentially fruitful client-only route but is likely to be a painful way forward, as there is no major technology vendor leading the charge on standards. A Java approach would meet the user interface improvement needs but would require replacing or retraining the current workforce and a paradigm shift in the technology, losing all of the middle-tier database logic.
    The ideal is a stable zero-installation web client that can communicate with a server back-end consisting of the same, or slightly modified or wrapped, business logic and DB configuration as the current system, and that could then run side-by-side during switchover.
    If this is possible then risk-averse organisations have a way forward.
    The problem is that, from several docs and articles on Adobe's web site, there seems to be some careful but vague positioning of the capability of Flex in terms of application complexity and depth. Also, the demos that are available seem to be quite lightweight compared to real-world needs. These apps ‘seem’ to work in a mode where the entire application is downloaded in one hit at user initiation. The assumption is that the user will be prepared to pay some wait time for a better UX, but there must be a limit.
    Question: How does one go about crafting in Flex what would have been a 300-page website when produced in HTML? Is this practical? To create a download containing the drawing instructions and page logic for 300 pages would probably cause such a delay at user initiation that it would not be practical.
    There are many further questions that span from here, but let's see what we get back from the post so far.
    Looking forward to reading responses.
    J.

    You're absolutely in the right place (IMO)...
    Cynergy Systems can help you get started. We are a Flex Alliance partner, and have developed some extremely complex RIAs with Flex.
    Contact our VP of Consulting - Dave Wolf - for more info: [email protected]
    Paul Horan
    Cynergy Systems, Inc.
    Macromedia Flex Alliance Partner
    http://www.cynergysystems.com
    Office: 866-CYNERGY

  • Routing real world IP to computer on internal Network

    I have an OS X 10.4 Server providing DHCP and NAT to a private subnet (192.168.2.x), and I was wondering if anyone had any experience in routing a real-world IP to one of the computers on that network. I have several additional real-world IPs given to me by the DSL service provider, and I wanted to route traffic from one of them to one of the machines. Is this even possible?

    Well, I am familiar with port forwarding, but the situation is this:
    The ISP gives me IP numbers A, B & C.
    The Xserve has IP A and provides NAT & DHCP service to a network of Macs.
    I want to be able to assign IP B to one Mac on that network.
    Is that possible, or am I going to have to run a cable from the DSL modem to the Mac that I want to give IP B to, bypassing the server?
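    Purely to illustrate the idea of "traffic arriving for one outside address ends up at one internal machine" (this is not the server's NAT/port-forwarding service, and the addresses below are placeholders), a toy TCP relay looks like this:

        # Toy TCP relay: accept connections on the outward-facing side and copy
        # bytes to and from an internal host. Illustrative only -- a real setup on
        # OS X Server would use NAT/port-forwarding rules rather than a relay.
        import socket
        import threading

        PUBLIC_BIND = ("0.0.0.0", 8080)   # where outside traffic arrives (placeholder)
        INTERNAL = ("192.168.2.10", 80)   # LAN machine that should receive it (placeholder)

        def pump(src, dst):
            """Copy bytes one way until the connection closes."""
            try:
                while chunk := src.recv(4096):
                    dst.sendall(chunk)
            finally:
                dst.close()

        def serve():
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind(PUBLIC_BIND)
            listener.listen(5)
            while True:
                client, _ = listener.accept()
                upstream = socket.create_connection(INTERNAL)
                threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
                threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

        if __name__ == "__main__":
            serve()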

  • First of many Server 2003 to 2012 R2 Migration - Real World Advice

    Hey everyone,
    Performing my first legacy server migration and just looking for some good "real-world" pointers on how to approach it. Since I am planning on seeing a lot of these projects in the next year or so, I was wondering who is willing to share some approaches, tools, experiences, etc.
    I would especially like to know how everyone handles application compatibility!

    Hello,
    please provide more details about the machines to migrate.
    Is this about a domain or a workgroup?
    Which client OS versions run in the network?
    What about trusts to other domains?
    Applications: you have to test each one on the new OS version and also contact the vendors BEFORE migrating to confirm it is supported.
    If old hardware is reused, drivers must be checked to see if they exist for the new versions.
    Licensing is a question; for example, Remote Desktop has changed with RD CALs, which must be bought new.
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://msmvps.com/blogs/mweber/
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Real world battery run time of new 15" mbp retina?  (on-board graphics vs. video upgrade)

    Considering a new MacBook Pro 15" Retina to use as a dedicated DAW in the studio. Primarily audio production work, but it would also do some video editing (YouTube videos, etc.). It will be used primarily in the studio but 10-15% in the field (hence the MBP rather than a desktop). I'm looking at the top-end 15" Retina maxed out, but I'm not sure if I need or want the NVIDIA GT 750M graphics card.
    My concern is battery run time with the graphics card. I know Apple's site claims 8 hours for the 15" Retina, but their footnote states that's on a pre-production unit without the graphics upgrade. I've read multiple independent reviews of the version with the graphics upgrade citing real-world run times of 3 hours with "casual use". Yikes!
    So I'm looking for real-world run time for the latest generation 15" Retina with the graphics upgrade vs. the Iris Pro option. Anybody doing audio/video work with these two options care to share your experience? I don't think the DAW software (Logic Pro X) needs the graphics card, and I'd be fine with the Iris Pro on-board graphics for audio work. Video editing would of course benefit from the card (2 GB graphics memory). It's only a $100 upgrade to add the card, so it's not a financial question. If real-world run time is twice as long without the graphics card, I'll go for run time. But if it's "4 hours Iris Pro and 3 hours NVIDIA card", I'll probably go for the processing power and suffer.
    Thanks!


  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: Partners with Verizon for Software as a Service Colocation and integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards is paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives them a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (both in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because NPS stores personal, identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility both located in Sydney. This satisfies the requirement for three copies of critical data and allows them to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.

    Dave:
    "David Scarani" <[email protected]> wrote in message
    news:3ecfc046$[email protected]..
    >
    I was looking for some real world "Best Practices" for deploying J2EE applications into a production WebLogic environment.
    We are new at deploying applications to J2EE application servers and are currently debating 2 methods.
    1) Store all configuration (application as well as domain configuration) in properties files and use Ant to rebuild the domain every time the application is deployed.
    I am just a WLS engineer, not a customer, so my opinions have in some regards little relative weight. However, I think you'll get more mileage out of creating your config.xml once, checking it into source control, and versioning it. I would imagine that application changes are more frequent than server/domain configuration, so it seems a little heavyweight to regenerate the entire configuration every time an application is deployed/redeployed. Either way, you should check out the wlconfig Ant task.
    Cheers
    mbg
    2) Have a production domain built one time, configured as required and always up and available, then use Ant to deploy only the J2EE application into the existing, running production domain.
    I would be interested in hearing how people are doing this in their production environments, and any pros and cons of one way over the other.
    Thanks.
    Dave Scarani

  • Real World Adobe Photoshop CS3 (Real World)

    Real World Adobe Illustrator CS3 (Real World) - Mordy Golding;
    Real World Adobe Photoshop CS3 (Real World) - David Blatner;
    are these books at a HIGHER LEVEL than the "Classroom in a Book" series?

    > but the part about DNG has convinced me to dive deeper in it and give it a go
    When working in a Bridge/Camera Raw/Photoshop workflow, I tend to ingest the actual native raw files, do initial selects and gross edits and basic metadata work via templates and THEN do the conversion to DNG. I'll use the DNG as my working files and the original raws as an archive. I tend to do this more with studio shoots. I tend to use Lightroom when I'm on the road.
    When working in Lightroom first, I tend to ingest and convert to DNG upon ingestion (when on the road working on a laptop) while using the backup copy, usually working on a pair of external FW drives: one for working DNG files and one for backup of the original raws. Then, when I get back to the studio, I make sure I write to XMP, export the new shoot as a catalog, and import it into my studio copy of Lightroom. Then I'll also cache the newly imported images in Bridge as well, so I can get at the image either in Bridge or Lightroom.
    It's a bit of a chore now since I do work in Camera Raw a lot (well, DOH, I had to, to do the book!), but I also keep all my digital files in a Lightroom catalog which is now up to about 74K...
    Then, depending on what I'll need to do, I'll either work out of LR or Bridge/Camera Raw...
    If I'm doing a high-end final print, I generally process out of Camera Raw as a Smart Object and stack multiple layers of CR processed images...if I'm working on a batch of images I'll work out of Lightroom since the workflow seems to suit me better.
    In either event, I've found DNG to be better than native raws with sidecar files.
