Render farm between multiple OSes

Hi All,
I have a question about setting up a render farm for After Effects CC. Basically my main computer runs OS X Mavericks, and I have two more Windows computers that I would like to have help out with rendering After Effects compositions. I have set up one of the machines with the After Effects CC render engine, and the Mac and the other Windows machine have After Effects CC installed as part of the 2-machine license.
When I set up a watch folder render from the first Windows machine to the second one, everything works fine, but what I want to do is set up the render from my Mac to both Windows machines, which are monitoring a network watch folder.
Permissions are seemingly all in order, but if I set it up from the Mac, both PCs give me the message: "Render control file not valid".
I have searched all over the Internet for a solution, but there seems to be nothing out there. I also suspected it might have something to do with the way a Mac does line endings in files compared to a Windows machine, so I rewrote the RCF file on the Windows machine itself, making sure it was in the right format, but this did not work either.
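For anyone trying the same line-ending experiment, the conversion on the Windows side can be sketched in PowerShell roughly like this. The path is a placeholder, and this assumes CRLF line endings are actually what the render engine expects, which is only a guess:

# Minimal sketch: convert a text file's line endings to Windows CRLF.
# The file name below is a placeholder; substitute the actual render control file.
$path = "\\server\watchfolder\ProjectName_RCF.txt"
$text = [System.IO.File]::ReadAllText($path)
$text = $text -replace "`r`n", "`n" -replace "`r", "`n"   # normalize everything to LF first
$text = $text -replace "`n", "`r`n"                        # then convert LF to CRLF
[System.IO.File]::WriteAllText($path, $text)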
Any help?
With warm regards,
Daniel

Similar Messages

  • Render Farm Possible in Premiere?

    I multi-machine render in AE all the time. (For those who don't know, that's using several different computers to render the same project faster.)
    Can you do this in Premiere? The closest solution I've come up with is rendering out TIFF sequences and assigning frames to 3 computers like this:
    Computer 1:  Frames 1-2000
    Computer 2:  Frames 2001-4000
    Computer 3:  Frames 4001-6000
    But if computer 1 finishes first, it doesn't go on and help out its buddies, it just sits there like the last intern I had waiting to get fired and not looking around for anything to do, like move the 30 c-stands and sandbags to the grip truck instead of drinking pop and watching all of us work...
    Oh yeah, render farming.
    In AE you can click "skip existing files" (or something to that effect) so that if AE sees that frame 1001 is done already, it automatically skips to frame 1002, and on and on until every last frame is processed.
    Anything like this in Premiere that I've HOPEFULLY just been overlooking?
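    For reference, the "skip existing files" idea described above boils down to something like the sketch below. The paths, frame count, and the Render-Frame function are placeholders for illustration, not real AE or Premiere commands:

    # Each machine walks the full frame range and only renders frames whose
    # output file is not already present in the shared output folder.
    $outputDir   = "\\server\renders\shot01"
    $totalFrames = 6000

    function Render-Frame([int]$frame) {
        # placeholder for whatever actually renders one frame to a TIFF
        Write-Host "rendering frame $frame"
    }

    foreach ($f in 1..$totalFrames) {
        $file = Join-Path $outputDir ("frame_{0:D5}.tif" -f $f)
        if (-not (Test-Path $file)) {
            Render-Frame $f
        }
    }

    In practice you also need something to keep two machines from grabbing the same frame at the same instant, which is part of what AE's own option takes care of.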

    Here's the straight poop,
    I don't know if they'll ever get render farming up and running for anything outside of 3d effects, but the reason it hasn't been built yet is the same reason render farming took so long to develop in the first place: the workflow.
    Render farming doesn't output a single file.  Why?
    It uses the FRAMES and outputs a number of them from each machine in the farm, all to the same location (on any chosen machine). However, these FRAMES are simply picture files. That's right guys, just single picture files. The same is true of video files, really: they are made up of nothing but picture files being rendered to the screen at super fast speeds, but all bundled together. In compressed video, you lose some of the quality of the images in favor of saving speed and space, but with lossless codecs you are actually processing every frame (every single picture). This means that you'd have to add that processing step to the workflow for render farming.
    Steps originally:
    Comp A process these files as images from frame x to frame y and store in folder z
    Comp B process these files as images from frame v to frame w and store in folder z
    and so on for every machine
    Added steps for video rendering:
    comp A frames x to y scan and compress to file a
    comp B frames v to w scan and compress to file b
    and so on through each machine until all are finished
    finally: selected comp combine files a to (whatever)
    However, with this function you may get extra I-frames or B-frames, and you'll end up with jitter in the final product. For this reason, the logic holds that instead of processing the video compression on multiple machines and combining files at the end, full video render farming should use a DATA PROCESSING BALANCE algorithm, allowing one machine to pass data between the processors of other machines while handling the file build on its own. Separating the file on the fly (as it is being processed) is a more useful tactic. Start with I-frame A and I-frame B and send the processing to machine A, take I-frame B and I-frame C and send them to machine B, and so on and so forth. This allows there to be I-frame overlap, and all the writing is still done by a single machine, while the processing is done across several.
    Of course, if you use the first set and change it so that the I, B and M frames overlap, then match them when putting the files together into one, you could, in theory, process it much the same as standard render farming. The second method is only faster because all the processing would happen on the fly and the file would be one file right away, removing the step of putting several files together; it wouldn't matter which sections were finished first, how many machines there were, or how many frames overlapped, as the overlapping frames would simply overlap and all the others would fall into place. The only bottleneck would be drive speed and fragmentation of the file, which could be a last step in finalization or an intermediate step along the way. If you use a dedicated and allocated space for the storage, on the fly is the way to go; it also works if you simply run a defrag pass over the file when a section overlaps, thereby moving other sections forward or backward in storage so they line up properly; however, this could just as easily work at the end of the whole process.
    From a logical standpoint, it works. From a programming standpoint, it's a lot of work, and it's hard on a system. Personally, farming out a few effects and then wrapping them in a lossless file for use while editing is easy enough. And the quality of the preview is amazing. Rendering out to a single full video file is saved for last or for the end of the day when I go home to my family; that way I don't have to ***** about how slow everything was. I can just relax and let the PMS'ing wife yell about how nothing is going right, let the kids yell "he did this" and "she did that", and enjoy the fact that my computers are more obedient. They do what I say without slamming doors and pouting.

  • After Effects CS6 Render Farm

    Hi All,
    Another Friday, another test of new AE waters...
    I am busy reading up and trying to figure out how to use the three Macs in the office to work together as a render farm. I have gone through the documentation and created a very simple scenario:
    1) Windows XP "server" (it has the shared disk that I use for storage - 2TB spinning disk)
    2) Created a "Watch Folder" on the XP machine
    3) All three Macs can see and access the shared folder
    4) Created a simple project - one 192MB video file (XMP) - applied Keylight and Matte Choker
    5) I did a render on the Mac, using only itself and standard settings, to a TGA sequence in another shared folder on the XP server - this process took 3 minutes and 7 seconds to complete
    6) I re-rendered the same project, with no changes on the Mac, this time sending everything to the watch folder (project saved there, files collected there, etc.)
    7) Once the project starts, it takes 20 minutes.
    Is there a specific reason why the time would increase so drastically rather than decrease?
    The Macs all have one quad-core CPU and 16GB of memory, and they are connected to a gigabit switch (all connecting at 1Gbps). If I check Task Manager on the XP PC, network utilization never goes above 25%.
    Thanks guys,
    Pierre

    Hi Pierre,
    I believe what you may be seeing is a limitation of XP's file sharing. For the sake of simplicity, network rendering treats every core as a user. So depending on which version of XP you are running, it will allow either 5 or 10 users to connect to the computer at a given time. If you have other sharing services enabled (such as printer sharing), any connections to these devices count against this number of users. Given your other thread where you discuss the rendering nodes, if possible you may want to try using Windows 7 (which allows 20 users) or one of the Windows Server products (the maximum number of users is based on CALs).
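    If you want to see how close you are to that cap, a quick check on the XP machine might look like this (run from a command prompt, or from PowerShell if it is installed there):

    # List inbound file-sharing sessions on this machine and count them.
    # XP caps inbound connections at 5 or 10 depending on edition.
    net session
    (net session | Select-String '^\\\\' | Measure-Object).Count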
    Edited:
    A couple of other things came to mind if you are using watch folders.
    - If you are running anti-virus software on the server, ensure the watch folder directory and subfolders are excluded from scanning.
    - Turn off "Render Multiple Frames Simultaneously" in the preferences on the render nodes.
    Hope it helps
    Message was edited by: StevRo

  • Render farm for DCP creation

    We are starting to get more and more requests to make DCPs for clients, and we need to speed up the process. Currently, doing a DCP through the Wraptor AME plugin takes around 20 hours for a feature-length film. It's a bit faster if we do TIFFs through AME and use OpenDCP for the rest, but even then it still takes a while. When we are doing work for festivals etc., where we need to do dozens of films, we simply run out of time.
    Is there a way to use render farms to speed up the AME processing? And if so, any advice about setting it up?

    Rather than a farm, if you can find out from Adobe or the plug-in maker how well the code threads, you might consider a single computer with multiple CPUs, like the Xeon E7 v2 chips that can have up to 15 cores and up to 8 sockets. Here is Harm's picture on his Tweakers page. To run a beast like this you have to use a Windows Server OS, which is not officially supported by Adobe but apparently should work. Of course, the pricing is astronomical: each CPU chip (15 cores, 8-way) is $6,841.
    I would suggest contacting Eric at ADK and seeing if they can build you a super workstation for your processing. If it can be done, they are the ones to do it.

  • Building a "render farm"

    Okay, so after reading the threads here I have Qmaster distributing the tasks between 2 G5 desktop machines, and it's going much faster than with one (it seems like it's about 3x faster).
    Now I want to build a mini render farm to speed things up even more.
    My question is this: do I NEED to use OS X Server and cluster nodes? The engineer at Apple said I would be better off, but I can buy more dual 2GHz G5 desktops with my budget than servers, and it would seem they do the same job.
    I could string 4-5 new G5s together as a start, or 3-4 servers. Not to mention I would have to learn the server OS and everything before getting started.
    Any advice is appreciated

    Power Macs are faster than Xserves, just not as convenient space-wise. You don't need Xserves. Any Mac will work; just don't have a mix of really slow machines and really fast ones, because the slow machines will really slow things down. Let me qualify that: with Compressor, slow machines will really slow things down. Frame renderers like Shake and Maya would be fine when you adjust the segment size.
    A server OS is not needed. In fact, there is no difference between Server and regular OS X, just the admin tools that come with Server. You can run all the "server" services on the consumer version.

  • Is there any point in a G4 render farm?

    Hi all.
    I've been getting quite interested in the idea of setting up a very small render farm lately, as I have the option of getting some old G4s for free.
    I currently have a 2.0GHz DP G5 as my main Mac running FCP2 and Shake, and was already grabbing a G4 466 Digital Audio to set up as a file/print and backup server running Tiger Server, a PostScript RIP and Retrospect. I have the possibility of getting at least one more G4 466 DA and maybe a couple of G4 400 AGPs with Gigabit Ethernet cards. As the main use would be farming out DV-to-MPEG-2 compression via Compressor, plus Shake renders (though not large Shake stuff to begin with, as I'm very new to this), is there any point in looking into this, or am I going to need lots of G4s to make a real improvement in render times?
    Also, is RAM a major consideration? For example, if I got 4 G4s with, say, 512MB of RAM each, would it be better running all four, or using three but nicking the fourth's RAM to give the three-machine setup more RAM?
    Any advice would be appreciated
    Cheers
    Steve

    I vote no. The G4 isn't the problem so much as 400MHz is. The RAM shouldn't be a problem.
    But you could test it if you set up Compressor on one G4. Compress a job. Compress the same job with the same settings on the G5.
    If the G5 is 4 times (or even 3, since time is spent sending the render data over the network between the G4's slow system busses) as fast as the single G4, a render farm will not help you.
    Using the G4 and G5 together in the same cluster does not work because Compressor looks at all processors and divides the job into twice as many segments. In this case, the G5 finishes its two segments years before the G4s do and then sits around waiting. You could try splitting the G5 into 4 instances, but then you have 8 slow processors attacking your job. Not very efficient.
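    To put rough numbers on that reasoning, here is a back-of-the-envelope sketch; the speeds and segment counts are made up purely for illustration:

    # Assume the dual 2GHz G5 is 4x as fast as a single 400MHz G4 and, as in the
    # example above, the job is split into 4 segments with 2 handed to each machine.
    $g5Speed        = 4.0      # relative speed
    $g4Speed        = 1.0
    $workPerSegment = 100      # arbitrary work units
    $segments       = 4

    $g5Time = (2 * $workPerSegment) / $g5Speed          # 50 units
    $g4Time = (2 * $workPerSegment) / $g4Speed          # 200 units
    "Mixed cluster finishes with the slowest node: {0} units" -f [Math]::Max($g5Time, $g4Time)
    "G5 alone: {0} units" -f (($segments * $workPerSegment) / $g5Speed)   # 100 units

    With those made-up numbers, the mixed cluster takes 200 units while the G5 alone takes 100, which is exactly the "G5 sits around waiting" effect described above.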
    Good luck though. You should test the one G4 just so you know. Report back if you do.

  • AE render farm cross platform

    Hi all,
    Just wondering: I have set up an AE render farm on PC (2014, v13.1.1). I know it can render PC AE jobs if a PC sends a job to it, but can it render Mac render jobs?
    rob

    After Effects project files are cross-platform compatible. Things to watch out for are that file paths are valid, and that fonts and plug-ins are installed and working. You should open the project on a Windows machine to check for any errors, especially with fonts. Some fonts may have different names between platforms.
    When using Collect Files, the source file paths are set relative to the project so this is usually not a problem. But you must make sure that the output file paths are also valid; using the Change Render Output To option in the Collect Files dialog can help with this.
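    If you are not using Collect Files and the footage lives on a share, the Mac and the Windows render nodes need paths that both can resolve. Purely as an illustration of that mismatch (After Effects does not run anything like this for you, and the volume and share names are placeholders), a remapping could look like:

    # Hypothetical helper: turn a Mac-style path to shared footage into the UNC
    # path the Windows render nodes would use.
    $macRoot = "/Volumes/Footage"
    $winRoot = "\\server\Footage"

    function Convert-MacPath([string]$macPath) {
        # swap the root, then flip the remaining forward slashes
        $macPath.Replace($macRoot, $winRoot).Replace('/', '\')
    }

    Convert-MacPath "/Volumes/Footage/shot01/clip.mov"   # -> \\server\Footage\shot01\clip.mov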
    More detail here:
    After Effects Help | Automated rendering and network rendering

  • How to achieve no-downtime solution deployment on farms with multiple WFEs and LB

    Taking SharePoint Solution Deployer, my open-source PowerShell deployment script, to the next level: Bill Simser gave me the idea of making the deployment even smoother on farms with multiple WFEs and a load balancer, in order to achieve a no-downtime deployment.
    The basic idea is to deploy the solutions on each WFE one by one (a rough sketch of the whole loop follows the steps below):
    1. Taking one WFE offline
    2. Installing the solution with the -local switch
    # Solution deployment
    Install-SPSolution -Identity <solutionname>.wsp -GACDeployment -CASPolicies -Local
    # Solution upgrade
    Update-SPSolution -Identity <solutionname>.wsp -LiteralPath LocalPathOfTheSolution.wsp -GACDeployment -Local
    3. Run post-deployment actions on the WFE (i.e. restart services, recycle app pools or do an IIS reset, warm up the server), which my script already does for each server
    4. Take WFE online again
    5. Repeat steps 1-4 for all the other WFEs
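    A rough sketch of that loop could look like the following. The server names are placeholders, PowerShell remoting (e.g. with CredSSP) has to be set up for the SharePoint cmdlets to run remotely, and the NLB cmdlets only apply if you use Windows NLB rather than a hardware load balancer:

    # Rolling, per-WFE deployment as described in steps 1-5 above (sketch only).
    $wfes     = "WFE01", "WFE02", "WFE03"     # placeholder server names
    $solution = "<solutionname>.wsp"

    foreach ($wfe in $wfes) {
        # 1. Take the WFE out of rotation (hardware load balancers need their own API)
        Invoke-Command -ComputerName $wfe { Stop-NlbClusterNode -Drain }

        # 2. Install the solution only on this server
        Invoke-Command -ComputerName $wfe {
            Add-PSSnapin Microsoft.SharePoint.PowerShell
            Install-SPSolution -Identity $using:solution -GACDeployment -CASPolicies -Local
        }

        # 3. Post-deployment actions on this WFE
        Invoke-Command -ComputerName $wfe { iisreset }

        # 4. Put the WFE back into rotation
        Invoke-Command -ComputerName $wfe { Start-NlbClusterNode }
    }
    # 5. is the loop itself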
    I am struggling with three things here:
    1. The whole deployment process could be quite risky when something goes wrong in between, and in order to roll back I would require the original solution if it was already deployed before (which I can of course back up before I replace it).
    Anything which involves changing the content DBs should of course be done after the solution is deployed to the whole farm, so this should not hurt in this case.
    Anyway, MSDN says that the "DeployLocal" method (which I assume is the same as the -Local switch in PS) should only be used for troubleshooting purposes.
    So it would be great to hear about anyone's experiences with it.
    2. As there can be different types of load balancers (hardware, software) which might not be configurable through my script, I assume that taking the WFE out of the load balancer may not always be possible.
    So I thought about just taking the server offline.
    I haven't found an option yet to take only one server in the farm offline (without removing it from the farm, of course), so maybe I'm missing something. Any ideas?
    3. Before taking a single WFE offline, I would like to ensure that the server does not have any open sessions or ongoing user operations. Unfortunately, I have only found the possibility to quiesce the whole farm, not a single server. Am I missing something?
    I appreciate any ideas which might point me in the right direction to achieve the overall goal!
    SharePoint Architect, Speaker, MCP, MCPD, MCITP, MCSA, MCTS, Scrum Master/Product Owner
    Blog: www.matthiaseinig.de, Twitter:
    @mattein
    CodePlex: SharePoint Software Factory,
    SharePoint Solution Deployer

    Hi Mike, 
    Unfortunately not. I tried several different approaches but didn't really succeed reliably with any of them, so eventually I gave up on it.
    Interesting, though, that Eric Hasley commented this on the blog post you mentioned:
    "There is another approach that has worked for me in the past.  Because the deployment to each server is handled through a timer job,
    by stopping the timer service in a controlled fashion you can rollout your solution without incurring any user outage."
    It could work like that (in theory); a rough sketch of the first and seventh steps follows the list:
    1. Stop the SPTimerV4 service on all servers in the farm apart from one.
    2. Take the one to deploy to out of the NLB.
    3. Wait until it has no connections.
    4. Deploy the solutions on it in the ordinary way (e.g. with my SharePoint Solution Deployer ;)).
    5. Put it back into the NLB and take the others out.
    6. Wait until they have no connections left.
    7. Activate the timer service on the other servers and let them deploy.
    8. Put them back into the NLB.
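    Steps 1 and 7, sketched as PowerShell; the server names are placeholders and remoting is assumed to be available:

    # Stop the SharePoint timer service everywhere except the WFE being deployed to,
    # then start it again once that WFE is done and swapped back into the NLB.
    $allWfes  = "WFE01", "WFE02", "WFE03"
    $deployTo = "WFE01"
    $others   = $allWfes | Where-Object { $_ -ne $deployTo }

    Invoke-Command -ComputerName $others { Stop-Service SPTimerV4 }
    # ... deploy to $deployTo the ordinary way, swap the NLB members ...
    Invoke-Command -ComputerName $others { Start-Service SPTimerV4 }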
    No clue if this actually works, and you still have the problem with the NLB, so it could take a while.
    Also, I am not certain what happens in step 5 if users use different versions of your solution at the same time (the old version on the remaining open connections, the new version on the updated server).
    I do not have a suitable farm at hand to play with it though, so can't test it.
    Cheers
    Matthias
    Matthias Einig, CEO, SharePoint MVP
    Blog: www.matthiaseinig.de, Twitter:
    @mattein
    Projects: SharePoint Code Analysis Framework (SPCAF),SharePoint Code Check (SPCop),
    SharePoint Software Factory,
    SharePoint Solution Deployer

  • Cross dissolve transition between multiple video tracks?

    I want to cross dissolve between 2 clips, with each clip consisting of 2 or 3 video tracks (one clip is a still image on track V2, sitting above a color matte on V1; the other clip is motion footage on V1, with another cropped motion image on V2 and V3). When I put cross dissolves between the 2 clips (i.e. 2 dissolves, one for video track V1 and one for V2), the dissolve is "messy", showing the cropped images on V2 and V3.
    To make the dissolves "clean" I can render and export the multiple-track clips, but then I have to add handles to allow the dissolve, and then it takes time to resync each clip, which is very annoying!
    Is there a technique for making cross dissolves between multiple-track video clips?

    Any other suggestions?
    Practice and logical deconstruction.
    If your original clip is 10 seconds and your nest is 10 seconds, you must trim off 2 seconds of the nest to perform a 2-4 second dissolve. If that pulls too much of your original clip out of the timeline, you must add two seconds to the original clip so the nest is 12 seconds long and you can trim off the 2 seconds required to perform the dissolve.
    An easier way of accomplishing this is to stack the two nests and apply a keyframed opacity ramp to the upper track. This will help you visualize what you're doing wrong and how to add more handles to the upstream clips.
    bogiesan

  • Hardware recommendations for render farm?

    Hello,
    What would you recommend for a small render farm for mixed AE projects (mostly HD content; short projects with only a few seconds and a few layers, and bigger ones with multiple HD layers running up to 15 minutes or longer)?
    Is it better to have more cores (AMD Opteron servers with 48 cores, for example), or more performance per core but fewer cores (Intel Xeon servers)?
    And how much memory would be optimal?
    What can you recommend for the network connection to the shared storage, and how fast should it be?

    I like Harm's analogy better than mine: one prize cow vs. 20 regular cows. The imagery is very clear. In a production environment, where volume counts, more machines are better than a single top-notch one. A single cow can only produce so much milk per day, no matter how extraordinary it is, but 20 can theoretically produce 20 times more (practically, maybe 17 to 18). Quantity, not quality. I'm not saying that you should get crappy hardware: just good hardware, not high-performance hardware.
    As you know, buying a computer that costs twice as much doesn't get you twice the processing power. First of all, forget dual-CPU computers for a render farm; it's not worth it. Get a regular quad core. As you compare price vs. performance (do a chart), get the CPU that gives you the best bang for the buck (just before the price curve goes up).
    I personally think 32GB is overkill in a render farm environment; 16GB is way more than enough. Remember that you're not building a RAM preview farm (if such a thing exists), but rather splitting the render process across various computers. While 16GB is good, 2GB isn't, but 8GB can be a nice compromise. Remember, you're trying to drive down the cost per computer so you can buy many computers. But RAM is cheap, so, best bang for the buck, 16GB can be possible. Then again, it depends on other overall factors to keep the price of each computer down.
    Network speed: yes, you could get 10Gb NICs and a 10Gb switch, but that can drive the price up substantially. If you can get a good price on them, great; if not, stick to 1Gb. Get a motherboard that has two 1Gb NICs on board so you can aggregate the network connections together to get 2Gb. You could eventually add a quad 1Gb PCIe card down the road. The computer that has the project assets should have an aggregated quad (or more) NIC inside. You want to be able to push the media to the render farm as fast as possible.
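    Purely as an example of the aggregation idea: on render nodes running Windows Server 2012 R2 or later, the built-in NIC teaming could be set up roughly like this (team and adapter names are placeholders, and the switch has to support the mode you choose):

    # Hypothetical sketch: team the two on-board 1Gb NICs on a render node.
    New-NetLbfoTeam -Name "RenderTeam" -TeamMembers "Ethernet 1", "Ethernet 2" `
                    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic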
    Other things to consider... or rather, not to consider: the graphics card is a non-issue on a render farm; AE is all about CPU. Some motherboards come with on-board graphics, which is usually cheaper and consumes less power. You also want to drive energy costs down when you multiply by X computers. Just get a KVM solution that can handle X computers. You don't need RAID on render farm computers; a regular hard drive will do. I'd even go with a green drive that consumes less power.
    And depending on how many computers you have, to reduce the footprint you might want to get low-profile rackmount cases and put them in a rack. Plus, think about putting it in its own room, because it'll be very noisy!

  • Mac OS X Render Farm

    I need to setup a render farm that supports the following applications:
    Adobe After Effects, Final Cut Pro, Motion, Shake, Houdini, Modo, Maya, Cinema4D and Compressor.
    What would I need?
    Does Xgrid support these apps or is the Sun Grid Engine a better bet?
    Sun Engine Grid
    http://gridengine.sunsource.net/
    Xsan is a must, unless ZFS is a better option; I'm not too sure whether it would be compatible, though. What RAID arrays would work best? Can I use the Sun StorageTek 6140 array with the Xserve, given that the Xserve RAID is now defunct? (Promise RAID looks iffy, not to mention ugly.)
    How would I manage the queues? (Can I run multiple queue applications to support different apps, i.e. Apple Qmaster and Sun Grid Engine?)
    RUSH http://www.seriss.com/rush/
    Dr. Queue http://drqueue.org/cwebsite/
    Qube http://www.pipelinefx.com/
    Anyone with experience?
    Thanks,
    Juan

    Well, I mentioned ZFS mainly because the Sun StorageTek array supports ZFS, and Xsan looks somewhat similar to ZFS in that the storage is pooled. I imagine Apple is planning on replacing Xsan with ZFS in the not-too-distant future; if so, Sun storage looks great!
    Regarding Qmaster, I'm going to investigate further. It's supposed to work, according to Apple's advertising:
    Clusters for encoding and rendering
    Q Master
    Qmaster uses the processing capacity of your network computers for a wide range of tasks, including encoding and rendering for motion graphics produced by applications *such as Apple Shake, Adobe After Effects, and Autodesk Maya.* Configure Qmaster clusters to run jobs for encoding, rendering, or both, and change the configuration in a few simple steps. You can output files faster than ever by configuring clusters to take advantage of multiple processors, or “services,” as well as multiple computers. For even faster throughput, configure Qmaster to manage individual processors in each computer, turning any Mac Pro into a virtual render farm. To further optimize your workflow, create clusters to be used for specific jobs, users, or applications.
    http://appleclub.com.hk/finalcutstudio/compressor/encoding.html
    Modo seems to have network rendering built in as well. Still waiting for the Mac version of Houdini to test it out. I've also been playing with RealFlow.
    I'll keep updating the post if I find anything.
    Thanks,
    Juan

  • New Mac Owner - Render Farm Question

    I am new to the Mac world. I have purchased the 17" MacBook Pro with 2GB of RAM and a 7200rpm HDD. I am using the machine for video editing, effects, and DVD authoring. I have read through several files about Compressor 2 and Qmaster and their ability to improve render times when producing a video file. Being new to the Mac world, I do not have multiple machines to link together via a network to utilize these amazing applications. I do, however, have several PCs that are high-speed, dual-processor machines with loads of RAM (previous video editing systems). Can a PC run the Qmaster and/or Compressor software and allow for render farming? Any input and advice is greatly appreciated.

    By default you CANNOT run anything called "QMaster" or "Compressor" on a Windows PC.
    BUT!
    There is something buried deep in the Apple Qmaster documentation that describes how to use Terminal on your Mac and the command line to set up any Unix or Linux machine as what Apple calls an "EXTENDED NODE."
    But to do this, you have to have at least ONE extra Mac to use as the "INTERMEDIATE NODE."
    How it works is that you use Terminal and command-line commands to make the EXTENDED (non-Mac) nodes talk to (and offer their processors to) the INTERMEDIATE node, which then says to the Cluster Controller, "Hey! I've got more SERVICE NODES to offer you."
    So, in a manner of speaking, this process tricks the Cluster Controller into thinking that the Intermediate node has more services to offer than it really does, because the Intermediate node is kinda "subletting" the processors of the Extended Node(s).
    So, it's up to you to decide if it would be worth it to install Unix or Linux on all those PC boxes you have, but it would be possible, provided you purchased another Mac to use as an Intermediate Node. Perhaps an Intel Mac mini would do the trick.
    Good Luck.

  • How do I use the Time Capsule to share iTunes music between multiple Apple devices? Also, is it possible to control the music on one device using another, and how do you set this up?

    How do I use the Time Capsule to share iTunes music between multiple Apple devices? Also, is it possible to control the music on one device using another, and how do you set this up?

    Unless I'm missing something, I think you got mixed up; this is easy. Google for walkthroughs.
    I'm assuming this is the new 3TB AC Time Capsule, the "tower" shape; if so, its WiFi will run circles around your AT&T device.
    Unplug the AT&T box for a minute and plug it back in.
    Factory reset your TC: unplug it, hold down reset and keep holding while you plug it back in; only release reset when the amber light flashes after 10-20s.
    Connect the TC to your AT&T box via Ethernet into the WAN port, wait 1 minute, then open AirPort Utility and look in "Other WiFi Devices" to set up the TC.
    Create a new WiFi network (give it a different name than your AT&T one) and put the TC in bridge mode (it may do this automatically for you, but you should double-check) under the "Network" tab.
    Log in to your AT&T router and disable WiFi on it.
    Add new clients to the new WiFi network, and point your Macs to the TC for Time Machine backups.

  • I set up messaging on My Verizon; however, I have multiple lines on the account and it is showing messaging only for one phone line. Is there a way to toggle between multiple lines on the My Verizon site?

    How do I toggle between multiple phone lines on the My Verizon website with regard to messaging?

    The Account Owner can see the call/text logs for all lines on the account, but each line needs its own My Verizon account for messaging and backups.  Those lines will be listed as Account Members and will have limited info. available to them regarding the account.

  • HT1198 If I share an iPhoto library between multiple users, will the Faces, Events, and Places be automatically usable by all users, or will each user have to tag all the photos (e.g. if a user tags a face, will a different user have to do it in their own

    If I share an iPhoto library between multiple users, will the Faces, Events, and Places be automatically usable by all users, or will each user have to tag all the photos (e.g. if a user tags a face, will a different user have to do it in their own iPhoto application)?

    Have you read this Apple document regarding sharing a library with multiple users: iPhoto: Sharing libraries among multiple users?
    OT
