Compressor 4 cluster problem

I have just set up my MacBook Pro and iMac with Compressor 4 as a cluster. Everything looks fine, but when I send a file out to render it fails with the error "error reading source.....no such file or directory". The file renders fine when I don't tick the "this computer plus" box.
I've followed the path Compressor is using, and it points to the alias in the Events folder that goes from the "Original Media" folder back to the actual location of the .mov files. Sure enough, in the Finder, OS X tells me the alias has failed. However, when I try to fix it and browse to the original file, the "OK" button lights up but nothing happens when I click it. This seems to be the case for ALL the .mov files in every project I have edited.
The weird thing is that FCP X can obviously see all of these files as everything works fine - I can edit and render with no problem. The issue only arises when I choose "this computer plus" or pick a cluster.
So it looks like the aliases do point to the correct files but cannot be accessed directly from the Finder or when Compressor 4 looks for them in cluster mode.
I hope that makes sense.
Hopefully someone has seen similar behaviour.
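For anyone who wants to check the same thing on their own events, a rough sketch (assuming the event's "Original Media" items are plain symlinks rather than Finder aliases, and using a made-up event path) that lists which links no longer resolve:

    import os

    # Path below is only an example - point it at your own event's Original Media folder.
    ORIGINAL_MEDIA = "/Volumes/Editing/Final Cut Events/My Event/Original Media"

    for name in sorted(os.listdir(ORIGINAL_MEDIA)):
        link = os.path.join(ORIGINAL_MEDIA, name)
        if os.path.islink(link):
            target = os.readlink(link)
            status = "ok" if os.path.exists(link) else "BROKEN"  # exists() follows the link
            print("%-7s %s -> %s" % (status, name, target))
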
Thanks,
Jack.

Hi Studio X, I'm not sure how to do that, but I've just worked it out. It was (as you seem to have figured out) the alias that was the clue.
I had put together these FCP X projects on a different USB drive. As I wanted to be more organised, I copied all the projects over to a new 1TB USB drive which I'm using only for storage and editing. The good thing about this is that I can simply remove the drive and plug it into a different Mac and FCP X sees everything - events, projects, original files. However, there must be some reference to the name of the old USB drive as part of the alias, which Compressor doesn't like when in cluster mode. I started a quick project on the new drive and Compressor 4 worked as a 3-machine cluster with no problems.
I can't quite understand why FCP X finds the original video at all if it is looking for the drive name and not just the path to the files, but it seems not to care.
Anyone have any ideas about this?
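If anyone needs to repair an existing event in place rather than starting fresh, here's a rough sketch of the idea - again assuming the items are plain symlinks; the volume names and event path are placeholders, and back up the event folder before trying anything like this:

    import os

    # All three paths are placeholders for illustration only.
    ORIGINAL_MEDIA = "/Volumes/NewDrive/Final Cut Events/My Event/Original Media"
    OLD_VOLUME = "/Volumes/OldDrive/"
    NEW_VOLUME = "/Volumes/NewDrive/"

    for name in os.listdir(ORIGINAL_MEDIA):
        link = os.path.join(ORIGINAL_MEDIA, name)
        if not os.path.islink(link):
            continue
        target = os.readlink(link)
        if target.startswith(OLD_VOLUME):
            new_target = NEW_VOLUME + target[len(OLD_VOLUME):]
            if os.path.exists(new_target):
                os.remove(link)               # drop the stale link
                os.symlink(new_target, link)  # recreate it against the new drive
                print("relinked %s -> %s" % (name, new_target))

FCP X itself apparently re-resolves the media some other way (which would explain why editing kept working), while Compressor's cluster services seem to take the recorded path literally.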

Similar Messages

  • Compressor Cluster problem is memory RAM related?

    Hi guys,
    I was using my QuickCluster with 2 instances on my MBP until yesterday. Today, after I upgraded my RAM to 4 GB, Compressor no longer shows "My disk cluster" in the list. I upgraded to the latest version of everything and it still doesn't work.
    So, has anybody considered that it might be some kind of RAM limit? 2 GB max?
    Looking forward to hearing what you think.
    Eduardo Serrano.

    And my problem is that when I use Parallels Desktop and copy files to an external hard drive, my free memory (RAM) goes from 11 GB down to 16-100 MB. Can anyone help me?

  • Compressor Cluster problems

    Hi there,
    I've been trying to use Compressor to export some movies, but I can only choose a cluster that I created, which includes another computer that's in use. The option "This Computer" isn't available!
    I think it has something to do with Qmaster (which is turned off right now), but when I try to turn it on (Start Sharing) I get the message "qmasterd not running. Unable to start services because qmasterd is not running". I tried rebooting the computer, but nothing changed.
    I would appreciate it if anyone could give me a hand with this!
    Thanks,
    Marcos H.
    Compressor 2.0.1
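    For anyone hitting the "qmasterd not running" message, a minimal diagnostic sketch (not an official Apple procedure) that just checks from the command line whether qmasterd and the related daemons are actually alive before trying Start Sharing again - the extra process names are taken from the Qmaster logs quoted further down this page:

        import subprocess

        # list just the executable names of all running processes (BSD ps on OS X)
        out = subprocess.Popen(["ps", "-axco", "command"],
                               stdout=subprocess.PIPE).communicate()[0]
        running = set(line.strip() for line in out.decode("utf-8", "replace").splitlines())

        for proc in ("qmasterd", "compressord", "qmasterqd"):
            print("%-12s %s" % (proc, "running" if proc in running else "not running"))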

    Thanks guys, this has baffled me for some time.
    I got it working, but something is very wrong. I sent a 9-minute MPEG-2 file to Compressor to have it compressed to H.264. (H.264 compression is, I would gather, what many of us are using Compressor for in the first place.)
    I know from tests that Compressor can do H.264 compression at about 1.43:1. So a 9-minute clip should take 6 minutes or so, and with Qmaster it should be even faster. Well, Compressor reports 8-9 hours. What the heck is going on?
    I have taken MPEG Streamclip and done the same H.264 output and it takes about 6-7 minutes. Something is up with Compressor 3.
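    For reference, the arithmetic being applied here - at roughly 1.43:1 a 9-minute clip should encode in a little over 6 minutes, so an 8-9 hour estimate is around 80x slower than expected (a trivial sketch, just spelling out the numbers above):

        clip_minutes = 9.0
        speed_ratio = 1.43                      # encode speed vs. real time, from the tests above

        expected = clip_minutes / speed_ratio   # roughly 6.3 minutes
        reported = 8.5 * 60                     # middle of the reported 8-9 hour estimate, in minutes
        print("expected: %.1f min, reported: %.0f min (about %.0fx slower)"
              % (expected, reported, reported / expected))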
    Message was edited by: macguitarman

  • Ironport c160 cluster problems

    Hi!
    I have two IronPort C160s in cluster mode. Tonight one of them stopped working and I cannot access it, although it still responds to ping.
    In the system log I found only the following line:
    Mon Mar 12 15:30:39 2012 Warning: Error connecting to cluster machine xxxxx (Serial#: xxxxxx-xxxxxx) at IP xx.xxx.xxx.x - Operation timed out - Timeout connecting to remotehost cluster
    Mon Mar 12 15:31:09 2012 Info: Attempting to connect via IPxxxxx toxxxxxxxx port 22 (Explicitly configured)
    My version is: 6.5.3-007
    What logs can I check to find the cause of the problem?
    How can I find out what the problem is?
    How can it be solved?
    Thank you very much
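    One narrow thing worth checking from an admin workstation on the same network (a hedged sketch only - the peer address is a placeholder): the cluster connection in the log above goes to port 22, so confirm that port is even reachable before doing anything drastic to the queues:

        import socket

        PEER = "10.0.0.2"   # placeholder - use the unreachable C160's IP from the log
        PORT = 22           # the cluster join in the log above uses port 22

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect((PEER, PORT))
            print("port %d on %s is reachable" % (PORT, PEER))
        except socket.error as e:
            print("cannot reach %s:%d - %s" % (PEER, PORT, e))
        finally:
            s.close()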

    Well, "queuereset" is not a valid command, what you mean is "resetqueue", which I would strongly not recomment  to use without having a very good reason.Because this command removes all messages from the workqueue, delivery queues, and quarantines. There are usually less destructive ways to fix a cluster problem.
    BTW, version 5.5 has long been gone, so we won't need to reference any bugs from there any more.
    Regards,
    Andreas

  • SPF is not supported - SCVMM cluster problems when repairing?

    See:
    *http://forums.sdn.sap.com/thread.jspa?threadID=2056183&tstart=45#10718101

  • Leopard - QMaster and Virtual Cluster problem

    Hi guys,
    Up until yesterday I had my Mac Pro Octo running under 10.4, where I successfully set up a virtual cluster using 4 instances for Compressor. It worked like a charm and my Mac Pro was doing its job perfectly.
    Today I made a bootable backup of my 10.4 install and installed 10.5 using the Erase and Install option (clean install). I installed all my software again and tried setting up my virtual cluster again, using the same settings I had under 10.4. Sadly, I can't seem to get it working.
    In the Qmaster preference pane, I have the "QuickCluster with services" option checked. For the Compressor entry in the services list I have the Share option checked and use 4 instances for the selected service. The QuickCluster received a decent name, and the option to include unmanaged services from other computers is checked.
    I have the default options set in the Advanced tab (nothing checked except "Log service activity to log file" and "Show Qmaster service status in the menu bar"). I then started the cluster using the Start Sharing button.
    Now I open up Compressor and add a file to process (QT encode to iPod), but when I hit the Submit button, my virtual cluster doesn't show up in the cluster dropdown. If I leave the Compressor GUI open for 5 minutes, it will eventually show up in the list and I can pick it. Sadly, picking it from the list at that point and hitting Submit makes Compressor hang.
    I checked my logs, but the only thing concerning Compressor I could find is this:
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:41 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'], server [tcp://10.0.1.199:49167]
    4/12/07 20:12:41 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'
    4/12/07 20:17:55 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488705.075513 218488675.075652) 1'], server [tcp://10.0.1.199:49167]
    I tried stopping and then restarting sharing, and I noticed the following entries in my log:
    4/12/07 20:23:26 compressord[210] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[211] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[213] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterca[269] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterqd[199] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:27 QmasterStatusMenu[178] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489009.603992 218489007.604126) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'
    Batch Monitor immediately detects the cluster being active again, but Compressor doesn't, leaving only "This Computer" available in the cluster dropdown when submitting a batch.
    In Activity Monitor, I notice that CompressorTranscoder is not responding (the 4 CompressorTranscoderX processes are fine) and the ContentAgent process isn't responding either.
    Does anyone have a clue what I could check next or how I could fix my problems?
    Thanks a lot in advance,
    Stefaan
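    One thing that might be worth ruling out, given the "can't refresh cache from file" lines above: a hedged sketch that just checks whether the Qmaster services plist exists and still parses (it uses the older plistlib API; on Python 3.4+ you would use plistlib.load instead):

        import os
        import plistlib

        PLIST = "/Library/Application Support/Apple Qmaster/qmasterservices.plist"

        if not os.path.isfile(PLIST):
            print("missing: %s" % PLIST)
        else:
            try:
                data = plistlib.readPlist(PLIST)    # older plistlib API (XML plists)
                print("plist parses OK, top-level keys: %s" % sorted(data.keys()))
            except Exception as e:
                print("plist exists but cannot be parsed: %s" % e)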

    Bah, this is crazy, today it doesn't work anymore. Yesterday my cluster was showing up in the dropdown, I could submit a batch to it, and it got processed on my virtual cluster.
    Today, after finishing the second part of my movie, I tried it again. I didn't change any of my settings, my machine hasn't even rebooted (it just woke from sleep), and my cluster isn't showing up at all anymore. Even the Qmaster menu doesn't show it.
    Guess I'll have to wait it out until it appears again, or try a few things.

  • Multiple Compressor/Qmaster problems...clusters and batch monitor launch

    Hi All,
    I am continuing to have problems with Compressor and Qmaster. My original problem was that I was trying to create clusters to speed up my workflow; my computer is the only computer on the network. The issue came up when I created a cluster: it would work fine if I dragged in a QT from outside FCP to process in Compressor, but if I tried to export a QT from FCP to Compressor it would fail. Yesterday, as I was about to leave work, I was trying to export through Compressor and Batch Monitor wouldn't launch. Sometimes I could get it to, but the QT it was exporting would disappear once it was processed.
    I've deleted the Compressor/FCP prefs and I also tried reinstalling Compressor/Qmaster. I trashed all the files I was supposed to, but I couldn't get rid of all of them because it said they were still in use. AHHH!!!! I'm getting a little frustrated. I called Apple and, in so many words, they said, "Well, that's what happens with Studio 2. Try reinstalling it."
    Help! I'm on FCP Studio 2.

    As mentioned in other threads, virtual clusters are very tricky to set up properly, and unless you're doing a lot of H.264 encoding there's almost no benefit in doing so.
    I highly suggest that you (and anyone struggling with VCs) pick up a copy of the "Compressor 3 Quick Reference Guide" by Brian Gary and get a solid understanding of the environment VCs create - and what they're really good for.

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
    If I then create a Flash archive and use it, the JumpStart process puts the locale info back and the problem appears again.
    It's not critical, as I don't need to be on the latest patch cluster, but I wondered if I'm the only one having issues.
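    For what it's worth, a small sketch of the manual edit described above - it writes a copy of /etc/default/init with the LC_* lines stripped so it can be reviewed before being copied back over the original:

        SRC = "/etc/default/init"
        DST = "/etc/default/init.nolocale"

        with open(SRC) as src, open(DST, "w") as dst:
            for line in src:
                if line.startswith("LC_"):
                    continue    # drop LC_COLLATE, LC_CTYPE, LC_MESSAGES, etc.
                dst.write(line)
        print("wrote %s" % DST)

    As noted above, a Flash archive built from an unmodified config will just put the lines back, so the cleaned file needs to make it into the archive as well.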

    If you open the directory in CDE's file manager, right click on the zipped file and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
    Lee

  • Compressor Cluster - Error message when attaching .scc caption files

    Hello,
    We have a 3-Xserve cluster controlled by a 4th Xserve (our FCServer machine). My workflow is:
    Source video: 1920x1080 ProRes video (28:30 min)
    Resized to 640x360 ProRes LT (also de-interlaced, with some black restore and sharpening applied here)
    Encoded the 640x360 to H.264 at 750 Kb/sec - the .scc file is defined in the "Additional Information" tab in Compressor at this point.
    This job is submitted to the cluster. My submitting machine and all the cluster machines are connected to the same fiber network. All files are on the same Xsan.
    I am getting the following error message. I get it after it has tried to encode the video:
    Status: Failed - 5x HOST [fcsqm2.local] error: Failed to add CC to movie: -50
    note: fcsqm2 is one of the encoding machines in the cluster.
    I can't seem to find any answers via google. Anyone got any suggestions where I can look? Any ideas?
    Thanks a lot!
    Nathan

    {Ctrl + Shift + J} - any messages in the Error Console relating to that?

  • FCP or Compressor Audio Problem - Please Help!

    Please Help!
    I have completed my wedding project in Final Cut and it includes 4 extra stereo tracks (music etc.) on the Final Cut timeline, so 5 stereo tracks in total. The music was just imported into the Browser and then placed accordingly in the timeline.
    When I export the whole project to Compressor, the video compresses fine into the DVD best quality format, but Compressor has only compressed a SEGMENT of the FIRST stereo track?!
    Totally bizarre... My client is waiting for the DVD and I'm at a total loss!!
    Desperate workaround (which I want to avoid): I tried rendering the whole movie to a QT movie, then importing that into DVD Studio and just using the audio from that (which is OK) in my DVD project! But I don't want to do this, as the audio would not have been properly compressed along with the video using the DVD best settings preset in Compressor.

    No, I don't think so... I use Switch to convert the .whatever files into .aiff-compliant files. Something has obviously gone amiss somewhere during my edit; I just cannot think where. I did ask before if there was any way to check for compression markers, as the original problem seemed to be that only a section of the audio was rendered. I did a Clear Markers (Control+') which clears markers on the timeline, but I'm not sure where that leaves compression markers. As I implied, I didn't deliberately set any compression markers, but maybe some crept in there?
    Also, I found it odd that Soundtrack Pro originally came up with just "Error" when I tried to export the project to it. I tried a fresh sequence with a small amount of video on it and it exported fine into Soundtrack Pro (SP).
    Additional question: what is the purpose of the mixdown in the sequence settings, then? It seemed to do the trick for me. Should I include this in my workflow, since it is not always necessary to use Soundtrack Pro just for laying down a music bed?
    Thanks

  • NVGRE Gateway Cluster Problem

    Hello
    We have the following setup:
    Management Hyper-V hosts running WAP, SPF and SCVMM 2012 R2 components.
    Gateway Hyper-V host: a single-node gateway Hyper-V host, configured as a single-node cluster so that extra hardware can be joined in the future.
    This Hyper-V host runs 2 Windows Server Gateway VMs, configured as a failover cluster.
    The following script is used to deploy these windows server gateway VMs as a high available NVGRE gateway service:
    http://www.hyper-v.nu/archives/mscholman/2015/01/hyper-v-nvgre-gateway-toolkit/
    Two tenant Hyper-V hosts running VMs which use network virtualization.
    The setup completed successfully, and when creating a tenant in WAP and creating a VM network for this tenant using NAT, the tenant's VMs are accessible and can access the Internet through the HA gateway cluster.
    The Gateway Hyper-V host and NVGRE Gateway VMs are running in a DMZ zone, in a DMZ Active Directory Domain.
    Management and Tenant Hyper-V hosts, incl all Management VMs, are running in a dedicated internal Active Directory domain.
    Problems start when we fail over the Windows Server Gateway service to the other VM node of the NVGRE Gateway cluster. We see in the lookup records on the Gateway Hyper-V host that the MAC address of the gateway record for tenants is updated with the new MAC address of the VM node running the gateway service.
    But in SCVMM, apparently, this record is not updated. The tenant hosts still use the old MAC address of the other Gateway VM node.
    When looking in the SCVMM database, we can also see in the VMNetworkGateway table that the record representing the tenant's gateway still points to the MAC address of the PA network adapter of the other node of the NVGRE Gateway cluster, not to the new node on which the gateway service is running after the failover.
    On the tenant Hyper-V hosts, the lookup record for the gateway also still points to the old node.
    When we manually change the record in the VMNetworkGateway table to the new MAC address and refresh the tenant hosts in SCVMM, everything starts working again and the tenant VMs can reach the gateway again.
    Anybody else facing this issue? Or is running an NVGRE Gateway cluster on a single Hyper-V node not supported?
    To be complete, the deployed VMs running the gateway service are not configured as HA VMs.
    Regards
    Stijn

    If I understand your post correctly, you have a single Hyper-V host running 2 GW VMs. I think the problem is that when you deploy an HA VM gateway cluster, it wants to create a cluster resource (the PA IP address) on the Hyper-V host as well. So when you run 2 Hyper-V hosts and 2 GW VMs and you move the active role to another host, it will move the Provider Address to the other Hyper-V host as well. I believe this is by design. You should also ask yourself why you are running 2 VMs in a cluster on the same node ;-)
    I would recommend using a 2-node Hyper-V host cluster (this is needed for the HA PA address, and not necessary for your GW VMs).
    Then run the deployment toolkit again. When that's done, take a close look at how the active node has the corresponding PA assigned on its Hyper-V host. Then do a failover, refresh the cluster manager, and notice that the PA address has moved along to the Hyper-V host that is now the active one. It is difficult to explain in a couple of sentences, but I hope you have the opportunity to build the 2nd Hyper-V host as well and create a cluster.
    Side note: if you want to keep the existing VM gateway cluster, remove all gateways from the VM networks and remove the gateway service from VMM. Then provision the second Hyper-V host, configure the cluster, and live-migrate one GW VM node to it. Reconfigure the shared VHDX for quorum and CSV, and then add the network service back again. Don't try to leave it as a network service in VMM and just move the VM to another node; it will not work on failover.
    Best regards, Mark Scholman.

  • Compressor transcode problem

    I have a recent problem with Compressor: it is ridiculously slow at transcoding D7 files to ProRes. The problem is not actually in the transcode itself; it just takes the computer a bizarrely long time to apply the codec to each item before I can submit the batch: spinning wheel of doom for hours!
    The strange thing is that this happened recently after working OK in the past, and I can't see what's changed. I reckon it can only be an OS X update, BUT I've seen this (unresolved) problem in old threads here and elsewhere, all archived now for some reason, with no real answers for such a common issue. I can only do 10 files at a time, but this is hardly 'batch processing'!
    I've tried Compressor Repair from here: http://slccut.com/tutorials/40-apple...ng-preferences but this doesn't seem to help. I disconnected all external HDs, but no joy. I'm tearing my hair out as I've got 200 D7 files to transcode to ProRes and a deadline to meet! Anyone got any feasible ideas? (Would updating to the latest OS help? I realise this sounds like a crazy 'solution' given that it was OK before on the same OS, but I'm desperate now.)
    Compressor 3.5.3 (FCS3)
    OS X 10.6.8 on MacPro 12 core, 12GB RAM

    No, looks like you are right, thanks Russ: it's easy to batch process, you can set up a preset for your encoder settings, and it's SUPER fast compared to Compressor - which is both exciting and slightly worrying at the same time. (How can Streamclip do it so fast if Compressor is so horribly lumpy? Does it miss something out? Will FCP be just as happy with the files? Has my time spent transcoding in the past just been cut down to nothing, with no more swearing at the computer and no more hanging around?) We'll see. I'll let you know...

  • Cluster Problems??

    Hi All,
    Need some help. We have an SAP 4.6C install on a Microsoft cluster with an MS SQL database. One node in the cluster is corrupt and needs to be rebuilt. My question to you all is: can one node of the cluster be rebuilt, or will both nodes have to be rebuilt?
    If so, where can I find the documentation to do this, and can it result in any other problems?
    Thanks
    John

    Hello - The nature of MSCS is failover. Thus one node failure = one node recovery. MSCS documentation would suffice here.
    Regards.

  • BorderManager Cluster problems

    I have set up a 2-node NW 6.5 SP8 cluster to run BorderManager 3.9 SP2. I don't have a 'Split Brain Detector' (SBD) partition; the servers only monitor each other through the LAN heartbeat signal sent by the master and the replies from the slave. This has worked well from a high-availability perspective, but I keep running into a situation where both nodes go 'active'.
    Usually, I have Node 0 set as both the cluster master and the host of the NBM proxy resource. Node 1 is then in standby - ready to load the proxy service and assume the proxy IP address if Node 0 dies. At some point (the time is variable, 2 - 5 days, and doesn't seem to be related to network load) Node 0 will think that Node 1 has failed and will show that on the Cmon console. Shortly afterwards Node 1 will think that Node 0 has failed and will bind the proxy IP and cluster master IP and load the proxy. At that point I have two servers, both with the same cluster master IP bound, the proxy IP bound, and proxy.nlm loaded!
    I can access Node 0 through rconj and it appears to be working fine. If I do a 'display secondary ipaddress' I can see it has both the proxy IP and Cluster Master IP bound to it. The same thing is the case for Node 1. I unload the proxy on Node 0 and reset the server. When it comes back up, it joins the cluster just fine and there doesn't appear to be any other problem.
    Has anyone else seen this behavior? (Craig???)
    thanks,
    Dan

    In article <[email protected]>, Dchuntdnc wrote:
    > but I keep running into a situation where
    > both nodes will go 'active'.
    I've got one of those situations too, at a client.
    >
    > Usually, I have Node 0 set as both the cluster master and the host of
    > the NBM proxy resource. Node 1 is then in standby - ready to load the
    > proxy service and assume the proxy IP address if node 0 dies. At some
    > point (the time is variable in days 2 - 5 and doesn't seem to be related
    > to network load) Node 0 will think that Node 1 has failed and will show
    > that on the Cmon console.
    This sounds familiar, except for me it happens within hours.
    > Shortly afterwards Node 1 will think that
    > Node 0 has failed and bind the proxy IP and cluster master IP and load
    > the proxy. At this time I have two servers; both with the same Cluster
    > Master IP bound and the proxy IP bound and proxy.nlm loaded!
    Yep. Gets annoying, to say the least!
    >
    > I can access Node 0 through rconj and it appears to be working fine.
    > If I do a 'display secondary ipaddress' I can see it has both the proxy
    > IP and Cluster Master IP bound to it. The same thing is the case for
    > Node 1. I unload the proxy on Node 0 and reset the server. When it
    > comes back up, it joins the cluster just fine and there doesn't appear
    > to be any other problem.
    Yep.
    >
    > Has anyone else seen this behavior? (Craig???)
    I have definitely fought this issue, but only on one (of many) BM cluster.
    Both nodes of the cluster are on old servers, and when the proxy is
    active, it is exceptionally busy. (More than 2000 users, and plenty of LAN
    bandwidth). I was on site at the client working on this (and a lot of
    other projects) and I never was able to get to the bottom of it. The fact
    that the server was so busy (24x7) made it hard to experiment on. My hope
    at this point is to get decent newer hardware in there to replace the
    7-year old nodes.
    This happened when one server was BM 3.8 and the other BM 3.9, but it
    continued to happen when I upgraded both to 3.9sp2. It also happened even
    though I moved the heartbeat to dedicated nics with a crossover cable.
    I'm thinking that something causes the LAN drivers to hiccup long enough
    for the server to stop responding to heartbeat - but the proxy seems to
    work continuously without showing a 30-second pause anywhere.
    For the time being, I've left the oldest node not loading cluster
    services. It's a manual failover at this time, but that's better than
    nothing. (And the primary node is quite stable anyway, for months and
    months at a time).
    Craig Johnson
    Novell Support Connection SysOp
    *** For a current patch list, tips, handy files and books on
    BorderManager, go to http://www.craigjconsulting.com ***

  • Patchin portal cluster problem

    I am trying to run Portal patch 13 on a WAS cluster.
    The problem I am getting is that the patch installation asks for a "username and password" for the administrator.
    When I enter the details I get an error.
    My question is: if it is in safe mode, how is the install checking these details? I cannot log into the Visual Admin when the cluster is in safe mode.
    Anybody else have this problem?
    Thanks

    I think I just figured out how to use the safe mode. It basically just limits the cluster to 1 server and 1 dispatcher. You're right, the same result can be achieved with the config tool.
    Thanks
