Unable to create cluster "Qmasterd not running"

Hi
    I was trying to assign a Compressor cluster and there were none available in the FCS Admin preferences. When I tried to create one on the server I got a message saying "Qmasterd was not running". How do I get Qmaster running again?
Thanks

    We finally talked to Apple tech support and I think we have it solved. Re-running the installer seemed to do it, although we have had to re-install once more since then. I hope it doesn't become a habit. I was surprised that Compressor re-installed without the extensive file deleting that I have had to do on my workstations in the past.
Thanks for the suggestion. When it quits again I'll try restarting it.

Similar Messages

  • "qmasterd not running" - how this happened and what we did to fix it.

    This describes how we ended up with the message "qmasterd not running" and what we did to fix it.
    It may be of help to those of you who:
    • cannot initialise qmasterd
    • can't successfully START Qmaster from the System Preferences panel
    • can't start qmasterd from the shell or the StartupItems
    • can't establish a cluster in the Apple Qmaster Administrator app
    • etc., etc.
    Tastes will vary, of course.
    SOLUTION: after 2 days of concentrated troubleshooting we REINITIALISED the startup disk, RE-INSTALLED the 10.4.8 OS and all the applications, and replaced (copied) the user data back onto the primary startup disk.
    Affected H/W:
    • G5 QUAD w/8GB memory and 3.7TB of disk (including 2 x 250GB SATA internal drives) with a 7800 graphics card
    • 30-inch HD Cinema Display
    History:
    • decided to UPGRADE the internal 2 x 250GB SATA1 drives to 2 x 500GB SATA2 drives because of a storage shortage.
    • on advice, we used APPLE MIGRATION ASSISTANT to migrate the 100% functional OS 10.4.8 startup disk (Macintosh HD) to an external FW disk that had a fully functional 10.4.8 and the Apple Pro apps as a backup. (This proved to be a very bad error of judgement - a mistake.)
    • all data migrated over to the external disk successfully. System restarted on the EXTERNAL DISK and tests (TTPRO) and user apps ran - all OK.
    • replaced the G5 internal UPPER SATA1 250GB disk with the new SATA2 500GB one.
    • using the G5 startup DVDs, built and initialised an OS X 10.4.8 system on the NEW G5 QUAD internal UPPER 500GB SATA2 disk as Macintosh HD.
    • used APPLE MIGRATION ASSISTANT to migrate all users, apps and ALL the files and objects from the working EXTERNAL disk we made earlier.
    • restarted and all looked fine: FCS, SHAKE, MOTION etc., all business apps. UNTIL...
    • we could NOT get qmasterd to start - msg "qmasterd not running".
    • restarted the system many times... no success.
    ERROR SYMPTOMS:
    • noticed several apps were corrupted, such as WACOM TABLET, Little Snitch and others - these were remedied by reinstallation... however...
    • the console crash reporter contained MANY dumps of qmasterd with general addressing exceptions:
    Exception: EXC_BAD_ACCESS (0x0001)
    Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x0nnnnnnn
    also r31 contained the page-aligned address of the exception... and this was ALWAYS different... so we assumed a data-driven (parameter?) corruption.
    • other symptoms included the need to kill -9 (force quit) qmasterd, as it would NEVER shut down.
    What else we did to try and fix this (a minimal shell sketch of the basic checks follows this list):
    • repaired disk permissions (Disk Utility.app) - no luck...
    • repaired the qmaster files' disk permissions as per http://www.kashum.com/blog/1152611689 and followed the instructions... no luck
    • removed QMASTER as per the instructions at http://docs.info.apple.com/article.html?artnum=93234 - reinstalled - NO LUCK - still dumps... ;(
    • removed all the qmasterd preference/parameter files hoping that would help - no luck
    • tried many suggestions from these discussions, COW etc., including disconnecting all networks... no luck
    • tried out most of the suggestions including an excellent one from Richard BF at http://www.kashum.com/blog/1152611689 - a truly excellent entry for those with the 'Unable to connect to background process' error dialog problem
    • reinstalled QMASTER from the SHAKE 4.1 disks - no luck
    • reinstalled FCS (2 hours later) - no luck
    • restarted on the secondary system (the external disk) and QMASTER works fine!
    • reinstalled the original 250GB SATA1 disk - restarted from it to verify QMASTER was OK... no dumps existed
    • reinstalled OS 10.4.8 (OS only)... no luck
    • used CARBON COPY CLONER (CCC) to make a new disk - this failed miserably... will not use this again ever
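    As promised above, here is a minimal shell sketch of the basic checks we kept repeating. The paths are the standard Tiger-era locations mentioned in this post; treat anything else as an assumption, not an official Apple procedure:
        ps aux | grep -i [q]masterd                   # is qmasterd actually running on this node?
        ls -lt /Library/Logs/CrashReporter/ | head    # look at the most recent qmasterd crash dumps
        sudo killall -9 qmasterd                      # force-quit a hung qmasterd (as noted, it never shuts down cleanly)
    Then try starting the services again from the Apple Qmaster pane in System Preferences.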
    Final outcome:
    • concluded that APPLE MIGRATION ASSISTANT had somehow corrupted some objects in the SECOND GENERATION of copy (from the external disk to the new internal 500GB SATA2 drive).
    • threw in the towel and initialised the G5 QUAD's UPPER internal drive (500GB SATA2) - installed a new OS at 10.4.8 + maintenance updates.
    • tediously installed every APP including ADOBE CS2 and the CS3 Photoshop, whilst I messed around calling Adobe locally for an activation code (I always need to do this, I don't know why).
    So a great waste of 2 days.
    Summary: when QMASTER works it is terrific... especially for rendering and transcoding... but when it doesn't, it is dreadful.
    Maybe someone with expertise could shed some light on this.
    cheers
    W
    Hong Kong

    Hi Chris, I closed it because:
    • at the time, and for a few weeks after I posted it, it had few views.
    • it is just a reference document describing the troubleshooting procedures and headaches we endured while trying to rectify this, in the hope that others may try this path.
    • it is in fact old news for some who have FCS 2 now.
    The 'fix' is described in the "SOLUTION" part of my post.
    I agree the symptom you are having with QMASTER is greatly frustrating.
    I would encourage you, however, to persevere in getting this working, because when it is set up it works very well.
    fwiw

  • ContentAgent not found error OR qmasterd not running error in Compressor

    Hi guys,
    I have problems with Compressor 3.0.1 (related to Qmaster, I guess). I have read many articles about the "qmasterd not running" and "ContentAgent not found" error messages in Compressor.
    I tried to start it from the Apple Qmaster pane in System Preferences, but I get the same problem. I tried to launch qmasterd from the command line... no success. I erased everything with the name qmaster or compressor (preferences, Application Support, receipts, repaired permissions, etc.) and made a brand-new install from the original DVD.
    Before doing that, Compressor wasn't even launching successfully. After this re-install it is running, but I get these errors from one side or the other. I can say that I erased all related files carefully but still do not understand what to do!
    On my other Mac (Intel-based 2 x 3GHz dual core) everything is running fine.
    Does anyone know what can cause this? What should I do? I'd prefer a solution that doesn't oblige me to reinstall the whole config! Oh no!
    Thank you for your help, even a small one
    Best Regards
    Sam

    I told myself: let's do it once again and this time ERASE ALL.
    In my last try I had forgotten to erase the Compressor.framework folder with all its contents.
    Now it is working just perfectly, even the cluster between both computers.
    Best.R.
    Sam
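    For anyone following the same route, here is a minimal, non-destructive shell sketch of the kind of cleanup Sam describes before reinstalling. The exact paths vary by Compressor/Qmaster version, so treat them as assumptions and check what actually exists on your disk before deleting anything:
        ls /Library/Frameworks | grep -i -e compressor -e qmaster            # frameworks such as Compressor.framework
        ls ~/Library/Preferences | grep -i -e compressor -e qmaster          # per-user preference files
        ls "/Library/Application Support" | grep -i -e compressor -e qmaster
        ls /Library/Receipts | grep -i -e compressor -e qmaster              # install receipts
    Once you have confirmed the matches, move them to the Trash (or rm -rf them) and reinstall from the original DVD, as described above.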

  • Unable to create mbean: Could not create provider JDBCDataSource

    I copied the Ant code from http://middlewaremagic.com/weblogic/?p=2504 and changed it as shown below (a sample invocation follows the script). The first time I run Ant it works fine and I can see the new datasource.
    Then I delete this datasource from the WebLogic console and run the Ant script again, but I get this error: BUILD FAILED
    C:\JdevWorkspace\ANTdatasource\Project1\build.xml:30: Unable to create mbean: Could not create provider JDBCDataSource
    I am using WebLogic 10.3.4.
    <?xml version="1.0" ?>
    <project name="deploy" default="makeDataSource" basedir=".">
        <property name="wls.username" value="weblogic" />
        <property name="wls.password" value="welcome1" />
        <property name="wls.url" value="t3://localhost:7001" />
        <property name="wls.targetServer" value="AdminServer" />
        <property name="wls.domainName" value="SOAdomain5" />
        <!--<property name="database.url" value="jdbc:pointbase:server://localhost:9092/demo" />-->
        <property name="database.url" value="jdbc:oracle:thin:@localhost:1521:orcl"/>
        <!--<property name="database.driver" value="com.pointbase.jdbc.jdbcUniversalDriver" />-->
        <property name="database.driver" value="oracle.jdbc.xa.client.OracleXADataSource"/>
        <property name="database.user" value="dstest" />
        <property name="database.password" value="dstest" />
        <property name="weblogic.jar" value="E:\Jdeveloper_11115\wlserver_10.3\server\lib" />
        <echo message="${weblogic.jar}\weblogic.jar"/>
        <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy">
            <classpath>
                <pathelement location="${weblogic.jar}\weblogic.jar"/>
            </classpath>
        </taskdef>
        <taskdef name="wlconfig" classname="weblogic.ant.taskdefs.management.WLConfig">
            <classpath>
                <pathelement location="${weblogic.jar}\weblogic.jar"/>
            </classpath>
        </taskdef>
        <target name="makeDataSource">
            <wlconfig username="${wls.username}" password="${wls.password}" url="${wls.url}">
                <query domain="${wls.domainName}" type="Server" name="${wls.targetServer}" property="x" />
                <create type="JDBCConnectionPool" name="TestDS">
                    <set attribute="CapacityIncrement" value="1"/>
                    <set attribute="DriverName" value="${database.driver}"/>
                    <set attribute="InitialCapacity" value="1"/>
                    <set attribute="MaxCapacity" value="10"/>
                    <set attribute="Password" value="${database.password}"/>
                    <set attribute="Properties" value="user=${database.user}"/>
                    <set attribute="RefreshMinutes" value="0"/>
                    <set attribute="ShrinkPeriodMinutes" value="15"/>
                    <set attribute="ShrinkingEnabled" value="true"/>
                    <set attribute="TestConnectionsOnRelease" value="false"/>
                    <set attribute="TestConnectionsOnReserve" value="true"/>
                    <set attribute="TestTableName" value="SYSTABLES"/>
                    <set attribute="URL" value="${database.url}"/>
                    <set attribute="Targets" value="${x}" />
                </create>
                <create type="JDBCDataSource" name="TestDS" >
                    <set attribute="JNDIName" value="jdbc/TestDS"/>
                    <set attribute="PoolName" value="TestDS"/>
                    <set attribute="Targets" value="${x}" />
                </create>
            </wlconfig>
        </target>
    </project>
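    For reference, a typical invocation of the build file above might look like this. This is only a sketch; the file name comes from the error message and the working directory is an assumption:
        cd C:\JdevWorkspace\ANTdatasource\Project1
        ant -f build.xml makeDataSource
    The taskdefs load weblogic.jar via the ${weblogic.jar} property and their nested classpaths, so no extra -lib flag should be needed.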

    Hi, I have tried with InitialCapacity 0... this time the build is successful but the datasource is getting created without a JNDI name :(
    Please let me know if you have any solution for this.
    Please check the script below.
    <?xml version="1.0"?>
    <!--Weblogic Credentials-->
    <project name="deploy" default="makeDataSource" basedir=".">
         <property name="weblogic.jar" location="C:/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.jar"/>
         <property name="wls.username" value="weblogic" />
         <property name="wls.password" value="weblogic" />
         <property name="wls.url" value="t3://000.00.000.000:7001" />
         <property name="wls.targetServer" value="AdminServer" />
         <!--<property name="wls.targetServer" value="soa_server1" />-->
         <property name="wls.domainName" value="SOA" />
         <!--DataBase Credentials-->
         <property name="database.url" value="jdbc:oracle:thin:@000.00.000.000:1601:DATA1" />
         <property name="database.driver" value="oracle.jdbc.xa.client.OracleXADataSource" />
         <property name="database.user" value="USER" />
         <property name="database.password" value="PASSWORD" />
         <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy" classpath="${weblogic.jar}"/>
         <taskdef name="wlconfig" classname="weblogic.ant.taskdefs.management.WLConfig" classpath="${weblogic.jar}"/>
         <!--Create JNDI name and JDBC Connection pooling-->
         <target name="makeDataSource" >
         <wlconfig username="${wls.username}" password="${wls.password}" url="${wls.url}">
         <query domain="${wls.domainName}" type="Server" name="${wls.targetServer}" property="x" />
         <create type="JDBCConnectionPool" name="TestCaseDS" >
              <set attribute="CapacityIncrement" value="1"/>
              <set attribute="DriverName" value="${database.driver}"/>
              <set attribute="InitialCapacity" value="0"/>
              <set attribute="MaxCapacity" value="50"/>
              <set attribute="StatementTimeout" value="600"/>
              <set attribute="Password" value="USER"/>
              <set attribute="Properties" value="user=PASSWORD"/>
              <set attribute="RefreshMinutes" value="0"/>
              <set attribute="ShrinkPeriodMinutes" value="15"/>
              <set attribute="ShrinkingEnabled" value="true"/>
              <set attribute="TestConnectionsOnRelease" value="false"/>
              <set attribute="TestConnectionsOnReserve" value="true"/>
              <set attribute="TestConnectionsOnCreate" value="true"/>
              <set attribute="TestTableName" value="SQL SELECT 1 FROM DUAL"/>
              <set attribute="URL" value="${database.url}"/>
              <set attribute="Targets" value="${x}" />
         </create>
         <create type="JDBCDataSource" name="TestCaseDS">
              <set attribute="JNDIName" value="jdbc/TestCaseDS"/>
              <set attribute="PoolName" value="TestCaseDS"/>
              <set attribute="Targets" value="${x}"/>
         </create>
    </wlconfig>
    </target>
    </project>
    Thanks in advance ,
    Edited by: soa.dev on 10-Sep-2012 21:58

  • Unable to create cluster, hangs on forming cluster

     
    Hi all,
    I am trying to create a 2-node cluster on two x64 Windows Server 2008 Enterprise Edition servers. I am running the setup from the Failover Cluster MMC and it seems to run OK right up to the point where the snap-in says it is creating the cluster. Then it seems to hang on "forming cluster" and a message pops up saying "The operation is taking longer than expected". A counter comes up and when it hits 2 minutes the wizard cancels and another message comes up: "Unable to successfully cleanup".
    The validation runs successfully before I start trying to create the cluster. The hardware involved is an HP EVA 6000 and two Dell 2950s.
    I have included the report generated by the create cluster wizard below and the error from the event log on one of the machines (the error is the same on both machines).
    Is there anything I can do to give me a better indication of what is happening, so I can resolve this issue or does anyone have any suggestions for me?
    Thanks in advance.
    Anthony
    Create Cluster Log
    ==================
    Beginning to configure the cluster <cluster>.
    Initializing Cluster <cluster>.
    Validating cluster state on node <Node1>
    Searching the domain for computer object 'cluster'.
    Creating a new computer object for 'cluster' in the domain.
    Configuring computer object 'cluster' as cluster name object.
    Validating installation of the Network FT Driver on node <Node1>
    Validating installation of the Cluster Disk Driver on node <Node1>
    Configuring Cluster Service on node <Node1>
    Validating installation of the Network FT Driver on node <Node2>
    Validating installation of the Cluster Disk Driver on node <Node2>
    Configuring Cluster Service on node <Node2>
    Waiting for notification that Cluster service on node <Node2>
    Forming cluster '<cluster>'.
    Unable to successfully cleanup.
    To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
    Event Log
    =========
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          29/08/2008 19:43:14
    Event ID:      1570
    Task Category: None
    Level:         Critical
    Keywords:     
    User:          SYSTEM
    Computer:      <NODE 2>
    Description:
    Node 'NODE2' failed to establish a communication session while joining the cluster. This was due to an authentication failure. Please verify that the nodes are running compatible versions of the cluster service software.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{baf908ea-3421-4ca9-9b84-6689b8c6f85f}" />
        <EventID>1570</EventID>
        <Version>0</Version>
        <Level>1</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2008-08-29T18:43:14.294Z" />
        <EventRecordID>4481</EventRecordID>
        <Correlation />
        <Execution ProcessID="2412" ThreadID="3416" />
        <Channel>System</Channel>
        <Computer>NODE2</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">node2</Data>
      </EventData>
    </Event>
    ====
    I have also since tried creating the cluster with the firewall, with no success.
    I have tried creating the cluster from the other node and this did not work either.
    I tried creating a cluster with just a single node and this did create a cluster. I could not join the other node, and the network name resource did not come online either. The below is from the event logs.
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          01/09/2008 12:42:44
    Event ID:      1207
    Task Category: Network Name Resource
    Level:         Error
    Keywords:     
    User:          SYSTEM
    Computer:      Node1.Domain
    Description:
    Cluster network name resource 'Cluster Name' cannot be brought online. The computer object associated with the resource could not be updated in domain 'Domain' for the following reason:
    Unable to obtain the Primary Cluster Name Identity token.
    The text for the associated error code is: An attempt has been made to operate on an impersonation token by a thread that is not currently impersonating a client.
    The cluster identity 'CLUSTER$' may lack permissions required to update the object. Please work with your domain administrator to ensure that the cluster identity can update computer objects in the domain.

    I am having the exact same issue... but these are on freshly created virtual machines... no group policy or anything...
    I am 100% unable to create a virtual Windows Server 2012 failover cluster using two virtual Fibre Channel adapters to connect to the shared storage.
    I've tried using GUI and powershell, I've tried adding all available storage, or not adding it, I've tried renaming the server and changing all the IP addresses....
    To reproduce:
    1. Create two identical Server 2012 virtual machines
    (My Config: 4 CPU's, 4gb-8gb dynamic memory, 40gb HDD, two network cards (one for private, one for mgmt), two fiber cards to connect one to each vsan.)
    2. Update both VM's to current windows updates
    3. Add Failover Clustering role, Reboot, and try to create cluster.
    Cluster passed all validation tests perfectly, but then it gets to "forming cluster" and times out =/
    Any assistance would be greatly appreciated.

  • Qmasterd not running error message

    I am running FCPS 2 with Compressor 3.0.5 and I haven't rendered anything out for about a month. But now when I try to start up the Qmaster services it tells me that qmasterd is not running and to consult my manual for information... I have read and tried a number of solutions and still can't get it up and running. The only thing that is making me wonder why it isn't working is that I have installed a trial of Avid and also of Premiere Pro. Could these have screwed up my Qmaster? If so, what is the solution, or where can I find the solution online?
    Thanks,
    Holly

    I told myself: let's do it once again and this time ERASE ALL.
    In my last try I had forgotten to erase the Compressor.framework folder with all its contents.
    Now it is working just perfectly, even the cluster between both computers.
    Best.R.
    Sam

  • Recently, iPhoto will no longer allow me to export photos into my folders on my desktop. It just says that it is unable to create file. Not sure why this is happening?

    iPhoto is no longer allowing me to export & resize photos into a folder on my desktop. It just states that it is unable to create the file on the desktop. I'm not sure what this means or how to correct this issue. Any support would be greatly appreciated!

    Then do the following:
    Fix #1
    Launch iPhoto with the Command+Option keys held down and rebuild the library.
    Select the options identified in the screenshot.
    If Fix #1 fails to help continue with:
    Fix #2
    Using iPhoto Library Manager  to Rebuild Your iPhoto Library
    Download iPhoto Library Manager 4 (for OS X 10.6.8 and iPhoto 8.1.2 and later) or iPhoto Library Manager 3 (for OS X 10.5.8 and iPhoto 7.1.5 and earlier) and launch it.
    Click on the Add Library button, navigate to your Home/Pictures folder and select your iPhoto Library folder.
    Now that the library is listed in the left hand pane of iPLM, click on your library and go to the File ➙ Rebuild Library menu (iPLM 3) or Library ➙ Rebuild Library menu (iPLM 4) option.
    In the next  window name the new library and select the location you want it to be placed.
    Click on the Create button.
    Note: This creates a new library based on the LibraryData.xml file in the library and will recover Events, Albums, keywords, titles and comments, but not books, calendars or slideshows. The original library will be left untouched for further attempts at fixing the problem or in case the rebuilt library is not satisfactory.

  • JCORBA Clients created using JDeveloper not running

    Hi!
    This is about a problem that I'm facing creating and running JCORBA clients on JDeveloper 1.1. I followed the example provided in the documentation for JCORBA applications provided on the site www.olab.com.
    1. The server object compiles just fine. I have deployed it successfully in the Oracle Application Server 4.0.
    2. The client application was created and it compiled successfully. However, when I tried to run it, it gave an error while creating the objectFactory. I'm copying the error message below.
    java.lang.UnsatisfiedLinkError: no wrbjidl40 in shared library path
    at java.lang.Runtime.loadLibrary(Compiled Code)
    at java.lang.System.loadLibrary(Compiled Code)
    at
    at org.omg.CORBA.ORB.init(Compiled Code)
    at oracle.oas.jco.ObjectFactory.initializeORB(Compiled Code)
    at oracle.oas.jco.ObjectFactory.<init>(Compiled Code)
    at oracle.oas.jco.ObjectFactory.<init>(Compiled Code)
    at StackClient.main(Compiled Code)
    Ideas anyone, about why this is happening, or what I can do to work around it?
    Thanks,
    Pradipto

    Pradipto,
    It looks to me that either the path or the classpath for the OMX is not set correctly; this can be set either on the server or on the client. My guess is that this problem is on the client. Please use the included demo file and exchange the demo for your application.
    Federico
    @echo off
    REM ***
    REM *** Script to run the Stack demo.
    REM ***
    SETLOCAL
    REM Check for ORAWEB_HOME
    if NOT "%ORAWEB_HOME%" == "" goto oraweb_home_ok
    echo.
    echo Please make sure that ORAWEB_HOME is defined.
    echo.
    goto end
    :oraweb_home_ok
    if NOT "%1" == "" goto argument_ok
    echo.
    echo Pass as argument the URL for the WebServer with the installed application
    echo e.g., rundemo.bat http://machine:port/
    echo.
    goto end
    :argument_ok
    set JAVA=%ORAWEB_HOME%\jdk\bin\java
    set STUBS_JAR=%ORAWEB_HOME%\..\jco\apps\client\myStack\_client.jar
    set DRIVER_DIR=%ORAWEB_HOME%\..\jco\api
    REM Uncomment the two lines below if you're using OMX.
    rem set OMX_JARS=%ORAWEB_HOME%\classes\wrbjidl.jar;%ORAWEB_HOME%\classes\services.jar
    rem set ORB_JARS=%OMX_JARS%
    REM Uncomment the two lines below if you're using Visigenics.
    set VBJ_JARS=%ORAWEB_HOME%\jco\lib\vbjorb.jar;%ORAWEB_HOME%\jco\lib\vbjapp.jar
    set ORB_JARS=%VBJ_JARS%
    set CLASSPATH=ClientStack.jar;%STUBS_JAR%;%DRIVER_DIR%\jcoapi.jar;%ORB_JARS%;%CLASSPATH%
    REM Run the client.
    %JAVA% Demo %1
    :end
    ENDLOCAL

  • Unable to create RAID - "could not unmount disk"

    I'm trying to create a concatenated RAID set from two drives. I can select the drives and individually unmount them using Disk Utility, and they disappear from the desktop. But when I try and create a RAID using the two drives, it tells me it can't because it is unable to unmount the disk.
    Any ideas?

    It appears that this is possible - in one case. If your system drive has been partitioned into multiple volumes, a volume from that drive can be used as part of a RAID (along with a volume from another drive) if you did not boot from that drive. I haven't tried booting from the Tiger DVD, but I may, just for fun. Follow all the other advice in this and similar threads though - don't expect a mirror (RAID 1) to provide backup, and don't build a RAID 0 stripe using the partition that contains your boot OS.
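    If the Disk Utility GUI keeps refusing to unmount, it can help to do the same thing from the command line. A minimal sketch, assuming the two members show up as disk1 and disk2 in diskutil list (the identifiers and set name are placeholders, not from the post):
        diskutil list                      # find the disk identifiers of the two member drives
        diskutil unmountDisk /dev/disk1    # unmount every volume on each member
        diskutil unmountDisk /dev/disk2
        diskutil appleRAID create concat MyConcatSet JHFS+ disk1 disk2
    On Tiger-era systems like the ones in this thread the older spelling was diskutil createRAID concat MyConcatSet HFS+ disk1 disk2; run diskutil with no arguments first to see which form your OS supports.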

  • Unable to create restore points not enough space on the disk. 0x80070070

    When trying to create a restore point I get error "There is not enough space on the disk. 0x80070070"  Which is wrong I have tons of space.  How do I correct this problem.  Thanks

    chief444 wrote: When trying to create a restore point I get error "There is not enough space on the disk. 0x80070070"  Which is wrong I have tons of space.  How do I correct this problem.  Thanks
    Hello chief444, You might try going to this Link , and see if the information there is what you require.
    Please click the White Kudos star on the left, to say thanks.
    Please mark Accept As Solution if it solves your problem.

  • QMASTER hints 4 usual trouble (QM NOT running/CLUSTEREd nodes/Networks etc

    All, I just posted this with some hints & workarounds for very common issues that people on this forum have and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've hit many over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as at MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. They are in NO special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4, Compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath... I guess I will know soon if it has.
    For most FCP, COMPRESSOR, SHAKE and MOTION workflows:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs getting very busy for a short while; this is the APPLE FILE SYSTEM task... I guess it's doing 'Spotlight stuff'. It goes away after a few minutes. (A small shell sketch of mounting and checking the shared volumes follows this list.)
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the pleasure gained from initially using clustering (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE or your other interface (AirPort etc., which will generally be way too slow and useless); Logical means keeping all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". On my current QUAD I set this to use BUILT-IN ETHERNET 1, and on the MBP DCs I set this to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET interface (for example) for your cluster nodes and the service controller. For example 3.1.1.x... it will all connect fine.
    • PHYSICAL NETWORKS: as above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) if you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), but it showed this was not stable overall.
    • for the cluster controller node, MAKE SURE the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in Console.app [see below]). IF you have an SSAFS like XSAN™ then just put this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So, in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead, EXPORT FROM the SEQUENCE in the BROWSER - consistent results.
    • FCP - "media missing" messages on EXPORT to COMPRESSOR: seems to be a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary tree in the FCP PROJECT BROWSER. Simply put, if your browser has Bin A (contains Bin B (contains Bin C (contains Sequence X))), "EXPORT TO COMPRESSOR" will FAIL (won't work) if you use it from an FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT, select the SEQUENCE you want, and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum, I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): some things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter markers when it is imported into DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3 - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on the same service node. For the jobs at hand, just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterB, herclusterC) for each transcode job... anyway, for me, in the time spent resolving all this I could have TRANSCODED all of it on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your first port of call for diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others including servicecontroller.com.apple.qmaster.executorX.log (for each cpu/core and node) and qmasterca.log. All of these are worth a look, and for me they helped solve 90% of my qmaster errors and failures.
    • MOTION 3 - FWIW, EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME... it seems MOTION is writing stuff out to /var/spool/qmaster.
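    As mentioned in the first point above, here is a small shell sketch of mounting a shared volume and checking that every node sees the same paths before you submit a batch (the server, share and folder names are placeholders, not from the post):
        mkdir -p /Volumes/MediaShare
        mount_afp afp://user:pass@fileserver/MediaShare /Volumes/MediaShare   # same effect as Finder > Go > Connect to Server (cmd+K)
        ls /Volumes/MediaShare/source_footage                                 # run on EACH node: the listing should be identical
        df -h | grep MediaShare                                               # confirm the volume is really mounted, not a local stub folder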
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY... the answer is usually there somewhere.
    OTHERWISE, try these steps...
    You'll feel that QMASTER is in trouble when you:
    • see that the QMASTER ICON at the top of the screen says 'NO SERVICES' even though that node is started, and
    • the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an 'APPLY' (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning that one was shut down or had a network failure)... all this means it's going to get worse and worse. So DON'T submit any more work to QMASTER... best to count your gains and follow this list next.
    (a) in COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR menu) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) if that's not OK, then on EACH node in that cluster STOP QMASTER (System Preferences/QMASTER/Setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+CLICKing the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the action of REMOVING, or, in the case where the CLUSTER CONTROLLER node is "RESET", of terminating the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Go restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using Console.app) for any FILE MISSING, FILE not found or FILE ERROR messages. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty, other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/log... try to resolve any issues you can see (mostly VOLUME or FILE path issues, in my experience).
    (d) if still no joy, try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster and also the directory that you specified above for the controller to share the clustering (see the shell sketch after these steps). For SHAKE issues, do the same (note also where the SHAKE shared cluster file path is - it can also be specified in the RENDER FileOut node's prompt).
    (e) if all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not already stopped (if the status/mode is "STOPPING" then QMASTER is truly buggered). DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and sudo-restart them, but that is dodgy at best because they never seem to terminate cleanly; kill -9 or FORCE QUIT is what one ends up doing, and then STILL having to restart.
    (f) after the restart, perform the steps from (b) again and it will usually (but not always) be right after that.
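    A minimal shell sketch of steps (c) and (d) above. The log and spool paths come from this post itself; run the cleanup only with all nodes STOPPED, and treat the grep patterns as assumptions:
        grep -ri -e "file missing" -e "not found" -e "error" /Library/Logs/Qmaster | tail -50   # step (c): scan the controller's Qmaster logs
        sudo rm -rf /var/tmp/qmaster/* /var/spool/qmaster/*                                     # step (d): clear stale cluster state
    Then reset and restart the services per step (b), and redefine the cluster in Apple Qmaster Administrator if the controller was reset.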
    Lastly - here are some posts I have made that may help others with QMASTER 2.3... and not the NEW QMASTER as at May 2007...
    Topic: "qmasterd not running" - how this happened and what we did to fix it - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? - http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Cannot create cluster

    Hi!
    I installed JES2005Q1 AppServer 8.1 and wanted to create a cluster. But:
    bash-3.00# ./asadmin create-cluster
    CLI001 Invalid Command, create-cluster. Use "asadmin help" for a list of valid commands.
    create-cluster is not a valid command. Also commands concerning node agents are missing.
    I did not install Pointbase & Samples, might this be the cause for this?
    Also in the administration web interface "Clusters", "Node Agents" and "Stand-Alone Instances" are missing in the tree.
    thx for any ideas,
    Chris

    Ok, found it. It was simply the Standard Edition installed, not the Enterprise Edition...
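    A quick way to confirm what was found here, i.e. whether the installed edition exposes the cluster commands at all (the install path below is an assumption):
        cd /opt/SUNWappserver/bin           # wherever asadmin lives in your install
        ./asadmin version                   # shows which edition/build is installed
        ./asadmin help | grep -i cluster    # on Standard Edition, create-cluster is not listed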

  • Windows 2008 r2 Cluster not starting - "unable to create security manager worker queues"

    Hello, following a power outage, we got a serious cluster error preventing the start of the cluster.
    We are trying to interpret the only four lines the cluster.log generates :
    00000330.000016cc::2014/09/26-10:44:06.348 ERR   [WTQ] bogus file creation failed, 2
    00000330.000016cc::2014/09/26-10:44:06.348 ERR   [WTQ] bogus file creation failed, 2
    00000330.000016cc::2014/09/26-10:44:06.348 ERR   [CS] Unable to create SecurityManager worker queues, 2
    00000330.000016cc::2014/09/26-10:44:06.363 ERR   Error 6
    AND if starting clussvc manually :
    Got ERROR_FILE_NOT_FOUND(2)' because of 'Error while creating the Security Manag
    er's Thread Pool' in
        000007fe:fd69940d( ERROR_MOD_NOT_FOUND(126) )
        00000000:001ff190( ERROR_MOD_NOT_FOUND(126) )
    We suspect a DLL problem (because of "mod not found"), but we are unable to find the DLLs involved, even with Process Monitor.
    The clusdb hive seems OK.
    The situation is serious. Can anybody help, please?

    Hi RodV,
    This error is usually caused by the cluster service failing to open a handle to the \NUL device; Device Manager shows the device instance in an error state.
    Please check whether the following registry value still exists; if not, please back up your current registry and then add it.
    HKEY_LOCAL_MACHINE\SYSTEM\CURRENTCONTROLSET\ENUM\ROOT\LEGACY_NULL\0000\CONTROL
    ActiveService REG_SZ Null
    I am glad to be of help to you!
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • 2008 Failover cluster unable to create computer account

    Hello,
    I have created a 2008 R2 Failover cluster and I am trying to add a Fail over File server to this.
    I get the dreaded
    Cluster network name resource 'OfMaClusterFS' failed to create its associated computer object in domain 'xxx.domain' for the following reason: Unable to create computer account.
    The text for the associated error code is: Access is denied.
    Please work with your domain administrator to ensure that:
    - The cluster identity 'OFMACLUSTER$' can create computer objects. By default all computer objects are created in the 'Computers' container; consult the domain administrator if this location has been changed.
    - The quota for computer objects has not been reached.
    - If there is an existing computer object, verify the Cluster Identity 'OFMACLUSTER$' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.
    I have created clusters frequently in the past, on my own Domains that I am a domain admin of.  Now I am trying to make one on our larger corporate domain that I am not a domain admin of and get this error.
    By default, domain users cannot add computer accounts to our domain. I do, however, have a limited account that can add computers to the domain... but I have tried all the tricks I can think of to try and add the network name to AD, with no luck.
    I have tried running the cluster service with this account, but it is still trying to use the OFMACLUSTER$ identity to create the network name. I have tried manually creating the network name using my limited account, but that doesn't work either,
    same error. I don't have the ability to change permissions on the computer name I added for the network name to AD.
    I have raised a ticket with our wintel team to try and get them to help, but they aren't exactly the most responsive bunch. I'm just wondering what the best way around this problem is if I am not a domain admin and I can't make the changes I need, or
    what concise instructions I can give to the domain admins so that they can help me out without them saying that it is a security breach etc.
    I would appreciate any advice on this as it's now urgent; it's also something I will have to do fairly regularly in the future and I don't want to get caught in this situation again.

    Hi jogdial,
    To create a cluster, the minimum permission required is administrative permissions on the servers that will become cluster nodes, plus Create Computer objects and Read All Properties permissions in the container that is used for computer accounts in the domain.
    If you create the cluster name account (cluster name object) before creating the cluster (that is, prestage the account), you must give it the Create Computer objects and Read All Properties permissions in the container that is used for computer accounts in the domain. You must also disable the account, and give Full Control of it to the account that will be used by the administrator who installs the cluster.
    The related KB:
    Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory
    http://technet.microsoft.com/en-us/library/cc731002(v=ws.10).aspx
    More information:
    How to Create a Cluster in a Restrictive Active Directory Environment
    http://blogs.msdn.com/b/clustering/archive/2012/03/30/10289577.aspx
    I’m glad to be of help to you!
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • SAPOSCOL not running in MS Cluster

    Hi, gurus:
    We have a problem with SAPOSCOL in a SAP ECC 6.0 system (SAP ECC 6.0 + NetWeaver 7.00 + Oracle 10.2 + Windows Server 2003 R2 Enterprise x64 Edition) running over a MS cluster:
    Transactions OS06/ST06 show no data, and they show an info message which states: SAPOSCOL not running? (shared memory not available). When we checked this issue we noticed that, in fact, there is no saposcol.exe task running on any node.
    But when we try to start the service (both using the Microsoft services console and cmd commands), although we can see the process running on the node which owns all the resources, SAP seems not to notice it. The information the system shows in ST06 > Operating System Collector > Status is the following:
    Interval                            0 sec.
    Collector Version:
    Date/time                           05.09.2008 16:55:01
    Start of Collector
    Status report
    Collector Versions
      running                           COLL 20.95 700 - 20.64 NT 07/10/17
      dialog                            COLL 20.95 700 - 20.65 NT 08/02/06
    Shared Memory                       attached
    Number of records                   575
    Active Flag                         active (01)
    Operating System                    Windows NT 5.2.3790 SP 2 BL-SAP2 4x AMD64 Level 1
    Collector PID                       0 (00000000)
    Collector                           not running (process ID not found).
    Start time coll.                    Thu Jan 01 01:00:00 1970
    Current Time                        Fri Sep 05 16:55:01 2008
    Last write access                   Mon Sep 01 11:28:23 2008
    Last Read Access                    Fri Sep 05 15:54:00 2008
    Collection Interval                 10 sec (next delay).
    Collection Interval                 10 sec (last).
    Status                              read
    Collect Details                     required
    Refresh                             required
    Header Extention Structure
    Number of x-header Records          1
    Number of Communication Records     60
    Number of free Com. Records         60
    Resulting offset to 1.data rec.     61
    Trace level                         3
    Collector in IDLE mode?             NO
      become idle after 300 sec without read access.
      Length of Idle Interval           60 sec
      Length of norm. Interval          10 sec
    But saposcol.exe is running with a certain PID on the same node as SAP and Oracle, under user SAPService<SID>.
    We have tried to run saposcol in several ways (as noted before: from the Microsoft services console, from the cmd line using "net start saposcol", using the saposcol under C:\WINDOWS\SapCluster and the one under
    F:\usr\sap\PRD\sys\exe\run, from the two nodes, accessing the cluster through several IPs...) and tried the commands saposcol -c and saposcol -k, but we cannot get saposcol to run. Moreover, we haven't found any log information. The only log we (and SAP) could find is the one located in C:\WINDOWS\SapCluster\dev_coll.
    This log remains frozen at September 1st:
          SAPOSCOL version  COLL 20.95 700 - 20.64 NT 07/10/17, 64 bit, multithreaded, Non-Unicode
          compiled at   Feb  3 2008
          systemid      562 (PC with Windows NT)
          relno         7000
          patch text    COLL 20.95 700 - 20.64 NT 07/10/17
          patchno       146
          intno         20050900
          running on    BL-SAP2 Windows NT 5.2 3790 Service Pack 2 4x AMD64 Level 15 (Mod 65 Step 3)
    12:04:16 01.09.2008   LOG: Profile          : no profile used
    12:04:16 01.09.2008   LOG: Saposcol Version  : [COLL 20.95 700 - 20.64 NT 07/10/17]
    12:04:16 01.09.2008   LOG: Working directory : C:\WINDOWS\SAPCLU~1
    12:04:16 01.09.2008   LOG: Allocate Counter Buffer [10000 Bytes]
    12:04:16 01.09.2008   LOG: Allocate Instance Buffer [10000 Bytes]
    12:04:17 01.09.2008   LOG: Shared Memory Size: 71898.
    12:04:17 01.09.2008   LOG: Connected to existing shared memory.
    12:04:17 01.09.2008   LOG: MaxRecords = 575 <> RecordCnt + Dta_offset = 614 + 61
    12:04:22 01.09.2008 WARNING: WaitFree: could not set new shared memory status after 5 sec
    12:04:22 01.09.2008 WARNING: Cannot create Shared Memory
    Kernel Info:
    Kernel release    700
    Compilation        NT 5.2 3790 Service Pack 1 x86 MS VC++ 14.00
    Sup.Pkg lvl.       146
    ABAP Load       1563
    CUA load           30
    Mode                opt
    Can anyone shed some light on the subject?
    Thank you very much and kind regards
    Edited by: Jose Enrique Sepulveda on Sep 6, 2008 2:10 AM
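    For reference, the commonly documented way to reset the collector from the command line is roughly the following. This is only a sketch built around the -k option already mentioned above; the exact options and dialog commands can differ by release, so check the relevant SAP notes for your kernel first:
        saposcol -k      # stop the running collector
        saposcol -d      # dialog mode: at the prompt, 'clean' releases the stuck shared memory segment, then 'quit'
        saposcol -l      # launch the collector again, then re-check dev_coll and ST06
    Because this runs on the node that currently owns the cluster resources, it is worth confirming which saposcol binary (C:\WINDOWS\SapCluster vs. the kernel directory) is actually first in the path before running it.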

    Dear bhaskar:
    Thanks for your reply. We have considered balancing the system to the other node or reboot the system to free resources, in order to re-create the shared memory, but in the past, the balancing process (move resources from one node to the other) has caused problems. Since this is a critical system, stopping (or balancing) is not an option right now, and updating the kernel requires an ABAP stack reboot plus the kernel change : any changes in system configuration requires a longer approval/planning process than a reboot.
    Moreover, the OS collecting system and its display in OS06/ST06 has worked fine until now.
    Does anyone knows if a reboot has solved this kind of problem in a similar situation?
    Thanks in advance
    José Enrique
