No Applications or Tasks Working

The kid's eMac has all of a sudden stopped performing any tasks or opening applications. The system boots up just fine. If I try to open Explorer or any other application, the system sits and spins. Nothing happens. Same thing if I try to rebuild the desktop or empty the trash. HELP!!
Denbeach

No guarantee, but try this.
Trash the following Preference files & reboot.
ASLM
Finder
Mac OS
System
Doing this will change some of your Control Panel settings.
Cheers, Tom

Similar Messages

  • Catalyst Control Center: Host application has stopped working

    I keep getting the error:
    Catalyst Control Center: Host application has stopped working
    A problem caused the program to stop working correctly.  Windows will close the program and notify you if a solution is available.
    The only button on this window is "Close program"
    This error started on 01/04/15 (via the event viewer).
    The video driver for my AMD Radeon HD 7520G from AMD is dated 9/15/2014 with a version of 14.301.1001.0. If I go to the HP support site, the driver it shows is dated 11/11/2013 with a version of 13.152.1.1000.

    Hi @cjssas 
    Welcome to the HP Support Forums!
    I understand that you are getting an application error from your Catalyst Control center. I am happy to assist with this.
    The first thing I would recommend trying is a system restore to a point before the problem began. It is possible that some other installed software, driver, or system update may be the cause. So take a look at this:
    Using Microsoft System Restore (Windows 8)
    If that does not work, then I would download the driver again, uninstall the existing driver, restart the computer and then install the driver once more and see if the problem recurs.
    Incidentally, the support website shows the latest HP-customized versions of all the drivers for that notebook. HP customizes them to be compatible with the other hardware in the notebook. It is always possible to use a different driver, but there are no guarantees it will work.
    I hope this helps.
    Malygris1
    I work on behalf of HP
    Please click Accept as Solution if you feel my post solved your issue, it will help others find the solution.
    Click Kudos Thumbs Up on the right to say “Thanks” for helping!

  • NI-DAQmx task works in MAX or DAQ Assistant test panel but not in LabVIEW

    I am attempting to read a single AI channel from a PCI-6024E card via an SCB-68. I have created an NI-DAQmx Analog Input Voltage Task in MAX for this channel, sampling in continuous acquisition mode at 100 kHz, 10000 samples at a time, with RSE terminal config. If I use the Test feature from MAX, the channel acquires data as expected.
    In LabVIEW, I call this task using a DAQmx Task Name Constant. If I right-click this constant and select "Edit Task", the DAQ Assistant opens and I can use the Test feature from the DAQ Assistant to see that the data is still being acquired as expected.
    However, when I try to programmatically read this channel in LabVIEW using the VI "DAQmx Read (Analog Wfm 1Chan NSamp).vi", the VI returns a constant DC value of 500 mV, which I know is incorrect (I can monitor the signal across the two terminals in the SCB-68 with a DMM to know that the signal coming in varies as expected, and as I read using the test panels). This erroneous reading occurs even if I make a new VI, drop the task name constant on the diagram, right-click the task name constant and select "Generate Code - Example" and let LabVIEW create its own acquisition loop (which is very similar to what I already set up, using the "DAQmx Read" VI).
    Any ideas why the Test Panels work correctly but the LabVIEW code does not?
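    For reference, the task configuration described above maps roughly onto the following NI-DAQmx Python API sketch; it is only an illustration, and the device/channel name "Dev1/ai0" is an assumed placeholder, not taken from the post.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, TerminalConfiguration

    # Continuous AI voltage acquisition: 100 kHz sample clock, reading 10000
    # samples per call, RSE terminal configuration. "Dev1/ai0" is a placeholder.
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(
            "Dev1/ai0", terminal_config=TerminalConfiguration.RSE)
        task.timing.cfg_samp_clk_timing(
            rate=100000,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=10000)
        data = task.read(number_of_samples_per_channel=10000)
        print(len(data))  # expect 10000 voltage samples

    If a sketch like this returns varying voltages while the LabVIEW VI still reads a constant 500 mV, that points at the LabVIEW wiring rather than the task or the hardware.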

    Hello bentnail,
    I'm not sure why the test panels are reading the value correctly but the LabVIEW code does not, but there are a couple of things we can try.
    1) What happens if you just use the DAQ Assistant and place it on the block diagram? Does it read out the correct values?
    2) Try running a shipping example that comes with LabVIEW. "Acq&Graph Voltage-Int Clk.vi" should work well.
    3) What kind of signal are you expecting to read (peak-to-peak voltage, frequency, etc.)?
    Thanks,
    E.Lee
    Eric
    DE For Life!

  • Execute Task works in one area but not the other

    I have two Execute SQL Tasks. Each executes the same stored procedure from the SQL Statement property with a different parameter, and they insert into the same table. One is at the end of the package and only runs if a rollback is done, and it works fine. The other runs when a script is executed and returns Failure. When I run the package and the script fails, the Execute SQL Task gets a green check mark but the table is not updated.
    I've messed with the properties and still no luck.
    I can right-click on the SQL Task, choose "Execute Task", and it works.
    I've run out of ideas.

    uspGenerateTasksForFailedSSISPackage is a stored procedure that inserts into a table. 
    ALTER PROCEDURE [dbo].[uspGenerateTasksForFailedSSISPackage]
        @strTaskDescription varchar(max) = NULL
        WITH RECOMPILE
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE
            @dtEnteredDateAndDueDate date = GETDATE(),
            @intTodoCategoryCode int = 13,
            @strEnteredBy varchar(20) = 'AUTO';

        -- Note: the original listed six columns but supplied only five values;
        -- reusing @strEnteredBy for UserID below is an assumption to make the insert valid.
        INSERT INTO ToDos (UserID, EnteredBy, EnteredDate, DueDate, Description, CategoryCode)
        VALUES (@strEnteredBy, @strEnteredBy, @dtEnteredDateAndDueDate, @dtEnteredDateAndDueDate, @strTaskDescription, @intTodoCategoryCode);
    END

  • Fixed Duration Tasks work hours not updating properly

    I have a task of 2 days duration which is fixed duration, with 1 resource assigned. When it is 30% complete, I add 2 more resources to it because, even though the duration stays the same, the work hours need to be increased. But I don't see any impact on work even when the 2 resources are added. Am I missing something?

    Just check whether the Effort Driven check box is checked for that task; in that case, whatever hours are in the Work field get distributed equally between all resources (in your case, the newly added and the existing resource). Uncheck that box and then assign the resources; this should fix your calculations so that the expected hours are added to the Work field.
    Sapna Shukla

  • CS5 Idle Tasks work great until they stop

    I've got an Adobe InDesign CS5 ExtendScript (javascript for the search engines) that triggers some idle tasks when the users do certain things. It works great most of the time. What I'm finding is that at some point the idle tasks stop and they just queue up. It doesn't seem to matter how long Adobe InDesign has been running.
    The code is constructed so that when it tries to create a new task, it checks whether an existing task to do the same thing is already in the queue. If it already exists in the queue, it doesn't add it again. So in that regard I'm not creating an infinite list of tasks. But at some point it just stops processing the idle tasks in the queue, sometimes with just a single task to process.
    InDesign itself seems fine. But no more idle tasks will fire. I've checked the Task Manager (it is Windows XP) and InDesign doesn't seem to be doing anything to prevent getting in some idle tasks.
    Any idea how I can figure out: A) what stopped it. B) How to start it back up?
    jsw

  • Task working in foreground, not working in background

    Hi Experts,
    I am facing a strange problem.
    I have created a Z BOR object in which I have written a method to fetch data from the database.
    I have added a task in a workflow which calls this method.
    When the task runs in the foreground, it fetches the data perfectly.
    But when I make the task a background task, it does not fetch 2-3 particular fields.
    I am not getting why this is happening.
    Can anybody give me a solution for this?
    Thanks
    Sameer

    Hi,
    Check the method of the BO: double-click on it, go to the General tab, and make sure the Dialog check box is not checked.
    Regards
    Dev.

  • Built-in wlst ant task does not work in weblogic 10.3.1

    Hi,
    We have an installer script that deploys an ear file to a WebLogic managed server. The script also invokes the built-in wlst ant task to bounce the managed server. However, in version 10.3.1 the wlst task seems to be broken. I get this error:
    [echo] [wlst] sys-package-mgr: can't create package cache dir, '/u00/webadmin/product/10.3.1/WLS/wlserver_10.3/server/lib/weblogic.jar/./javatmp/wlstTemp/packages'
    [echo] [wlst] java.io.IOException: No such file or directory
    [echo] [wlst] at java.io.UnixFileSystem.createFileExclusively(Native Method)
    [echo] [wlst] at java.io.File.checkAndCreate(File.java:1704)
    [echo] [wlst] at java.io.File.createTempFile(File.java:1792)
    [echo] [wlst] at java.io.File.createTempFile(File.java:1828)
    [echo] [wlst] at com.bea.plateng.domain.script.jython.WLST_offline.getWLSTOfflineInitFilePath(WLST_offline.java:240)
    [echo] [wlst] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [echo] [wlst] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [echo] [wlst] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [echo] [wlst] at java.lang.reflect.Method.invoke(Method.java:597)
    [echo] [wlst] at weblogic.management.scripting.utils.WLSTUtil.getOfflineWLSTScriptPath(WLSTUtil.java:63)
    [echo] [wlst] at weblogic.management.scripting.utils.WLSTUtil.setupOffline(WLSTUtil.java:214)
    [echo] [wlst] at weblogic.management.scripting.utils.WLSTInterpreter.<init>(WLSTInterpreter.java:133)
    [echo] [wlst] at weblogic.management.scripting.utils.WLSTInterpreter.<init>(WLSTInterpreter.java:75)
    [echo] [wlst] at weblogic.ant.taskdefs.management.WLSTTask.execute(WLSTTask.java:103)
    [echo] [wlst] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
    Obviously that is not a valid directory...so I am wondering what it is trying to do, and why. The wlst task worked perfectly in 10.3.0. No changes were made when attempting to run the script against 10.3.0 and 10.3.1, which tells me that something is different with the 10.3.1 setup. Here is the ant code I am running:
    <target name="init-taskdefs">
      <taskdef resource="net/sf/antcontrib/antcontrib.properties">
        <classpath>
          <pathelement location="ant-ext/ant-contrib.jar" />
        </classpath>
      </taskdef>
      <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy" />
      <taskdef name="wlst" classname="weblogic.ant.taskdefs.management.WLSTTask" />
    </target>
    <macrodef name="wlShutdownServer">
      <attribute name="adminUser" default="${deploy.admin.username}" />
      <attribute name="adminPassword" default="${deploy.admin.password}" />
      <attribute name="adminUrl" default="${deploy.admin.url}" />
      <attribute name="serverTarget" />
      <sequential>
        <trycatch property="server.error">
          <try>
            <wlst failonerror="true"
                  arguments="@{adminUser} @{adminPassword} @{adminUrl} @{serverTarget}">
              <script>
    adminUser=sys.argv[0]
    adminPassword=sys.argv[1]
    adminUrl=sys.argv[2]
    serverTarget=sys.argv[3]
    connect(adminUser,adminPassword,adminUrl)
    target=getMBean("/Servers/"+serverTarget)
    if target == None:
        target=getMBean("/Clusters/"+serverTarget)
        type="Cluster"
    else:
        type="Server"
    print 'Shutting down '+serverTarget+'...'
    shutdown(serverTarget,type,'true',15,force='true')
    print serverTarget+' was shut down successfully.'
              </script>
            </wlst>
            <!-- setDomainEnv.sh must have been called to set DOMAIN_HOME. Remove all leftover .lok files
                 to allow server to start back up again. -->
            <echo message="Deleting any lok files that have not been removed..." />
            <delete failonerror="false">
              <fileset dir="${env.DOMAIN_HOME}/servers/@{serverTarget}" includes="**/*.lok"/>
            </delete>
          </try>
          <catch>
            <fail message="@{serverTarget} shutdown failed. ${server.error}" />
          </catch>
          <finally/>
        </trycatch>
      </sequential>
    </macrodef>
    Any help would be appreciated. Thanks!

    Well, it looks like passing something like "-Djava.io.tmpdir=/var/tmp/javatmp/`date +%Y%m%d`" to ant did the trick. I had to make sure that directory existed first, otherwise it threw a java.io.IOException.
    I still don't understand what changed between 10.3.0 and 10.3.1 to necessitate this.
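    As a minimal illustration of that workaround, the same shutdown logic can also be run as a standalone WLST (Jython) script with the temp directory overridden on the command line; the URL, credentials and server name below are placeholders, not values from this thread.

    # shutdown_server.py -- a sketch only; after sourcing setWLSEnv.sh, run for example:
    #   java -Djava.io.tmpdir=/var/tmp/javatmp weblogic.WLST shutdown_server.py
    # (the temp directory must already exist and be writable)
    adminUrl = 't3://localhost:7001'            # placeholder admin URL
    serverTarget = 'ManagedServer1'             # placeholder server or cluster name
    connect('weblogic', 'welcome1', adminUrl)   # placeholder credentials
    target = getMBean('/Servers/' + serverTarget)
    if target == None:
        type = 'Cluster'
    else:
        type = 'Server'
    print 'Shutting down ' + serverTarget + '...'
    shutdown(serverTarget, type, 'true', 15, force='true')
    print serverTarget + ' was shut down successfully.'
    disconnect()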

  • Task created to stop and start Health service not working.

    We have multiple servers being grayed out, so we created tasks to stop and start the Health Service on SCOM 2007 R2. We created two tasks, one to stop and another to start. The STOP task works, but the START task keeps running with no result, and when I check in Services.msc the service is still stopped. Below are the screen shots.
    Task created to STOP the health service, which is running.
    Screen shot of the service start task creation, which is NOT working.
    The same was created for another service, Windows Audio, which worked for both stop and start (created as separate tasks).
    We also tried the "recycle health service and cache" task, which is also failing.
    Can anyone please help? The issue is on both SCOM 2007 R2 and 2012 R2. Is there any other way to touch the health service?
    We are facing the issue only with HealthService.

    Hi All,
    Thank you for all your answers.
    @dktoa -
    You are right, we have to concentrate on the servers to determine why they go grey (all in one site). But before we do that I felt I should find a temporary solution, then go into a deeper investigation to solve the issue.
    @Yan LI: As per the link you provided above, I overrode the servers, but still the health services were not starting or restarting. When I created a custom group, added the servers in that site to that group, enabled the override on Restart Health Service for that specific group, and pointed it towards that group, then it worked.
    Thanks All.

  • Task Scheduler doesn't work

    Slightly misleading title. It does work when you set the tasks up, and it works fine for a while. Then all the tasks begin failing at some random point.
    I have several small EXEs and a piece of kit called SyncBackup on the server. These are the tasks to be executed.
    Sticking with the EXEs I built myself which are .NET apps, these are lightweight apps which connect to a database and write to a log file.
    RDP in as Administrator and double-click the EXE - runs fine.
    Set up a Scheduled Task to run it every 5 minutes - fine. Right click, run task - executes perfectly. Job complete, last run status fine.
    Go away and leave the server for a few days - runs fine.
    Then it just stops working with code 0xC0000142. For every single Scheduled Task. Except one which I think returns 0xC01.
    Task History shows "Action completed" and "Task Completed" as if it's all working perfectly. But it isn't. No errors logged or evident there.
    I have these all set up to run as the Administrator for testing purposes.
    Why do these tasks run fine when set up and break "behind your back" at some later stage? What does this error code mean? How do I get these working and importantly, *stay working*?
    I've seen stacks of threads like this with no resolution. A common issue appears to be the Start In directory specification.
    It doesn't make any difference whether you specify the Start in directory or not. There are no spaces in the file names or paths anyway so quote marks irrelevant.
    And, these tasks do work initially - they just break later on.
    I've come across a piece of third party kit which emulates what the Windows Task Scheduler does and am tempted to install and use that instead.
    But the Windows Task Scheduler always used to work... ;)

    I was waiting until this evening to see if restarting the server would fix this.
    And it has.
    All the tasks work perfectly - those failing with both the error codes, so that's my own apps and also the SyncBackup tasks.
    For now...
    Digging around some more, I'm led to believe this might have something to do with "Desktop Heap" size.
    http://blogs.msdn.com/b/ntdebugging/archive/2007/01/04/desktop-heap-overview.aspx
    Thing is, this never used to happen until very recently on one of my servers. I migrated to a new one (not because of this), mystified by the problem and hoping it would vanish. And it did, at first.
    One other server I have which runs SyncBackup tasks doesn't have this issue.
    All Windows Server 2008 R2 and all up to date with hotfixes.
    I'm reluctant to modify the Desktop Heap size without a more detailed understanding and still mystified as to why this all worked perfectly until about three weeks ago.

  • Getting Open MQ and Mule 2.0 to work together

    Hi!
    I would like to ask for some help getting Open MQ to work with Mule 2.0. I'm quite new to both technologies, and I can't get them to work together. Here is how I've tried so far:
    What I'm trying to achieve first is this: there are 2 queues in my Open MQ, and if something arrives in one queue, Mule should get it from that queue and put it in the other queue.
    I've created a configuration file for Mule:
    (sorry that it looks really crappy; for some reason I couldn't really get it to show nicely on this forum - however I edited it, it always made each line start on the left)
    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesource.org/schema/mule/core/2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:jms="http://www.mulesource.org/schema/mule/jms/2.0"
          xmlns:spring="http://www.springframework.org/schema/beans">
      <!-- Uncomment to download xsds from web instead of using the Eclipse XML Catalog.
      xsi:schemaLocation="
          http://www.mulesource.org/schema/mule/core/2.0 http://www.mulesource.org/schema/mule/core/2.0/mule.xsd
          http://www.mulesource.org/schema/mule/jms/2.0 http://www.mulesource.org/schema/mule/jms/2.0/mule-jms.xsd
          http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">
      -->
      <model name="JMSProba">
        <service name="JMS">
          <inbound>
            <jms:inbound-endpoint queue="MyQueue" synchronous="false"/>
          </inbound>
          <component class="disp.Dispatcher"/>
          <outbound>
            <outbound-pass-through-router>
              <jms:outbound-endpoint queue="out"/>
            </outbound-pass-through-router>
          </outbound>
        </service>
      </model>
    </mule>
    The service component class pretty much doesn't do anything for now, but is like this:
    package disp;

    public class Dispatcher {
        public Object dispatch(Object o) {
            return o;
        }
    }
    I'm using the Mule IDE, and unfortunately I can't even test whether this works or not, because when it starts the Mule server, I get an error which I don't understand:
    org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'mule'.
    I really have no idea why I get this exception. Does anyone know what I did wrong in my configuration file? Also, can anyone tell me if what I'm trying to do should look something like this, or if I'm doing it all wrong? If anyone can help me get this all to work, I'd really appreciate it.
    I haven't seen any info on how to configure the Open MQ Connector, so I'm not sure how to do it. I've tried this:
    <connector name="jmsConnector" className="org.mule.providers.jms.JmsConnector">
      <properties>
        <property name="specification" value="1.1"/>
        <property name="connectionFactoryJndiName" value="ConnectionFactory"/>
        <property name="jndiInitialFactory" value="com.sun.jndi.fscontext.RefFSContextFactory"/>
        <property name="jndiProviderUrl" value="file:///C:/Temp"/>
        <property name="jndiDestinations" value="true"/>
        <property name="forceJndiDestinations" value="true"/>
        <map name="connectionFactoryProperties">
          <property name="brokerURL" value="vm://localhost:7676"/>
        </map>
      </properties>
    </connector>
    but it doesn't work, the error tooltip says "Invalid content was found starting with element 'connector'", and then the next error is at the first 'property' element, saying "Invalid content was found starting with element 'property'. One of '{"http://www.springframework.org/schema/beans":entry}' is expected. "
    Can someone please also help me with the connector creation? Since this had errors, I had to cut it out when trying to run my config file, that's why you can't see it in it.
    My next step (if this would work) would be to use this on 2 different JMS's - Mule gets the message from a queue from a certain JMS, then sends it to the queue of a different JMS. Of course I can't start working on this until my first task works, but I was wondering, how to create 2 different connectors and then use different connector for each of the endpoints.
    Thanks for any help in advance!
    Edited by: kissziszi on Jul 25, 2008 3:06 AM
    Edited by: kissziszi on Jul 25, 2008 3:09 AM

    Hi Pawan!
    Thank you a lot for your help! Unfortunately it's still not working. Here is my current config file:
    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesource.org/schema/mule/core/2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:spring="http://www.springframework.org/schema/beans"
          xmlns:stdio="http://www.mulesource.org/schema/mule/stdio/2.0"
          xmlns:jms="http://www.mulesource.org/schema/mule/jms/2.0"
          xmlns:vm="http://www.mulesource.org/schema/mule/vm/2.0"
          xsi:schemaLocation="
              http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
              http://www.mulesource.org/schema/mule/core/2.0 http://www.mulesource.org/schema/mule/core/2.0/mule.xsd
              http://www.mulesource.org/schema/mule/jms/2.0 http://www.mulesource.org/schema/mule/jms/2.0/mule-jms.xsd
              http://www.mulesource.org/schema/mule/stdio/2.0 http://www.mulesource.org/schema/mule/stdio/2.0/mule-stdio.xsd
              http://www.mulesource.org/schema/mule/vm/2.0 http://www.mulesource.org/schema/mule/vm/2.0/mule-vm.xsd">
      <model name="JMSProba">
        <service name="JMS">
          <inbound>
            <jms:inbound-endpoint name="MyQueue" address="jms://queue:MyQueue" synchronous="false" connector-ref="jmsConnector"/>
          </inbound>
          <component class="disp.Dispatcher"/>
          <outbound>
            <outbound-pass-through-router>
              <jms:outbound-endpoint name="out" address="jms://queue:out" connector-ref="jmsConnector"/>
            </outbound-pass-through-router>
          </outbound>
        </service>
      </model>
      <jms:connector name="jmsConnector" connectionFactory-ref="openMQ"
                     createMultipleTransactedReceivers="false"
                     numberOfConcurrentTransactedReceivers="1" specification="1.1">
        <spring:property name="jmsSupport" ref="jndiJmsSupport" />
      </jms:connector>
      <spring:beans>
        <spring:bean name="jndiJmsSupport" class="org.mule.transport.jms.Jms102bSupport">
          <spring:constructor-arg ref="jmsConnector" />
        </spring:bean>
        <spring:bean name="context" class="javax.naming.InitialContext">
          <spring:constructor-arg type="java.util.Hashtable">
            <spring:props>
              <spring:prop key="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</spring:prop>
              <spring:prop key="java.naming.provider.url">file:///F:/Info/MessageQueue/mq</spring:prop>
            </spring:props>
          </spring:constructor-arg>
        </spring:bean>
        <spring:bean name="openMQ" class="org.springframework.jndi.JndiObjectFactoryBean">
          <spring:property name="jndiName" value="MyQueueConnectionFactory" />
          <spring:property name="jndiEnvironment">
            <spring:props>
              <spring:prop key="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</spring:prop>
              <spring:prop key="specifications">1.1</spring:prop>
              <spring:prop key="java.naming.provider.url">file:///C:/Temp</spring:prop>
            </spring:props>
          </spring:property>
        </spring:bean>
      </spring:beans>
    </mule>
    I get an exception:
    A Fatal error has occurred while the server was running:
    * Cannot convert value of type [javax.naming.Reference] to required type
    * [javax.jms.ConnectionFactory] for property 'connectionFactory': no matching
    * editors or conversion strategy found (java.lang.IllegalArgumentException)
    or in more details:
    ERROR 2008-07-27 18:30:52,953 [main] org.mule.config.builders.AbstractConfigurationBuilder: Configuration with "org.mule.config.spring.SpringXmlConfigurationBuilder" failed.
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'JMS': Cannot create inner bean '(inner bean)' of type [org.mule.routing.inbound.DefaultInboundRouterCollection] while setting bean property 'inboundRouter'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': Cannot create inner bean '(inner bean)' of type [org.mule.config.spring.factories.InboundEndpointFactoryBean] while setting bean property 'endpoints' with key [0]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jmsConnector': Initialization of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type [javax.naming.Reference] to required type [javax.jms.ConnectionFactory] for property 'connectionFactory'; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [javax.naming.Reference] to required type [javax.jms.ConnectionFactory] for property 'connectionFactory': no matching editors or conversion strategy found
    I have no idea why I get this exception :(
    There are a few things I don't understand in your config file though:
    -How come we don't have to specify the address where the broker is running? (like localhost:7676)
    -In the spring bean called "context", why do you specify the Open MQ installation directory? I mean, why do you have to specify it?
    -I don't understand everything in the file, so I might say something stupid now... but ... you have a spring bean called "context", and I don't see it used anywhere. Or do you have to have a spring bean called context? Or is this necessary at all?
    About the Open MQ admin tool. Am I correct in assuming, that all you have to do, is create a Broker (and add destination to it), and create an Object Store and add connection factory to it (and optionally physical destination too ... although this is not necessary because if you create a destination and you don't make physical destination for it, then by default it will be created for it). This is what I did with Open MQ tool.
    I was wondering if you could please give me your email address, would be easier to communicate over that, this forum seems a bit deserted anyway.
    Thanks in advance!
    Sziladi Zoltan

  • "file not found" error in VI Logger when running an imported VI Logger task

    I'm having trouble getting all of my imported tasks to run in VI Logger.
    I'm using VI Logger 2.0, with an SCXI-1100.  The data acquisition tasks I import run fine.  One of the VI Logger tasks I've imported runs OK, but the other one gives an error:
    "Engine Error!
    Error Code = 7
    LabView: File not found.  The file may have been moved or deleted, or the file path might be incorrectly formatted for the OS."
    If I create a new VI Logger task, the new task works OK.  And my 1st VI Logger task to be imported works OK.  So I know my setup mostly works.  But our company has never been able to figure out importing tasks very well, so I'm looking for help.
    Question 1: What file is LabView looking for?  I've checked my Export path and database path, and they are identical in both the working task and the non-working task.
    Question 2: Where else might I look for the difference between the 2 VI Logger tasks?  Neither task uses any weird characters in the name, both have analog input channels and calculated channels.  And I've mixed and matched the data acquisition tasks with no change (the good VI Logger task will run with either data task, the bad VI Logger task won't run with either data task), so I'm pretty sure the problem is in the VI Logger task, and not the data task.  I've also tried importing the 2nd task on its own, as a separate import function, but that gave the same error.
    Thanks for the help in advance,
    Jake

    Hi Spex,
    I can import the VI Logger task OK (from another PC running VI Logger 2.0), but I can't run the task. I've attached the file. The "1 channel" task runs OK. The "x-probe" task starts, takes 1 or 2 data points, then hangs for a few seconds, then pops up the "file not found" error. The data acquisition tasks run OK, so I don't think they're the problem.
    The file I've attached includes 2 good data tasks ("1 channel" and "xprobe"), 1 good VI Logger task ("1 channel"), and my non-working VI Logger task ("x-probe", which works fine on the PC it was exported from).
    I have also tried exporting the non-working VI Logger task on its own, and it still imports OK, but still doesn't run, and gives the same error.
    Thanks,
    Jake
    Attachments:
    configData1.zip 353 KB

  • Any tips on managing reminders = tasks in ical?

    I have Palm on my Dell. I'm moving to a MacBook Pro. I got my contacts into Address Book using vCard. I don't have tons of calendar entries, so I used the vCal option to move them one at a time to iCal. Now I'm down to moving Palm tasks to my Mac. In terms of hardware I only have the Dell and the Mac. In terms of S/W I don't see much on the web that will really save me a lot in moving my 100-125 tasks to the Mac. So it will probably be re-entering them one at a time.
    Now the question - iCal seems to have only one place holder for what I call Tasks. That seems to be Reminders. Does anyone have any feedback, tips, etc. on doing task/reminder management using iCal?
    Thanks in advance.

    I haven't used iCal on Lion (10.7) yet, but in earlier versions Tasks were assigned to calendars and colour-coded accordingly. You could have special calendars for, e.g., Urgent tasks, Work tasks, etc. I suspect that remains the case in Lion's iCal.

  • A simple team task with a single team resource; yet in reality many actual team contributions - can it be done?

    What people would like to be able to do is to plan for a task in MSP to be done by a team. We plan how much work is involved and the duration.
    Any number of team members can be involved in the task. As things get delayed and put on hold, we may have entire teams jumping in to complete tasks so utilisation is maximised. PMs do not want to open a plan every half hour to add extra resources to plans.
    Can this be done in MSP in an elegant way?
    What I am currently doing is asking PMs to assign all 20 generic team resources to a task, so any of the 20 team members can assign themselves to the task in the Timesheet. (In fact they get templates with this setup: there is a "join project team" task with every generic team resource added so they can swap themselves into the plan.)
    Obviously the PMs are aghast that Project Server will not let you assign a single team resource and then allow anyone in the team to join in. Am I correct in thinking Project Server will not allow this?
    (Project server 2010 - on premise)
    Trying to stop the swap to Salesforce - but the errors keep a coming

    Yeah, the capacity planning is mathematically fine if I know the cumulative total of work across a duration - Project just flattens it across however many resources are assigned.
    To maximise utilisation, resources can and will be shunted onto projects; in a week they can be put onto 20 projects to fill out the utilisation if a big project suffers a major delay. The PMs cannot spend the time rescheduling 10 plans. The resources want to easily record timesheeted data against the projects they worked on in the past days.
    Whilst "add to task" works and they all know how to use it, it only lists the projects they are a Published resource on. Even being reassigned onto a team task will not help. I.e. if I set up tasks on each and every template with the 20 spare generic team resources on, and a team resource grabs that 1 task to be assigned, the PM must
    1. accept the reassignment;
    2. open and publish the plan; only then can that resource add themselves to a task in the plan - PMs can't spend time doing this 20 times per project!
    I don't really want every resource seeing every project in the "add to task" list - but it looks like this is what I will have to do, as the team tasks seem broken in many, many places.
    However we already do it for support projects; everyone (who may contribute at any point to a support item) is assigned to an initial task - this makes the project list unwieldy in Add task or New task and adds clutter for the user.
    Trying to stop the swap to Salesforce - but the errors keep a coming

  • [Project Pro 2013] User Custom Field w/ Formula does not calculate on Group Tasks

    This issue happens only in Project 2013. When I make a formula (using Duration, Cost, Number, etc.) on a custom field and set the "Use formula" option in "Calculation for task and group summary rows", it works fine without groups, on summary tasks. But when I group the tasks, the group task values disappear.
    When I open the same file on Project 2010, it works correctly.
    Thanks in advance.

    Thank you for the effort. This problem is a bit tricky and hard to explain without images or files.
    I'll try to explain using your example:
    - I did not code my summary tasks with the value in the "Text1" field, and
    - I do not want to maintain the hierarchy on groups like you show.
    1. In "No Group" mode, the summary tasks work fine. All summary tasks show the values calculated by the formula. Summary tasks are not the problem.
    2. However, when you group the tasks by Text1, the group tasks (the colored ones that appear in group mode: Group: a, b and c) do not show the values calculated by the formula, even if you set "use formula" on summary and group tasks.
    3. When you open the same file in Project 2010 (default configs), the group task values appear (values calculated for the groups of the detail tasks: a, b, c). If your Duration1 field stays the same or disappears, change some values and refresh the field to check that the formula is working.
    4. If you see that everything is working correctly so far, try this on your file:
    4.1. Keep your tasks grouped by Text1.
    4.2. Add three fields: "Duration2", "Duration3" and "Duration4".
    4.3. On every detail task, enter 2 days for Duration2 and 3 days for Duration3.
    4.4. Set Duration2 and Duration3 to show the "Average" value on the summary and group tasks. Set Duration4 to "use formula".
    4.5. Create in the Duration4 field a simple formula, like [Duration2]+[Duration3] or [Duration2]/[Duration3].
    4.6. See that the formula works fine on the detail tasks (Duration4 = 5 days).
    4.7. Now the problem: the group tasks show nothing. Duration4 should be calculated (average Duration2: 2 days + average Duration3: 3 days = 5 days).
    4.8. Save the file and open it in Project 2010. If Duration4 on the summary tasks still shows nothing, refresh the field by right-clicking the field, going to "Custom Fields", and pressing OK.
    See that Duration4 is calculated correctly in Project 2010, but not in the 2013 version. That is the problem.
    Thanks again. I hope it helps.
