Having an "execution global"

FileGlobals are fine, but (as I read in a previous post) after each execution that changes them, the file itself is modified, which is undesirable for me.
So there is a run-time copy and a static copy. The run-time copy is generated from the static copy at the beginning of an execution, and at the end, the (possibly modified) run-time values are written back to the static copy, which resides on the hard drive. Therefore the file is overwritten and its date and time properties (in Windows) are modified.
Can't we still have some variables that:
are initialized to the values assigned in the editor at the beginning of an execution,
are available throughout the execution,
are simply discarded at the end of the execution (after the report is generated for example), and
are not carried over to the next execution (each run cleans up its own mess)?
Actually, the only thing we need to do to have a clean start is to initialize them at the beginning, and that is not a big problem; I can live with that.
But my major concern is the file being modified.
S. Eren BALCI
www.aselsan.com.tr

Hi,
[So there is a run-time copy and a static copy. The run-time copy is generated from the static copy at the beginning of an execution, and at the end, the (possibly modified) run-time values are written back to the static copy, which resides on the hard drive. Therefore the file is overwritten and its date and time properties (in Windows) are modified.]
There is a static copy and a run-time copy. Correct so far. Under normal operation the run-time copy does not overwrite the static copy, so when the execution ends, the run-time copy is simply discarded. The static copy remains untouched.
What you may have been reading in other posts is where others actually want to change the static copy. That you have to do by modifying the DefaultFileGlobals (not the exact wording of this property) under program control, which you will not be doing.
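For example (a minimal sketch, assuming the usual TestStand callbacks): a Statement step placed in the SequenceFileLoad callback with an expression such as
    FileGlobals.RunCount = 0
re-initializes the variable every time the file is loaded, and nothing is ever written back to the sequence file on disk.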
I hope this helps
Regards
Ray Farmer

Similar Messages

  • Having trouble with global extension installation on Firefox 3.6.3

I am having trouble installing extensions globally on Firefox 3.6.3. As you know, the -install-global-extension switch has been removed and you cannot point to the XPI in the registry any more. What I have done to get the extension ID and confirm it has unpacked successfully is to install the extensions as myself into my own profile, which works fine, and the extensions launch successfully (PROVING THEY ARE COMPATIBLE); then I close Firefox and cut the folder from %appdata%\Mozilla\Firefox\Profiles\4sd8sico.default\extensions and paste it into %programfiles%\Mozilla Firefox\extensions. However, when I do this the extension fails to launch, and when I go into the Add-ons manager it always says "Not compatible with Firefox 3.6.3".
    Any ideas please? I have tried this with both Keyscrambler and Public Fox and they both do the same thing.
    Many thanks
    == This happened ==
    Every time Firefox opened

    You'll need to unpack the xpi package and modify the install.rdf file in it. Look for em:maxVersion and change the version number to whatever you have installed to make it work.
    I was having the same issue with the ireader addon. It would install fine in the local profile but kept getting flagged incompatible whenever I moved it to the global extensions directory.
    Unpacking the package and changing the em:maxVersion attribute in install.rdf to 9.0 fixed the issue for me.
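    For reference, a minimal sketch of the relevant part of install.rdf (version values hypothetical); the Description shown targets Firefox by its application ID:
        <Description about="urn:mozilla:install-manifest">
            <em:targetApplication>
                <Description>
                    <!-- Firefox's application ID -->
                    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
                    <em:minVersion>3.0</em:minVersion>
                    <!-- raise this so 3.6.3 falls within range -->
                    <em:maxVersion>3.6.*</em:maxVersion>
                </Description>
            </em:targetApplication>
        </Description>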

  • Accessing Global variable which is in another script file

    Hi,
    I have two scripts, File1.scpt and File2.scpt, on the desktop. I need a common global variable 'GG'.
    File1.scpt has only the following code
    global GG
    set GG to "I am global"
    File2.scpt has
    set file1 to (load script file "Macintosh HD:Users:mowri:Desktop:File1.scpt")
    display dialog m -- not working -- Error message: "The variable m is not defined"
    display dialog m of file1 -- not working -- Error message: "Can't make m into type string"
    It would be a great help if you could clarify my mistakes here.
    Regards
    Mowri

    Another solution, shorter than the previous one:
    File1.scpt
    set GG to "I am global"
    File2.scpt
    set file1 to (load script file "Macintosh HD:Users:mowri:Desktop:File1.scpt")
    run file1
    display dialog GG
    Ref.: [Handlers in Script Applications - run Handlers|http://developer.apple.com/mac/library/documentation/AppleScript/Conceptual/AppleScriptLangGuide/conceptual/ASLR_about_handlers.html#//apple_ref/doc/uid/TP40000983-CH206-SW14]
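    A related sketch (an addition, not from the thread): declaring GG as a property in File1.scpt makes it directly accessible from the loaded script object, without relying on a shared global namespace:
        -- File1.scpt
        property GG : "I am global"
        -- File2.scpt
        set file1 to (load script file "Macintosh HD:Users:mowri:Desktop:File1.scpt")
        display dialog (GG of file1)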

  • How to get the all instances in which I acted on any of the activity?

    Hi all,
    Consider that in a Process there are three Activities named
    1. Create Proposal Role : Initiator
    2. Routed to Primary Owner Approval Role : Primary Owner
    3. Routed to Manager Approval Role : Manager
    Suppose I'm a participant having access to the PrimaryOwner role. Some persons X, Y, Z created 3 instances, which are now in the "Routed to Manager Approval" activity or completed. I acted on only 2 instances, and someone else acted on the other. How can I get the 2 instances in which I acted? What is the way to achieve this in PAPI 6.0 or Studio 6.0?
    Thanks in advance,
    Sana

    I'm doing something similar, using the following code in a screenflow called from a global activity:
    ps = new ProcessService();
    ps.connectTo(url :Fuego.Server.directoryURL, user : "username", password : "password");
    InstanceFilter filter = ps.getFilterFor(viewId : "myHidView");
    filter.searchScope = SearchScope(participantScope : ParticipantScope.ALL, statusScope : StatusScope.ONLY_INPROCESS);
    // have tried the filter using both methods below
    //filter.setParametricValueTo(variable : "myvar", value : myDesiredVarValue);
    filter.addAttributeTo(variable : "myvar", comparator : Comparison.IS, value : myDesiredVarValue);
    instances = ps.getInstancesByFilter(filter : filter);
    I get the following error in the engine log:
    Unable to receive the message because of a serialization error. Caused by: fuegoblock.papi.Instance fuego.rmi.spi.SerializationException: Unable to receive the message because of a serialization error. at fuego.rmi.spi.BaseConnection.send(BaseConnection.java:101) at fuego.rmi.ServerCluster.send(ServerCluster.java:226) at fuego.rmi.ServerCluster.sendResult(ServerCluster.java:495) at fuego.rmi.ServerCluster.access$400(ServerCluster.java:50) at fuego.rmi.ServerCluster$1.put(ServerCluster.java:590) at fuego.component.ExecutionThread.sendResult(ExecutionThread.java:523) at fuego.component.ExecutionThreadContext.doClientInvoke(ExecutionThreadContext.java:668) at fuego.component.ClientRemoteComponent.doInvocation(ClientRemoteComponent.java:303) at fuego.component.ClientRemoteComponent.invoke(ClientRemoteComponent.java:160) at fuego.component.ExecutionRelayedThrowable.execute(ExecutionRelayedThrowable.java:94) at fuego.server.execution.TaskExecution.handleExecutionRelayedThrowable(TaskExecution.java:802) at fuego.server.execution.TaskExecution.handleComponentExecutionException(TaskExecution.java:753) at fuego.server.execution.TaskExecution.executeCIL(TaskExecution.java:493) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:677) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:638) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:451) at fuego.server.execution.GlobalTaskExecution.executeGlobalCIL(GlobalTaskExecution.java:164) at fuego.server.execution.Global.continueCil(Global.java:68) at fuego.server.AbstractProcessBean$39.execute(AbstractProcessBean.java:2515) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:291) at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:462) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:540) at fuego.transaction.TransactionAction.start(TransactionAction.java:213) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:118) at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66) at fuego.server.AbstractProcessBean.runGlobalActivity(AbstractProcessBean.java:2508) at sun.reflect.GeneratedMethodAccessor114.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1477) at fuego.lang.JavaObject.invoke(JavaObject.java:185) at fuego.component.Message.process(Message.java:585) at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:759) at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:734) at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:140) at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:132) at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:257) at fuego.component.ExecutionThread.work(ExecutionThread.java:818) at fuego.component.ExecutionThread.run(ExecutionThread.java:397) Caused by: java.io.NotSerializableException: fuegoblock.papi.Instance at java.io.ObjectOutputStream.writeObject0(Unknown Source) at java.io.ObjectOutputStream.writeObject(Unknown Source) at java.util.ArrayList.writeObject(Unknown Source) at sun.reflect.GeneratedMethodAccessor87.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) 
at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source) at java.io.ObjectOutputStream.writeSerialData(Unknown Source) at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source) at java.io.ObjectOutputStream.writeObject0(Unknown Source) at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source) at java.io.ObjectOutputStream.writeSerialData(Unknown Source) at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source) at java.io.ObjectOutputStream.writeObject0(Unknown Source) at java.io.ObjectOutputStream.writeObject(Unknown Source) at fuego.component.Message.writeObject(Message.java:665) at sun.reflect.GeneratedMethodAccessor93.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source) at java.io.ObjectOutputStream.writeSerialData(Unknown Source) at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source) at java.io.ObjectOutputStream.writeObject0(Unknown Source) at java.io.ObjectOutputStream.writeObject(Unknown Source) at fuego.component.Batch.writeObject(Batch.java:151) at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source) at java.io.ObjectOutputStream.writeSerialData(Unknown Source) at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source) at java.io.ObjectOutputStream.writeObject0(Unknown Source) at java.io.ObjectOutputStream.writeObject(Unknown Source) at fuego.rmi.Packet.write(Packet.java:251) at fuego.rmi.spi.BaseConnection.send(BaseConnection.java:98) ... 38 more
    Sorry for the huge error message.
    Is there a trick to getting the filtered set of instances from within the screenflow activity?
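    One hedged workaround sketch (PBL syntax assumed, not verified against PAPI 6.0): copy the primitive fields you need out of each Instance before the values leave the screenflow, since fuegoblock.papi.Instance itself is not serializable:
        // copy serializable fields out of the non-serializable Instance objects
        ids as String[]
        for each inst in instances do
            ids[] = inst.id
        end
        // pass "ids" onward instead of "instances"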

  • How to append data in xml

    I have the script below, which creates XML with data.
    ======================================================================
    $xmlPath = "D:\Users\admin\Desktop\Report.xml"
    $date = Get-Date -UFormat %m/%d/%Y
    if ( ! ( Test-Path $xmlPath ) ) {
        # Create the XML
        $global:xmlWriter = New-Object System.Xml.XmlTextWriter($xmlPath, $null)
        $global:xmlWriter.Formatting = "Indented"
        $global:xmlWriter.Indentation = 4
        $global:xmlWriter.WriteStartDocument()
        $global:xmlWriter.WriteStartElement("Execution")
        $global:xmlWriter.WriteStartElement("ExecutedOn")
        $global:xmlWriter.WriteAttributeString("Date", $date)
        $global:xmlWriter.WriteStartElement("Environments")
        foreach ($c in $cEnvironments) {
            $global:xmlWriter.WriteStartElement($c.Environment)
            $global:xmlWriter.WriteAttributeString("Red", $c.Red)
            $global:xmlWriter.WriteAttributeString("Green", $c.Green)
            $global:xmlWriter.WriteAttributeString("Blue", $c.Blue)
            $global:xmlWriter.WriteEndElement() # end of $c.Environment
        }
        $global:xmlWriter.WriteEndElement() # end Environments
        $global:xmlWriter.WriteStartElement("ClEnv")
        foreach ($c1 in $clEnv) {
            $global:xmlWriter.WriteStartElement($c1.Environment)
            $global:xmlWriter.WriteAttributeString("John", $c1.John)
            $global:xmlWriter.WriteAttributeString("Mike", $c1.Mike)
            $global:xmlWriter.WriteAttributeString("Alex", $c1.Alex)
            $global:xmlWriter.WriteEndElement() # end of $c1.Environment
        }
        $global:xmlWriter.WriteEndElement() # end ClEnv
        $global:xmlWriter.WriteEndElement() # end ExecutedOn
        $global:xmlWriter.WriteEndElement() # end Execution
        $global:xmlWriter.WriteEndDocument()
        $global:xmlWriter.Flush()
        $global:xmlWriter.Close()
    }
    else {
        # here I need to append the data in the existing XML with the same info as above but with different values
    }
    $cEnvironments and $clEnv are array variables holding the related data. Now I need to re-run the script the next day and check whether the file already exists; that is what I am doing with the if command. If it already exists, then in the "else" branch I need to append the data to the existing XML, with the same info as above but with different values, under the "Execution" section, like below:
    ======================================================================
    <?xml version="1.0"?>
    <?xml-stylesheet type='text/xsl' href='style.xsl'?>
    <Execution>
        <ExecutionStarted Date="2/19/2014">
         <Environments1>
            <Colors Red="21" Blue="14" Green="18" />
        </Environments1>
        <Environments2>
            <Names John="21" Mike="14" Alex="18" />
        </Environments2>
    </ExecutionStarted>
    <ExecutionStarted Date="2/20/2014">
         <Environments1>
            <Colors Red="2" Blue="56" Green="76" />
            <Colors Cyan="31" Brown="32" Black="54" />
        </Environments1>
        <Environments2>
            <Names John="45" Mike="63" Alex="97" />
        </Environments2>
    </ExecutionStarted>
    </Execution>
    Thanks.

    In my opinion, you are creating difficulties when you treat the problem as [xml] objects.
    An XML file is, in fact, a text file which can be interpreted in a specific way (the XML way).
    But for the current problem, you can view the XML file as a plain text file, with one important requirement: the closing main node (</Execution> in your example) must be on its own line, and must be the last line in the file.
    That said, you can accomplish your task with this pretty simple PS code:
    $xmlFile = '.\XML.xml'
    $oldXmlText = @(Get-Content $xmlFile)
    # keep everything up to the line before </Execution>, splice in the new
    # block, then put </Execution> back as the last line
    $newXmlText = $oldXmlText[0..($oldXmlText.Count - 2)] +
        $inclusion +
        $oldXmlText[-1]
    Just compare this simplicity with your original code's complexity.
    The XML.xml file content can be this:
    <?xml version="1.0"?>
    <?xml-stylesheet type='text/xsl' href='style.xsl'?>
    <Execution>
        <ExecutionStarted Date="1">
            <Environments1>
                <Colors Red="21" Yellow="14" Green="18" />
            </Environments1>
            <Environments2>
                <Names John="21" Mike="14" Alex="18" />
            </Environments2>
        </ExecutionStarted>
    </Execution>
    Notice the very last line.
    And the variable $inclusion can be this:
    $inclusion = @'
        <ExecutionStarted Date="2">
            <Environments3>
                <Names blue="21" black="14" cyan="18" />
            </Environments3>
        </ExecutionStarted>
    '@
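    One step the snippet above leaves implicit is writing the spliced text back to disk; a minimal sketch, assuming the variables above:
        # persist the spliced content back to the file
        Set-Content -Path $xmlFile -Value $newXmlText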

  • Understanding the AWR report

    Hello,
    Just to start off on the right path I would like you to know that I am a Java developer trying to understand the AWR report. To give a quick overview of my problem :
    I have built a load test framework using JMeter and am trying to send SOAP requests to my WebLogic server. Each of these requests gets converted into multiple Insert, Update, and Merge statements that get executed on the Oracle 10g production-grade DB server. When I run the AWR report, under "SQL ordered by Executions (Global)" I see statements that have run 2 billion times. The JDBC connection to the database is configured for a maximum of 40 connections, and I do not see all of them being used. The issue is that I am NOT generating that kind of load yet. I am creating around 15000 SOAP requests in an hour and I am expecting around 1 million records to hit the database. The test runs fine for a couple of hours and then the server starts failing because the database is not responding properly. When I run the statistics query on the views "gv$session s, gv$sqlarea t, gv$process p" to get the pending sessions in the database, I have seen anywhere between 30 and 62 pending sessions with an activity time of more than 300 minutes.
    I am sure I am not sending 2 billion requests from the load test env that I have developed, but the AWR report says so. I want to know if there is a possible reason for this behavior. The stuck threads start occurring on the WebLogic server about 30 minutes after I start the test. Below is the exception I got on WebLogic, just in case it helps:
    2014-10-06 19:26:04,960[[STUCK] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)']ERROR DAOUtil -- DAOUtil@SQLException > weblogic.jdbc.extensions.ConnectionDeadSQLException: weblogic.common.resourcepool.ResourceDeadException: Could not create pool connection. The DBMS driver exception was: Closed Connection
        at weblogic.jdbc.common.internal.JDBCUtil.wrapAndThrowResourceException(JDBCUtil.java:249)
        at weblogic.jdbc.pool.Driver.connect(Driver.java:160)
        at weblogic.jdbc.jts.Driver.getNonTxConnection(Driver.java:642)
        at weblogic.jdbc.jts.Driver.connect(Driver.java:124)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:338)
        at com.bci.rms.ea.common.eautil.dao.DAOUtil.getConnectionFromDataSource(DAOUtil.java:222)
    Looking forward to replies/questions...
    Thanks in Advance,
    Sameer.

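    For what it's worth, a hedged sketch of the kind of pending-session query described above (threshold and columns assumed, standard gv$ views):
        -- sessions active for a long time, with their current SQL
        SELECT s.inst_id, s.sid, s.status,
               ROUND(s.last_call_et / 60) AS minutes_active,
               t.sql_text
        FROM   gv$session s
               JOIN gv$sqlarea t
                 ON t.inst_id = s.inst_id AND t.sql_id = s.sql_id
        WHERE  s.status = 'ACTIVE'
          AND  s.last_call_et > 300 * 60;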

  • Issue with Deploying and calling a BPEL process on ALBPM enterprise server

    Hi,
    I am trying to put in place a POC using Aqualogic BEA products (using ALBPM 5.7, Enterprise Server 5.7, ALSB 2.6, Weblogic App server 9.2). My goal is to put in place a simple BPEL process (using ALBPM) which would call a webservice exposed through ALSB. This BPEL process is initiated by a wrapper BPMN process calling the BPEL process through fuego code.
    Though we are able to do the above in standalone ALBPM Studio, when we try to deploy the exported BPM project on the enterprise server and access it through the hiper workspace portal, we get the following error in the BPM Process Administrator log of the engine.
    A component failed while executing activity '/Process#Default-1.0/Global' (BP-method Global). Details: The task could not be successfully executed. Reason: 'fuego.connector.ConnectorException: The configuration name [ProcessService] and type [Web Service] is not defined. Detail:The connector must be configured in the appropiate context. '. Caused by: The configuration name [ProcessService] and type [Web Service] is not defined. Detail:The connector must be configured in the appropiate context. fuego.lang.ComponentExecutionException: The task could not be successfully executed. Reason: 'fuego.connector.ConnectorException: The configuration name [ProcessService] and type [Web Service] is not defined. Detail:The connector must be configured in the appropiate context. '. at fuego.server.execution.EngineExecutionContext.invokeMethodAsCil(EngineExecutionContext.java:916) at fuego.server.execution.EngineExecutionContext.runCil(EngineExecutionContext.java:1068) at fuego.server.execution.TaskExecution.invoke(TaskExecution.java:389) at fuego.server.execution.GlobalTaskExecution.invoke(GlobalTaskExecution.java:106) at fuego.server.execution.TaskExecution.executeCIL(TaskExecution.java:481) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:655) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:616) at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:442) at fuego.server.execution.GlobalTaskExecution.executeGlobalCIL(GlobalTaskExecution.java:164) at fuego.server.execution.GlobalTaskExecution.executeGlobalCIL(GlobalTaskExecution.java:142) at fuego.server.execution.Global.execute(Global.java:81) at fuego.server.AbstractProcessBean$38.execute(AbstractProcessBean.java:2496) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:290) at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:462) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:540) at fuego.transaction.TransactionAction.start(TransactionAction.java:213) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:117) at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66) at fuego.server.AbstractProcessBean.runGlobalActivity(AbstractProcessBean.java:2491) at fuego.ejbengine.EJBProcessControlAdapter.runGlobalActivity(EJBProcessControlAdapter.java:386) at fuego.ejbengine.EJBProcessControlAdapter_hu750h_EOImpl.runGlobalActivity(EJBProcessControlAdapter_hu750h_EOImpl.java:2877) at fuego.ejbengine.EJBProcessControlAdapter_hu750h_EOImpl_WLSkel.invoke(Unknown Source) at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:174) at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:335) at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:252) at fuego.ejbengine.EJBProcessControlAdapter_hu750h_EOImpl_921_WLStub.runGlobalActivity(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at fuego.papi.impl.AbstractProcessControlHandler.invokeInternal(AbstractProcessControlHandler.java:48) at fuego.papi.impl.j2ee.EJBProcessControlHandler.doInvoke(EJBProcessControlHandler.java:111) at 
fuego.papi.impl.j2ee.EJBProcessControlHandler.invoke(EJBProcessControlHandler.java:66) at $Proxy77.runGlobalActivity(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1478) at fuego.lang.JavaObject.invoke(JavaObject.java:185) at fuego.papi.impl.j2ee.EJBExecution.next(EJBExecution.java:200) at fuego.portal.wapi.InteractiveExecution.process(InteractiveExecution.java:157) at fuego.portal.wapi.WebInteractiveExecution.process(WebInteractiveExecution.java:54) at fuego.portal.wapi.InteractiveExecution.process(InteractiveExecution.java:200) at fuego.portal.servlet.ExecutionDispatcher.runGlobalActivity(ExecutionDispatcher.java:659) at fuego.portal.servlet.ExecutionDispatcher.processRequest(ExecutionDispatcher.java:144) at fuego.portal.servlet.ExecutionDispatcher.doPost(ExecutionDispatcher.java:105) at javax.servlet.http.HttpServlet.service(HttpServlet.java:763) at fuego.portal.servlet.AuthenticatedWamServlet.service(AuthenticatedWamServlet.java:1049) at fuego.portal.servlet.SingleThreadPerSession.service(SingleThreadPerSession.java:73) at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:223) at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42) at fuego.servlet.multipart.BaseMultipartFilter.doFilter(BaseMultipartFilter.java:57) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3243) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2003) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:1909) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1359) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209) at weblogic.work.ExecuteThread.run(ExecuteThread.java:181) Caused by: fuego.connector.ConnectorException: The configuration name [ProcessService] and type [Web Service] is not defined. Detail:The connector must be configured in the appropiate context. 
at fuego.connector.ConnectorException.connectorNotFound(ConnectorException.java:55) at fuego.connector.ConnectorService.getConnectorInterface(ConnectorService.java:586) at fuego.connector.ConnectorTransaction.getConnectorInterface(ConnectorTransaction.java:618) at fuego.connector.ConnectorTransaction.getResource(ConnectorTransaction.java:254) at fuego.soaptype.WSConfiguration.getInstance(WSConfiguration.java:55) at fuego.soaptype.Endpoint.create(Endpoint.java:42) at fuego.soaptype.WebServiceInstantiator.instantiate(WebServiceInstantiator.java:58) at fuego.component.Component.instantiateDynamic(Component.java:123) at CapGemini.Process.Default_1_0.Instance.CIL_callBPEL(Instance.java:241) at CapGemini.Process.Default_1_0.Instance.CIL_callBPEL(Instance.java:307) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at fuego.server.execution.EngineExecutionContext.invokeMethodAsCil(EngineExecutionContext.java:907) ... 64 more
    Two things here that might help to understand the problem better:
    1) As I understand it, the error is due to some issue while calling the BPEL process from the Fuego code.
    The Fuego code which calls the BPEL from the BPMN process is as follows:
    helloResponse as String = "someoutput"
    helloRequest as String = "someinput"
    sessionid as String
    // Starting a Session.
    // In case you are testing this in the Studio,
    // the password must be the same as the user
    startSession BPELWSDL.ProcessServiceListenerWSDL.ProcessService
    using user = "test",
    password = "test"
    returning sessionid = result
    // createTest is the name of the operation
    // in the exposed process.
    // In this case the process "ExposedProcess"
    // has a web service operation
    // called "createTest" that is a "Process Creation" type
    // and uses the Begin activity's argument set
    callHello BPELWSDL.ProcessServiceListenerWSDL.ProcessService
    using sessionId = sessionid,
    arg1 = helloRequest
    returning helloResponse = result
    // Closing the session
    discardSession BPELWSDL.ProcessServiceListenerWSDL.ProcessService
    using sessionId = sessionid
    display "The BPEL's response : " + helloResponse
    2) Further, I had catalogued the BPEL's WSDL at the location http://localhost:9000/fuegoServices/ws/ProcessServiceListener?WSDL while running it in Studio, but when deploying on the enterprise server I pointed this to http://localhost:7001/fuegoServices/ws/ProcessServiceListener?WSDL. Is this correct?
    Any thought on this issue would be appreciated.
    Regards
    Deepak

    Hi Deepak,
    We are also facing a similar problem while accessing an external web service from a BPM process.
    Were you able to resolve this issue?
    If so, could you please let us know the procedure you followed to resolve it?
    Thanks in Advance,
    Krishnaveni.

  • Clusters as data structures

    I am looking for the best and simplest way to create and manage data structures in Labview.
    Most often I use clusters as data structures, however the thing I don't like about this approach is that when I pass the cluster to a subroutine, the subroutine needs a local copy of the cluster.
    If I change the cluster later (say I add a new data member), then I need to go through all the subroutines with local copies of the cluster and make the edit for the new member (delete/save/relink to sub-VI, etc.).
    On a few occasions in the past, I've tried NI GOOP, but I find the extra overhead associated with this approach cumbersome, I don't want to have to write 'get' and 'set' methods for every integer and string, I like being able to access the cluster/object data via the "unbundle by name" feature.
    Is there a simple or clever way of having a single global reference to a data object (say a cluster) that is shared by a group of subroutines and which can then be used as a template for an input or output parameter? I might guess the answer is no, because LabVIEW is interpreted and so the data object has to be passed as a handle, which I guess is how GOOP works; so I have the choice of putting in the extra energy up front (using GOOP) or later (using clusters, if I have to edit the data structure). Would it be advisable to just use a data cluster as a global variable?
    I'm curious how other programmers handle this. Is GOOP pretty widely used? Is it the best approach for creating maintainable LV software?
    Alex

    Alex,
    Encapsulation of data is critical to maintaining a large program. You need global, but restricted, access to your data structures. You need a method that guarantees serial, atomic access so that your exposure to race conditions is minimized. Since LabVIEW is inherently multi-threaded, it is very easy to shoot yourself in the foot. I can feel your pain when you mention writing all those get and set VIs. However, I can tell you that it is far less painful than trying to debug a race condition. Making a LabVIEW object also forces you to think through your program structure ahead of time - not something we LabVIEW programmers are accustomed to doing, but very necessary for large program success. I have used three methods of data encapsulation.
    NI GOOP - You can get NI GOOP from the tutorial Graphical Object Oriented Programming (GOOP). It uses a code interface node to store the strict typedef data cluster. The wizard eases maintenance. Unfortunately, the code interface node forces you through the UI thread any time you access data, which dramatically slows performance (about an order of magnitude worse than the next couple of methods).
    Functional Globals - These are also called LV2 style globals or shift register globals. The zip file attached includes an NI-Week presentation on the basics of how to use this approach with an amusing example. The commercial Endevo GOOP toolkit now uses this method instead of the code interface node method.
    Single-Element Queues - The data is stored in a single element queue. You create the database by creating the queue and stuffing it with your data. A get function is implemented by popping the data from the queue, doing an unbundle by name, then pushing the data back into the queue. A set is done by popping the data from the queue, doing a bundle by name, then pushing the data back into the queue. You destroy the data by destroying the queue with a force destroy. By always pulling the element from the queue before doing any operation, you force any other caller to wait for the queue to have an element before executing. This serializes access to your database. I have just started using this approach and do not have a good example or lots of experience with it, but can post more info if you need it. Let me know.
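    LabVIEW itself is graphical, but the single-element-queue discipline is easy to sketch in text form; the following Python sketch (an illustration with hypothetical names, not LabVIEW itself) shows the acquire-modify-release pattern that serializes access:
        import queue

        # the one-slot queue holds the shared "cluster"; every get/set must
        # dequeue the data first, so access is serialized automatically
        db = queue.Queue(maxsize=1)
        db.put({"count": 0, "name": "init"})   # create the "database"

        def set_field(key, value):
            data = db.get()        # blocks until no one else holds the data
            data[key] = value      # "bundle by name"
            db.put(data)           # return the element to the queue

        def get_field(key):
            data = db.get()
            value = data[key]      # "unbundle by name"
            db.put(data)
            return value

        set_field("count", 42)
        print(get_field("count"))  # prints 42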
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • What is the Smartnet service for the 2504 license upgrade LIC-CT2504-UPG ?

    Hi guys,
    Can someone explain to me what this service is for? I have the LIC-CT2504-UPG with the LIC-CT2504-5A (5 AP license adder), and I don't understand what the SMARTnet services for these licenses are for.
    Are they needed if I already have the SMARTnet service for the hardware controller?
    Do these services provide something additional on top of the hardware service ?
    Thank you.

    Hi,
    Smartnet Service Contract:
    Reduce downtime with fast, expert technical support, flexible hardware coverage, and smart, proactive device diagnostics with SMARTnet Service. Your IT staff has anytime access to Cisco engineers in the Technical Assistance Center (TAC) and an extensive range of resources, tools and training.
    What You Get by having this contract :
    Global 24-hour access to experts in the Cisco Technical Assistance Center (TAC)
    Self-help support through online communities, resources, and tools
    Hardware replacement options, including 2-hour, 4-hour and next business day
    Software updates and downloads from the Cisco website for your controller.
    Hope you got the point :)
    Regards
    Don't forget to rate helpful posts

  • Alternate Boot Environment disaster

    Hi - hoping someone can help me with a small disaster I've just had in trying to patch one of my SPARC T4-1 servers running zones using the patch ABE method. The patching appeared to work perfectly well, I ran the following commands:
    sudo su -
    zlogin tdukihstestz01 shutdown -y -g0 -i 0
    zlogin tdukihstestz02 shutdown -y -g0 -i 0
    zlogin tdukbackupz01 shutdown -y -g0 -i 0
    lucreate -n CPU_2013-01
    mkdir /tdukwbadm01
    mount -F nfs tdukwbadm01:/export/jumpstart/Patches/Solaris10/10_Recommended_CPU_2013-01 /tdukwbadm01/
    cd /tdukwbadm01/
    ./installpatchset apply-prereq s10patchset
    nohup ./installpatchset -B CPU_2013-01 --s10patchset
    luactivate CPU_2013-01
    lustatus
    init 6
    However when the server came back up only 1 zone would start - tdukbackupz01.
    The other two zones were in the installed state although they are set to autoboot. The ONLY difference between the zones is that for the two that won't start I had added a "fs" by doing this:
    zonepath: /export/zones/tdukihstestz01
    fs:
    special: /export/zones/tdukihstestz01/logs
    So I actually made /logs a folder under the zonepath, and it appears that after patching the ABE this doesn't exist, so the zone won't start. In fact /export/zones/tdukihstestz01-CPU_2013-01/ is completely empty now. So I can only assume that having /logs inside the zone's file system caused this problem.
    So after a bit of manual intervention I have my zones running again - basically I edited the zones xml files and the index file in /etc/zones and removed the references to CPU_2013-01 which has done the trick.
    However my ZFS looks a bit of a mess. It now looks like this:
    root@tdukunxtest01:~ 503$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    archives 42.8G 504G 42.8G /archives
    rpool 126G 421G 106K /rpool
    rpool/ROOT 5.48G 421G 31K legacy
    rpool/ROOT/CPU_2013-01 5.38G 421G 3.60G /
    rpool/ROOT/CPU_2013-01@CPU_2013-01 592M - 3.60G -
    rpool/ROOT/CPU_2013-01/var 1.21G 421G 1.19G /var
    rpool/ROOT/CPU_2013-01/var@CPU_2013-01 14.4M - 659M -
    rpool/ROOT/Solaris10 96.9M 421G 3.60G /.alt.Solaris10
    rpool/ROOT/Solaris10/var 22.2M 421G 671M /.alt.Solaris10/var
    rpool/dump 32.0G 421G 32.0G -
    rpool/export 17.9G 421G 35K /export
    rpool/export/home 1.01G 31.0G 1.01G /export/home
    rpool/export/zones 16.9G 421G 35K /export/zones
    rpool/export/zones/tdukbackupz01 41.8M 421G 3.14G /export/zones/tdukbackupz01
    rpool/export/zones/tdukbackupz01-Solaris10 3.14G 96.9G 3.13G /export/zones/tdukbackupz01-Solaris10
    rpool/export/zones/tdukbackupz01-Solaris10@CPU_2013-01 1.80M - 3.13G -
    rpool/export/zones/tdukihstestz01 43.3M 421G 10.1G /export/zones/tdukihstestz01
    rpool/export/zones/tdukihstestz01-Solaris10 10.2G 21.8G 10.2G /export/zones/tdukihstestz01-Solaris10
    rpool/export/zones/tdukihstestz01-Solaris10@CPU_2013-01 2.28M - 10.2G -
    rpool/export/zones/tdukihstestz02 35.3M 421G 3.37G /export/zones/tdukihstestz02
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01 1.66M - 3.40G -
    rpool/logs 5.10G 26.9G 5.10G /logs
    rpool/swap 66.0G 423G 64.0G -
    Whereas previously it look more like this:
    NAME USED AVAIL REFER MOUNTPOINT
    archives 42.8G 504G 42.8G /archives
    rpool 126G 421G 106K /rpool
    rpool/ROOT 5.48G 421G 31K legacy
    rpool/dump 32.0G 421G 32.0G -
    rpool/export 17.9G 421G 35K /export
    rpool/export/home 1.01G 31.0G 1.01G /export/home
    rpool/export/zones 16.9G 421G 35K /export/zones
    rpool/export/zones/tdukbackupz01 41.8M 421G 3.14G /export/zones/tdukbackupz01
    rpool/export/zones/tdukihstestz01 43.3M 421G 10.1G /export/zones/tdukihstestz01
    rpool/export/zones/tdukihstestz02 35.3M 421G 3.37G /export/zones/tdukihstestz02
    rpool/logs 5.10G 26.9G 5.10G /logs
    rpool/swap 66.0G 423G 64.0G -
    Does anyone know how to fix my file system mess? And is having a non-global zone's /logs inside the actual zone's zonepath a bad idea? It would appear it is.
    Thanks - Julian.

    Ok, got a little further with this. I now think I can track the start of my problems to defining a filesystem within a non-global zone that was actually inside the zonepath itself; having looked at the Solaris zones documentation, there's nothing to stop you doing this, it's just a bad idea. So I've amended ALL my non-global zones to NOT do this anymore and checked them.
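    For reference, a hedged zonecfg sketch of the amended layout, with the fs source directory outside the zonepath (the dataset path is hypothetical):
        # zonecfg -z tdukihstestz01
        add fs
        set dir=/logs
        set special=/export/zonelogs/tdukihstestz01
        set type=lofs
        end
        commit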
    Taking a single non-global zone I can see that ZFS did the following when I ran the lucreate command:
    2013-02-17.07:39:58 zfs snapshot rpool/export/zones/tdukihstestz01@CPU_2013-01
    2013-02-17.07:39:58 zfs clone rpool/export/zones/tdukihstestz01@CPU_2013-01 rpool/export/zones/tdukihstestz01-CPU_2013-01
    2013-02-17.07:39:58 zfs set zoned=off rpool/export/zones/tdukihstestz01-CPU_2013-01
    So a snapshot/clone was taken. There is then a series of zfs canmount=on and zfs canmount=off commands seen against rpool/export/zones/tdukihstestz01-CPU_2013-01; I'm not entirely sure what these are doing - well, I know what the command does, just not why it's doing it.
    The patch process finished at 08:46 and I rebooted the server with an init 6 a little time after this. I then see a few more canmount commands and then:
    2013-02-17.08:49:22 zfs rename rpool/export/zones/tdukihstestz01 rpool/export/zones/tdukihstestz01-Solaris10
    And then a load more canmount commands against rpool/export/zones/tdukihstestz01-Solaris10 but also the following is shown:
    2013-02-17.08:54:31 zfs rename rpool/export/zones/tdukihstestz01-CPU_2013-01 rpool/export/zones/tdukihstestz01
    Now my memory is a little fuzzy about what happened next, but the failure of the non-global zone to boot was because <zonepath>/logs/ did not exist - and this takes me back to my point above about defining a file system within the <zonepath>: when I tried to start the zone tdukihstestz01 it complained that /logs did not exist. It did exist in the zone on the old boot environment but NOT the new one. And when I actually created these zones several months ago, I remember I had to manually create these BEFORE I ran the initial sudo zoneadm -z tdukihstestz01 boot command.
    So basically I'm 99.9% sure that I know what I did wrong to cause this for the non-global zones, and I can only assume this has had a knock-on effect on the root environment. To fix a non-global zone, I ran the following commands earlier today:
    zfs list |grep tdukihstestz02
    rpool/export/zones/tdukihstestz02 81.1M 421G 3.41G /export/zones/tdukihstestz02 <-- clone
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01 1.66M - 3.40G - <-- snapshot
    zlogin tdukihstestz02
    init 5
    zfs destroy -R rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01
    zfs list |grep tdukihstestz02
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    zfs rename rpool/export/zones/tdukihstestz02-Solaris10 rpool/export/zones/tdukihstestz02
    zfs set canmount=on rpool/export/zones/tdukihstestz02
    zfs mount rpool/export/zones/tdukihstestz02
    I also see that the 81.1M of space used in rpool/export/zones/tdukihstestz01 must refer to changes between the original file system and the clone ... I think. These will only have been log files, so I'm not too bothered ... again I think, well actually hope.
    So I'm sort of almost sorted; there is the small matter of the root file system, which tbh I won't be so gung-ho in my approach to fixing. But again, if anyone has any ideas on this I'd love to hear them.
    Thanks - Julian.

  • Standard message type limitation on company code

    Hi all,
    Currently I'm implementing ALE distribution with the standard message type
    FIDCC1. I'm filling IDocs in my source code and running the function master_idoc_distribute.
    Now I need to set up a distribution model, but when I do so, the transactions that create
    documents complain that for certain company codes there is no global company code,
    although those codes aren't used for the distribution. Is there a way/place where I
    can limit the use to certain company codes, without having to create global company codes for
    all the companies in the system?
    Or do I need something other than a distribution model to send the IDocs?
    grtz,
    Koen

    You will get a good answer if you post this in the ABAP forum.
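    For context, a hedged ABAP sketch of the MASTER_IDOC_DISTRIBUTE call described in the question (control-record values are assumptions for illustration):
        * fill the control record and pass the data records built earlier
        DATA: ls_control TYPE edidc,
              lt_comm    TYPE STANDARD TABLE OF edidc,
              lt_data    TYPE STANDARD TABLE OF edidd.

        ls_control-mestyp = 'FIDCC1'.      " message type
        ls_control-idoctp = 'FIDCCP01'.    " IDoc basic type (assumed)

        CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
          EXPORTING
            master_idoc_control        = ls_control
          TABLES
            communication_idoc_control = lt_comm
            master_idoc_data           = lt_data.

        COMMIT WORK.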

  • Saving Data Options in SmartView 11.1.2.2.300

    Hi
    I understand the Data Options in Smart View 11.1.2.2 are not global and need to be set for each sheet.
    My business users toggle between ad hoc analysis and forms in the same worksheet.
    The data options are creating more work as they change between tabs.
    For example, assume my user is in tab 1 using a Planning web form and he disables the suppress-missing option; then he goes to tab 2 and does the ad hoc analysis where these options are disabled.
    When he comes back to tab 1, the options he selected are gone. He has to disable them again.
    Is there any way I can save these data options at the server level?
    Thanks
    Praveen.

    How funny. Having the options global was a huge pain point when switching from the Add-in to Smart View; people hated the options being global. The options are sheet-level now, meaning they are kept on the sheet, so if you have suppress missing on one sheet and go to another, you can turn it off there and suppress missing will stay on the first sheet.
    If you want to create default formatting for any new sheet that is created, go into Options and set the options to what you want new sheets to start as. Then, instead of just hitting OK, drop down the little arrow next to the OK button and select Save as Default. Now any new sheet created will start with the options you selected.

  • 10g Query tuning

    Hi
    The query below executes in 10 seconds in my TEST env. It takes more than 5 minutes in my Production env. Both have the same execution plan.
    The db is 10g.
    How can I improve the performance in Prod for this? What are the points I have to look into in a new Oracle 10g database?
    SELECT IND, COUNT(*) FROM Table1 WHERE IND IN ('A', 'G', 'C', 'E', 'M')
    GROUP BY IND ORDER BY IND
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=23185 Card=5 Bytes=20)
    1 0 SORT (GROUP BY) (Cost=23185 Card=5 Bytes=20)
    2 1 INDEX (FAST FULL SCAN) OF 'tab_pk_indx' (INDEX (UNIQUE)) (Cost=22718 Card=3919918 Bytes=15679672)

    Hi Alish,
    Well, you have many options:
    http://www.dba-oracle.com/art_sql_tune.htm
    Optimize the server kernel - You must always tune your disk and network I/O subsystem (RAID, DASD bandwidth, network) to optimize the I/O time, network packet size and dispatching frequency.
    Adjusting your optimizer statistics - You must always collect and store optimizer statistics to allow the optimizer to learn more about the distribution of your data and take more intelligent execution plans (a statistics-gathering sketch follows this list). Also, histograms can hypercharge SQL in cases of determining optimal table join order, and when making access decisions on skewed WHERE clause predicates.
    Adjust optimizer parameters - Optimizer optimizer_mode, optimizer_index_caching, optimizer_index_cost_adj.
    Optimize your instance - Your choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, &c), can influence SQL performance.
    Tune your SQL Access workload with physical indexes and materialized views - Just as the 10g SQLAccess advisor recommends missing indexes and missing materialized views, you should always optimize your SQL workload with indexes, especially function-based indexes, a Godsend for SQL tuning.
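    As a hedged illustration of the statistics step above (schema name and options are assumptions):
        -- refresh optimizer statistics for the table in the problem query
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'MYSCHEMA',                  -- hypothetical owner
            tabname          => 'TABLE1',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            method_opt       => 'FOR ALL COLUMNS SIZE AUTO', -- histograms where useful
            cascade          => TRUE);                       -- include indexes
        END;
        /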
    Hope this helps. . . .
    Donald K. Burleson
    Oracle Press author

  • Finder Date Format

    How can I change the date format in Finder?
    Right now the dates appear like: November 15, 2005
    I want 11/15/05 or maybe 11/15/2005.
    Or is this one of those many areas where someone at Apple decided they should tell us how our Finder looks? Does anyone know if changing the country setting from US to some other country would do this?

    This gets stranger.
    I changed the long date format to: mm/dd/yyyy
    (for example: 11/17/2005) and most of the date formats did change, except that I have a comma after the date that I can't get rid of. And the "date created" column when I display my desktop is using the full date; all other dates are in the long format I specified.
    The dates feature is flawed as well as poorly designed. As a user, I should be able to specify how I want dates displayed without having to change global settings, which can screw up other things.

  • Please help! error -61399, how to create custom vi for every input in project explorer

    Please help! I have been trying all night but couldn't get through it.
    I am creating a custom VI to simulate cRIO inputs on the development computer (FPGA target >> Execute VI on >> Development Computer >> Custom VI). I then followed the tutorial on creating test benches:
    Tutorial: Creating Test Benches (FPGA Module)
    but when I run the FPGA VI I get error -61399, "input item/node is not supported" (input item/node is the input I added from the project explorer).
    The closest I got in my research on why this error occurs is that I have not connected all the inputs/outputs in the project explorer to the custom VI. I followed the tutorial exactly, step by step, but still couldn't get through it.
    Please help! Please help!
    In order to investigate further, I converted the custom VI to a state machine (please find the attachment). With highlight execution on, it clearly demonstrates that the cRIO inputs are not being selected, and as a result the default cases execute.
    Please find attached modified custom vi with state machine, fpga vi, and original custom vi.
    Best regards 
    Ta
    Attachments:
    custom vi.vi ‏32 KB
    inverter.vi ‏16 KB
    original custom vi.vi ‏22 KB

    Solution:
    You will see this error if no custom VI has been selected, or if the custom VI has not been configured for every I/O item that you are using in your FPGA code.
    I think that's exactly where I'm stuck!
    How do I configure it for the inputs? My attachment shows that I have done everything asked for in the tutorial!
    Thanks!

Maybe you are looking for

  • Problem with printing the smrtform

    Hi experts, I want to print a material number through a Smartform. I have an internal table containing material numbers and their quantities. I want to print a particular material number for its quantity number of times; it means if I have 3 rows in internal ta

  • How to re-register to new Apple ID

    I just registered my new iPad and inadvertently registered it under my wife's Apple ID. How can I designate a new Apple ID for it?

  • TS1398 my phone when connected to wifi wont download apps and internet but will when using 3G

    my iphone wont download my apps or internet when using wifi but will with 3G

  • Gl balance showing wrongly in Fs10n

    Dear all, We are using version 6.0. I got one issue from my client in which one GL account's open balance is showing wrongly. The GL account open balance shows 8500/- but when we double-click on it, it shows 7200/-. We did not carry forward the bala

  • Reports Integration Issue with 1.0.2.1

    Hi All. I was able to get reports integration working in 1.0.2.0, but I get the following error in 1.0.2.1. : An unexpected error occurred: User-Defined Exception (WWC-43000) An unexpected error occurred: ORA-06502: PL/SQL: numeric or value error: ch