Artificial Neural Network: NaN from a calculation

Hi everyone,
I'm programming a small pattern recognition neural network at the moment but have run into a small snag. I'm receiving a NaN result from a calculation and I don't know why.
    void calculateOutput() {
        double preSigmoidOutput = 0d;
        // Find the pre-sigmoid output
        Iterator connectionsIterator = connections.iterator();
        while (connectionsIterator.hasNext()) {
            Connection connection = (Connection) connectionsIterator.next();
            preSigmoidOutput += connection.weight * connection.entryNode.output;
        }
        // Perform squash
        output = 1 / (1 + Math.log(preSigmoidOutput));
    }

I think the problem is occurring at the "output =" line. I've already set a breakpoint and watched what was going on; basically preSigmoidOutput is usually a very small but long number (e.g. 0.05543464575674564), which then at the "output =" line produces a NaN result. Is that mathematical operation overflowing the double datatype? And if so, how would I go about stopping this?
Thanks,
Chris

tsith wrote:
sabre150 wrote:
BlueWrath wrote:
Turns out this line:

    double logVal = Math.log(-preSigmoidOutput);

causes a NaN result even though preSigmoidOutput is a valid double (in the case I'm examining now, preSigmoidOutput is equal to 0.01067537271542014). Anyone know why I'm getting a NaN result from this code?

I hope the value of 'preSigmoidOutput' is less than zero, since if it is zero the log() is -infinity, and if it is positive then you are taking the log of a negative number, which is illegal.

Flag on the play! OP loses 10 yards :-)

That will teach me to skip the rest of a post when I know what is wrong!

Similar Messages

  • Using threads in a neural network

    Hello,
    I've written a neural network and I'm wondering how I could use threads in its execution to 1) increase (more precisely, achieve!) learning speed and 2) print out the current error value for the network so that I can see how it is working without using the debugger. Basically, I've read the Concurrency tutorial but I'm having trouble getting my head around how I can apply it to my network (must be one of those days!)
    I'll give a brief explanation of how I've implemented the NN to see if anybody can shed any light on how I should proceed (i.e. whether it can be threaded, what parts to thread, etc.)
    The network consists of classes:
    Neuron - stores input values to be put into the network and performs the activation functions (just a mathematical operation)
    WeightMatrix - contains random weights in a 2-D array with methods for accessing and changing those weights based on output error
    Layer - simply an array that stores a collection of neurons
    InputPattern - stores the values in an array and the target value of a pattern (e.g. for logical AND I would store pattern[0] = 1; pattern[1] = 1; target = 1;)
    PatternSet - set of InputPatterns stored so that they can be input into the network for learning
    NeuralNetwork - the main class that I want to thread. This class contains multiple layers and multiple WeightMatrices (connecting the neurons in each layer). The learn algorithm then uses the methods of the previous classes to generate neuron inputs, outputs and error values for a given input. It uses a loop that iterates as follows:
        public float learn(PatternSet p) {
            InputPattern currentPattern = null;
            double netError = 0f;
            float previousError = 0f;
            float outputValue = 0f;
            float sum = 0f;
            float wcv = 0f;
            float output1 = 0f;
            float output2 = 0f;
            float currentError = 0f;
            float multiply = 0f;
            float outputError = 0f;
            float weight = 0f;
            int count;
            int setPosition = 0;
            int setSize = p.getSetSize();
            Neuron outputNeuron = layers[getNumberOfLayers() - 1].getNeuron(0);
            // execute learning loop and repeat until an acceptable error value is obtained
            do {
                // set input layer neuron values to pattern values
                currentPattern = p.getPattern(setPosition);
                for (int i = 0; i < currentPattern.getPatternSize(); i++) {
                    layers[0].getNeuron(i).setNeuronInput(currentPattern.getValue(i));
                }
                currentError = layers[getNumberOfLayers() - 1].getNeuron(0).getOutputError();
                for (int a = 0; a < layers[getNumberOfLayers() - 1].getNumberOfNeurons(); a++) {
                    // set target value of output neuron
                    layers[getNumberOfLayers() - 1].getNeuron(a).setTarget(currentPattern.getTarget());
                }
                // iterate between weight layers - there is a weight matrix between each layer of the NN
                for (int i = 0; i < getNumberOfLayers() - 1; i++) {
                    for (int j = 0; j < layers[i + 1].getNumberOfNeurons(); j++) {
                        sum = 0f;
                        count = 0;
                        for (int k = 0; k < layers[i].getNumberOfNeurons(); k++) {
                            weight = weights[i].getWeight(k, j);
                            outputValue = layers[i].getNeuron(count).getOutput();
                            multiply = layers[i].getNeuron(count).getOutput() * weights[i].getWeight(k, j);
                            // add values
                            sum = sum + multiply;
                            count++;
                            // check that all weighted neuron outputs have been completed
                            if (count == layers[i].getNumberOfNeurons()) {
                                // pass results to neuron
                                layers[i + 1].getNeuron(j).setNeuronInput(sum);
                                // activate neuron
                                layers[i + 1].getNeuron(j).neuronActivation();
                                // calculate output error of neuron for given input
                                layers[i + 1].getNeuron(j).calculateOutputError();
                                // check that the output layer has been reached and all neurons have been summed
                                if (i == getNumberOfLayers() - 2 && count == layers[i].getNumberOfNeurons()) {
                                    outputError = layers[i + 1].getNeuron(j).getOutputError();
                                    netError = layers[i + 1].getNeuron(j).getNetError();
                                    // back-propagate: adjust every weight matrix from output to input
                                    for (int a = getNumberOfLayers() - 1; a > 0; a--) {
                                        for (int b = 0; b < layers[a - 1].getNumberOfNeurons(); b++) {
                                            for (int c = 0; c < layers[a].getNumberOfNeurons(); c++) {
                                                output1 = layers[a - 1].getNeuron(b).getOutput();
                                                output2 = layers[a].getNeuron(c).getOutput();
                                                wcv = learningRate * outputError * output1 * output2 * (1 - output2);
                                                weights[a - 1].changeWeight(wcv, b, c);
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
                learningCycle++;
                if (setPosition < setSize - 1) {
                    setPosition++;
                } else {
                    setPosition = 0;
                }
            } while (netError > acceptableError && learningCycle < 1000000000);
            return currentError;
        }

    At the moment the net doesn't seem to learn to an acceptable degree of accuracy, so I was looking to use threads to monitor its error value change while I left it running, just to ensure that it is working as intended (which it seems to be, based on NetBeans debugger output). For the moment, all I'm aiming for is an output of the netError value of the NN at a particular time - would this be possible given my current implementation?
    Thanks for the help,
    Nick
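    On the narrow question of printing the current error value without the debugger: a minimal sketch, assuming the learning loop publishes its error through a volatile field (the class and field names here are illustrative):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        class MonitoredNetwork {
            // written once per learning cycle by learn(), read by the monitor;
            // reads and writes of a volatile double are atomic in Java
            volatile double netError = Double.MAX_VALUE;

            void startMonitor() {
                ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
                monitor.scheduleAtFixedRate(
                        () -> System.out.printf("current netError = %f%n", netError),
                        1, 1, TimeUnit.SECONDS);
            }
        }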

    For a huge NN on a really multi-core CPU (one reporting to the OS as multiple CPUs), you may benefit from having:
    - an example pump
    - separate threads for calculating forward and backward propagation, with in/out queues.
    The example pump pumps one forward example to each forward-processing thread and waits for them to complete. Then it reads their output and finds the errors to backpropagate, and pumps those errors to the back-propagation threads. They find the weight corrections but do not update the weight matrix; they only push the corrections to temporary output arrays. The example pump takes those corrections, combines them and updates the weights.
    Redo from start.
    The rule of thumb for high performance is: avoid locks. If you must access data that is changing, make a copy of it in a bulk operation, prepare the result in bulk, and read/write in bulk operations.
    In this example, the whole set of weight matrices and neuron states is that kind of data. Each thread should use a separate copy, and the teaching pump should combine them together. This means splitting the data into two blocks: non-changing data common to all threads (the geometry of the NN and the weights) and changing data separate for each thread (weight corrections, neuron inputs/outputs).
    Avoid "new", "clone", etc. in favor of System.arraycopy on existing data. (See the sketch below.)
    Regards,
    Tomasz Sztejka.
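    Below is a minimal editorial sketch of this example-pump pattern, with the network reduced to a flat weight vector and a placeholder gradient; the class, variable, and rate names are illustrative assumptions:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ExamplePump {
            static final int WORKERS = Runtime.getRuntime().availableProcessors();

            public static void main(String[] args) throws Exception {
                double[] weights = {0.1, 0.2, 0.3};                      // shared, read-only during a pass
                double[][] examples = {{1, 0, 1}, {0, 1, 1}, {1, 1, 0}}; // toy training set
                ExecutorService pool = Executors.newFixedThreadPool(WORKERS);

                for (int pass = 0; pass < 100; pass++) {
                    List<Callable<double[]>> tasks = new ArrayList<>();
                    for (double[] example : examples) {
                        tasks.add(() -> {
                            // bulk copy of the shared weights -- no locks needed
                            double[] local = new double[weights.length];
                            System.arraycopy(weights, 0, local, 0, local.length);
                            // private correction buffer; a real net would run
                            // forward and backward propagation here
                            double[] correction = new double[local.length];
                            for (int i = 0; i < local.length; i++) {
                                correction[i] = -0.01 * example[i] * local[i]; // placeholder gradient
                            }
                            return correction;
                        });
                    }
                    // the pump waits for all workers, then combines corrections in bulk
                    for (Future<double[]> f : pool.invokeAll(tasks)) {
                        double[] c = f.get();
                        for (int i = 0; i < weights.length; i++) weights[i] += c[i];
                    }
                }
                pool.shutdown();
            }
        }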

  • Open source code of Neural Network Model of the Cerebellum

    I would like to share with the community the code of a Neural Network Model of the Cerebellum (CNN). I have been using the CNN for studying the cerebellum and for adaptive robot control. The CNN was developed using Object Oriented Programming (OOP) and a customized Address Event Representation (AER) format. Using OOP and AER allows the construction and evaluation of CNNs with more than 80 k neurons and more than 400 k synaptic connections in real time. The code includes the tools for creating the network, connecting synapses, creating the AER format, and a demo for controlling a Virtual Model of a FAN.
    The link to the Cerebellar Network: https://bitbucket.org/rdpinzonm/the-bicnn-model
    Some details of the architecture of the cerebellar model:
    In comparison with traditional ANNs or RNNs, the CNN has a very peculiar architecture with at least three layers (see below, Fig. 1). Inputs from the external world, such as the position of the arms, legs, or sensors of a robot, are carried to the cerebellum via mossy fibers (mf). The mfs are then processed in the input layer, which includes Golgi (Go) and Granule cells (Gr). The ratio of Gr to mf is around 1000:1, whereas Go to Gr is 15000:1. Because of these numbers, it has been proposed that the input layer of the cerebellum transforms the input mfs into a sparse representation, easing the work of the other layers. The second layer, the molecular layer, which could be regarded as a hidden layer, includes Basket and Stellate cells (Ba/St). Their numbers are similar to Go, and their role is still a matter of debate. The last layer, the output layer, includes Purkinje cells (Pk). There are around 150,000 Gr per Pk. This is a remarkable feature, because the Pk is the only output of the cerebellar cortex. The output of the cerebellar cortex will eventually reach the motor centers to correct movements. The CNN includes a plausible learning rule of the cerebellum at the synapses between Gr and Pk. It works as a supervised anti-Hebbian, or anti-correlation, rule in the following way: the teaching signal carrying the information about erroneous motions of the leg, arm, robot, etc. is conveyed by the climbing fiber (cf) to a single Pk. The synaptic weights of Gr-Pk are then decreased if there is both cf and Gr activity, whereas if there is no cf activity (i.e., no error) the weights are increased. What this rule means is that those Gr producing errors have their weights decreased, while those decreasing the error are promoted by increasing their weights.
    Fig. 1. Neural Network Model of the Cerebellum. mf, Mossy fibers (inputs); Go, Golgi Cells; Gr, Granule cells; Ba/St, Basket and Stellate cells; Pk, Purkinje Cell (Sole output of the cerebellar cortex); cf, climbing fiber (teaching signal); pf, parallel fibers (synapses of pf-Pk are the only adjustable weights in this model, and main loci in the cerebellum); and IO, inferior olivary nucleus.
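    In code, the learning rule described above reduces to a conditional update per active parallel fiber. A minimal editorial sketch in Java (the method name, rates, and clipping are illustrative assumptions, not taken from the linked LabVIEW code):

        // Anti-correlation rule for Gr-Pk synapses: coincident climbing-fiber
        // (cf) and granule-cell (Gr) activity depresses the weight; Gr activity
        // without a cf error signal potentiates it.
        static void updateGrPkWeights(double[] w, boolean[] grActive, boolean cfActive) {
            final double LTD = 0.05; // depression step when cf reports an error
            final double LTP = 0.01; // potentiation step when cf is silent
            for (int i = 0; i < w.length; i++) {
                if (!grActive[i]) continue;     // only active parallel fibers change
                w[i] += cfActive ? -LTD : LTP;  // anti-Hebbian update
                if (w[i] < 0) w[i] = 0;         // keep weights non-negative
            }
        }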
    Cheers,
    As you can see, the CNN has a very interesting and simple architecture with huge potential as an adaptive controller. Do not hesitate to use the model, explore its code, and post any thoughts, questions, comments, or issues. The LabVIEW project includes a demo for constructing a CNN and employing it in classical feedback control of a DC FAN. Figs. 2-3 are some pictures of the application:
    Fig 2. 3D construction of the CNN in LabVIEW representing a cube of the cerebellar cortex with edge length 100 um. Red mf, cyan Gr, green Go, yellow Ba/St, purple Pk.
    Fig 3. Screen capture of the demo application in LabVIEW for the CNN used for controlling a Virtual Model of a DC FAN.
    Thanks,

    Hi gerh. Nice observation! Indeed there are many good software packages out there that are optimized for constructing neural network models. However, none of them offers the flexibility and the hardware-integration capability that LabVIEW provides. You see, the CNN is being developed to be easily incorporated into engineering applications.
    I haven't tried CV, but I think it could be possible to use the CNN with a 1D representation of the image.

  • Trouble Setting Neural Network Parameter

    I am trying to create a neural network mining model using the DMX code below:
    ALTER MINING STRUCTURE [Application]
    ADD MINING MODEL [Neural Net]
    (
    Person_ID,
    Applied_Flag PREDICT,
    [system_entry_method_level_1],
    [system_entry_method_level_2],
    [system_entry_time_period]
    ) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES=300 )
    WITH DRILLTHROUGH
    but it is giving me this error:
    Error (Data mining): The 'MAXIMUM_INPUT_ATTRIBUTES' data mining parameter is not valid for the 'Neural Net' model.
    I found this thread:
    https://social.msdn.microsoft.com/forums/sqlserver/en-US/9f0cdecd-2e23-48da-aeb3-6ea2cd32ae2b/help-with-setting-algorithm-paramteres which said that the problem was that I was using Standard Edition instead of Enterprise Edition.
    This was indeed the case, but we thankfully had an Enterprise license available, so I did an "Edition Upgrade" (described here: https://msdn.microsoft.com/en-us/library/cc707783.aspx) from the SQL Server install DVD, but the statement continues to give this error. The instance of SQL Server installed on that machine indicates that the edition was upgraded (@@version is "Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Feb 20 2014 20:04:26 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)"), and when I did the upgrade it showed that Analysis Services was an installed feature, so I assumed it was being upgraded as well. I am not sure how to determine whether Analysis Services was upgraded, but the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSAS12.MYINSTANCE\MSSQLServer\CurrentVersion\CurrentVersion" is "12.0.2000.8" (hopefully this is helpful to someone in determining whether my AS version is Enterprise).
    Can anyone give me some hints on how to successfully make a neural net model with these parameters?
    Thanks

    Never mind, it turned out to be a simple solution. I just needed to reboot the server after the edition upgrade (after which I discovered that I needed to remove the "WITH DRILLTHROUGH" clause, because neural network models don't support it).

  • TargetInvocationException with Multiclass neural network

    Hi all,
    Wondering if anyone is having difficulties with multiclass neural networks - I have a custom multiclass neural network with the following definition script:
    input test [101];
    hidden H [101] from test all;
    hidden J [101] from H all;
    output Result [101] softmax from J all;
    I'm running it through the Sweep Parameters module, and my dataset has the first column as a label (0 - 100); the next 101 numbers are the input values.
    The training does occur since I get this on the log:
    [ModuleOutput] Iter:160/160, MeanErr=5.598084(0.00%), 1480.53M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Iter:150/160, MeanErr=5.600375(0.00%), 1480.48M WeightUpdates/sec
    [ModuleOutput] Estimated Post-training MeanError = 5.598006
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] Iter:151/160, MeanErr=5.600346(0.00%), 1475.91M WeightUpdates/sec
    [ModuleOutput] Iter:152/160, MeanErr=5.600317(0.00%), 1483.43M WeightUpdates/sec
    [ModuleOutput] Iter:153/160, MeanErr=5.600285(0.00%), 1477.52M WeightUpdates/sec
    [ModuleOutput] Iter:154/160, MeanErr=5.600252(0.00%), 1476.20M WeightUpdates/sec
    [ModuleOutput] Iter:155/160, MeanErr=5.600217(0.00%), 1482.20M WeightUpdates/sec
    [ModuleOutput] Iter:156/160, MeanErr=5.600180(0.00%), 1484.14M WeightUpdates/sec
    [ModuleOutput] Iter:157/160, MeanErr=5.600141(0.00%), 1477.28M WeightUpdates/sec
    [ModuleOutput] Iter:158/160, MeanErr=5.600099(0.00%), 1483.68M WeightUpdates/sec
    [ModuleOutput] Iter:159/160, MeanErr=5.600055(0.00%), 1483.56M WeightUpdates/sec
    [ModuleOutput] Iter:160/160, MeanErr=5.600007(0.00%), 1453.19M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Estimated Post-training MeanError = 5.600238
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] DllModuleHost Stop: 1 : DllModuleMethod::Execute. Duration: 00:05:20.1489353
    [ModuleOutput] DllModuleHost Error: 1 : Program::Main encountered fatal exception: Microsoft.Analytics.Exceptions.ErrorMapping+ModuleException: Error 0000: Internal error ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Right hand side shape must match region being assigned to
    Module finished after a runtime of 00:05:20.3363329 with exit code -2
    Module failed due to negative exit code of -2
    But something seems to break after a few sweeps.
    Regards,
    Jarrel

    Hi Jarrel,
    Sorry for the trouble; this is actually a known defect with multiclass neural networks, defect #3533885. I've increased the priority of the defect so that it will be addressed sooner. If you need a workaround for this issue I could help you; please let me know. Probably changing the random seed or the number of folds in cross-validation within the parameter sweep would fix this issue.
    Thank you, Ilya

  • Want Help on Neural Network

    I have developed some code for the implementation of a two-layer neural network structure in LabVIEW. The network is supposed to read the training sets of data from a file and train itself, but it is not working, maybe because of some error. The network can simulate successfully but is unable to train itself properly. I require this network for the implementation of a very novel project.
    I have marked the whole program with appropriate descriptive tags (see trainig.vi). If someone can try it out and find the error, it will be of great help to me. I will then be able to post the correct network for the benefit of others.
    Attachments:
    data1.txt ‏6 KB
    our net.zip ‏75 KB

    I have two suggestions for improving your code and increasing your chances of troubleshooting it accurately.
    The first suggestion is not to use sequence structures (flat or stacked). If you need to make one part of your code happen after another, consider using a state machine architecture as described here:
    http://zone.ni.com/devzone/cda/tut/p/id/3024
    Additionally, instead of using variables (local or global), transport your data using wires. This way you can be sure to conform to LabVIEW's dataflow model.
    Both of these things will make your code easier to read and debug.
    Best of luck!

  • Neural network: is there any toolkit?

    Is there any toolkit for using neural networks with LabVIEW? (I am not an expert on neural networks; I was just told today to try to solve a problem using neural networks, and I don't even know where to start from... well... I am starting from LabVIEW!)
    Solved!
    Go to Solution.

    If you want to just use it and keep it simple, use this one: https://decibel.ni.com/content/docs/DOC-41891
    Best regards, Piotr
    Certified TestStand Architect
    Certified LabVIEW Architect

  • Neural Networks

    Hello All,
    I did a search in the forums under neural networks. There didn't seem to be much work done with LabVIEW and neural networks. I did find a post where someone had developed code for a feed-forward back propagation neural net, which is what I'm hoping to use, but it was developed in LabVIEW 5.1. I'm using 8.6, and when I tried to open the VIs, LabVIEW said they were too old to convert to 8.6. Has anyone done any current work with neural networks and LabVIEW?
    I'm very familiar with neural networks in MATLAB. I've also used a MATLAB script to run some more complex signal processing functions that LabVIEW doesn't support. I'm wondering if I could integrate MATLAB and LabVIEW while using a neural network. I could do all my training offline in MATLAB and then pass my real-time data into a MATLAB script from LabVIEW. Does anyone know if this is possible? How would I load an already trained neural net from MATLAB using the MATLAB script in LabVIEW? My data acquisition is in LabVIEW so I'd like to stay in LabVIEW if possible. Does anyone have any ideas?
    Thanks, Alan Smith

    The first 3 links on this page may be of assistance, from the Developer Zone:
    http://zone.ni.com/devzone/fn/p/sb/navsRel?q=neural
    -AK2DM
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • Error due to network connections during a calculation in partition

    hi all,
    I am executing a calculation in one of the partitions and I am faced with an error which came up in the log file; it stopped the calculation and proceeded to the next calculation.
    The log file where I encountered the error is quoted below. The recovery script didn't really execute and moved on to the next script. Can someone help me with this?
    MAXL> execute calculation PGAPY2.PGA.Recovery;
    OK/INFO - 1012675 - Commit Blocks Interval for the calculation is [60000].
    OK/INFO - 1012670 - Aggregating [ PGA Subcomponent(All members) PGA Rate Codes(All members)] with fixed members [Measures(Bill Count, Base Charge Count, Billed Volume excl. WNA (MCF), Base Cost of Gas Volume (MCF), Total PGA Charge, Base Rate Gas Recovery); Scenario(PY2 Actu.
    OK/INFO - 1012678 - Calculating in parallel with [2] threads.
    OK/INFO - 1012679 - Calculation task schedule [2425,1212,13,1].
    OK/INFO - 1012680 - Parallelizing using [1] task dimensions. .
    OK/INFO - 1012568 - Commit Blocks Interval was adjusted to be [100000] blocks.
    OK/INFO - 1012681 - Empty tasks [713,1212,13,1].
    OK/INFO - 1012675 - Commit Blocks Interval for the calculation is [60000].
    OK/INFO - 1012668 - Calculating [ Measures(Billed Volume excl. WNA (MCF),Base Cost of Gas Volume (MCF))] with fixed members [Scenario(PY2 Actual); PGA Subcomponent(PGA Subcomponent)].
    OK/INFO - 1012678 - Calculating in parallel with [2] threads.
    OK/INFO - 1012679 - Calculation task schedule [2425,1212,13,1].
    OK/INFO - 1012680 - Parallelizing using [1] task dimensions. .
    OK/INFO - 1012568 - Commit Blocks Interval was adjusted to be [100000] blocks.
    OK/INFO - 1012681 - Empty tasks [713,1212,13,1].
    OK/INFO - 1012675 - Commit Blocks Interval for the calculation is [60000].
    OK/INFO - 1012670 - Aggregating [ PGA Rate Codes(All members)] with fixed members [Scenario(PY2 Actual); PGA Subcomponent(PGA Subcomponent)].
    OK/INFO - 1012678 - Calculating in parallel with [2] threads.
    OK/INFO - 1012679 - Calculation task schedule [2425,1212,13,1].
    OK/INFO - 1012680 - Parallelizing using [1] task dimensions. .
    OK/INFO - 1012568 - Commit Blocks Interval was adjusted to be [89164] blocks.
    OK/INFO - 1012681 - Empty tasks [713,1212,13,1].
    OK/INFO - 1012675 - Commit Blocks Interval for the calculation is [60000].
    OK/INFO - 1012668 - Calculating [ Measures(Billed Volume excl. WNA (MCF))] with fixed members [Scenario(PY2 Actual); PGA Rate Codes(PGA Rate Codes); Primary Rate Codes(00WU (40), 01MC (60), 01MI (60), 01MP (60), 01MR (60), 02MB (60), 03LR (25), 03TP (20), 03TR (20), 05NC (60.
    OK/INFO - 1012678 - Calculating in parallel with [2] threads.
    OK/INFO - 1012679 - Calculation task schedule [2425,1212,13,1].
    OK/INFO - 1012680 - Parallelizing using [1] task dimensions. .
    OK/INFO - 1012568 - Commit Blocks Interval was adjusted to be [100000] blocks.
    OK/INFO - 1012681 - Empty tasks [713,1212,13,1].
    OK/INFO - 1012675 - Commit Blocks Interval for the calculation is [60000].
    OK/INFO - 1012670 - Aggregating [ Charge Month(All members) Customer Class(All members) Company(All members) Primary Rate Codes(All members) Geography(All members)] with fixed members [Measures(Bill Count, Base Charge Count, Billed Volume excl. WNA (MCF), Base Cost of Gas Vo.
    OK/INFO - 1012678 - Calculating in parallel with [2] threads.
    OK/INFO - 1012679 - Calculation task schedule [2425,1212,13,1].
    OK/INFO - 1012680 - Parallelizing using [1] task dimensions. .
    OK/INFO - 1012568 - Commit Blocks Interval was adjusted to be [63715] blocks.
    ERROR - 1042017 - Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and.
    ERROR - 1006059 - Invalid block header: Illegal block type -- Please use the IBH Locate/Fix utilities to find/fix the IBH problem.

    Agreed.
    The network error is the client saying it gave up waiting for the server's response, or the server waiting on a pending client request. Typically, this is what happens when the application does a core dump (to use the old-school term).
    Causes are numerous, but the result is often data corruption (not always). Ensure that the app isn't in a hung state on the server (for Windows, find its thread and kill it if necessary; for Unix, use kill -9 on its PID). Once it's down, the database may recover fully, or may require fixing -- but if it's hosed it's often easier to restore from backup and move on.
    Avoiding the issue in the future: the message to increase NetRetryCount/NetDelay is misleading. If you get timeouts on the network often, this is indeed the answer, but if you got the timeout because of a locked database, it's just a transient symptom that is unrelated.
    What caused the lock-up? Intermittent network issues from an ill-timed backup of the server in the next rack, or a power outage in a router -- it doesn't really matter, unless it becomes a recurring issue; then it's time to find the network manager and pound some common sense into them (just kidding, I'm sure they will be proactively addressing issues like this... well, maybe...). Just let them know that you are having network performance issues if it happens again.
    Meanwhile, validating the database is important to ensure the corruption has been cleaned up. You can do it from EAS or with a MAXL statement, but do it before trying to repair it with any tools or recovering from a backup.

  • Example of Neural Network and fuzzy logic with labVIEW

    Does anyone have any examples of fuzzy logic and neural networks built with LabVIEW?
    I am particularly interested in predicting machining outputs (cutting force, surface roughness) from machining regimes (parameters such as depth of cut, speed, etc.). However, any example would do ...
    Thank you.

    Warm regards,
    Karunya R
    National Instruments
    Applications Engineer
    Attachments:
    Easy example851.vi ‏29 KB
    App example one851.vi ‏15 KB
    App example two851.vi ‏20 KB

  • Unable To See Network Drive from outside Home Network... even when using DynDNS

    Hi
    I have just got a Freecom Media Storage Network Center which I have attached to my WRT150N router.
    I am attempting to permit authorised users access to files on the drive via the internet (presumably via FTP).
    Here is what I have done:
    Switched on DynDNS on the web admin page in the router (used the DynDNS.com free service).
    Obtained a user address, middle-earth.dyndns-home.com, from DynDNS, which is linked to my Virgin Media IP.
    Switched on port range forwarding with:
    Ports 20-21 as FTP
    Port 80 as HTTP
    Port 57 as DNS
    Selected "both protocols"
    Set the IP address to route to 192.168.1.42 (this is the IP address that the web-based software for the Network Drive reports).
    However, the configuration of the TCP/IP protocol in the PC attached by LAN cable to the router is "Obtain an IP address automatically" (i.e. dynamic?). If I change these settings and specify static IP addresses, will I not muck up the internet connection to Virgin Media?
    What I was hoping to achieve was that typing ftp://middle-earth.dyndns-home.com in the address bar would let me see the network drive from anywhere!
    I have tried using DNS pinging on ports 20 and 21, but that tells me my ports are closed. I have also tried switching off my software firewall (Kaspersky) and the router firewall, but this does nothing.
    What have I done wrong, or not done?!
    Thanks

    I think the problem is solved.
    The Freecom device needs a different part of its interface to be used to assign it a static IP address, which I have now done. At my last try it was visible from two independent viewpoints outside of my network.
    On another front, I am appalled at Linksys/Cisco, who are not prepared to advise on getting more from their products if they are out of warranty. The online chat tech said I could ring an 0871 premium-line number in the UK. When I rang that, I was told that as my unit is not faulty and was out of warranty, I would have to use their pay-per-incident service.
    That is not a way to deal with customers. When it comes to upgrading or replacing network equipment, I will look to a provider who is interested in their customers.

  • How to prevent BPC from automatically calculating hierarchies / nodes?

    Hi experts,
    I am looking for a practicable way to prevent the system from automatically calculating hierarchies, and especially nodes within hierarchies.
    Let's say I have ENTITIES (as a hierarchy) in rows and ACCOUNTS in columns. Now I want the system to block adding up the values for one specific account on node XY. Instead of the sum of all base member entities, the cell for account XY should be left empty.
    Is there a practicable way to deal with this?

    Hi Stefan,
    you can prevent the system from calculating a node by editing the Formula property in the dimension.
    You can insert 0 or null into the Formula field for the specific element.
    But in this case the values from the leaf elements lying under your node won't be counted into higher hierarchy nodes either.
    For example, with the following structure:
    -A
    ---A1
    -----A11
    -----A12
    ---B1
    -----B11
    -----B12
    If you set A1 to 0, the top node A will only be calculated from the values of B1.
    Regards
    Jörg

  • Can I create a network object from CIDR format or do I need to use IP - netmask?

    Have a cisco ASA running ASA V 8.3
    Wondering what the correct syntax is, or even if it is possible, to create a network object from a list of IPs in CIDR format?
    Typically just do this:
    Create network-object
    object-group network name
    network-object 1.2.3.0 255.255.255.0
    Would like to do this: 
    network-object 1.2.3.0/24
    thanks!

    Hi,
    As far as I know the ASA does not support entering a network/subnet mask in such format in any of its configurations.
    - Jouni
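    As an aside, when generating ASA configuration from a list of CIDR prefixes, the prefix length can be expanded mechanically into the dotted mask the ASA expects. A small editorial sketch in Java (illustrative helper, not ASA syntax):

        // Expand a CIDR prefix length (e.g. 24) into a dotted netmask string.
        static String cidrToMask(int prefix) {
            long mask = (prefix == 0) ? 0 : 0xFFFFFFFFL << (32 - prefix);
            return String.format("%d.%d.%d.%d",
                    (mask >> 24) & 0xFF, (mask >> 16) & 0xFF,
                    (mask >> 8) & 0xFF, mask & 0xFF);
        }

        // cidrToMask(24) -> "255.255.255.0", so "1.2.3.0/24" becomes:
        //   network-object 1.2.3.0 255.255.255.0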

  • How do I get a time value in days, hours and minutes returned to a cell from a calculation of distance divided by speed?

    How do I get a time value in days, hours and minutes returned to a cell from a calculation of distance divided by speed?

    Simon,
    you can use the duration function:
    B4=DURATION(0,0,B1/B2)
    You can further format the cell as a duration using the cell inspector.
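    For example (illustrative numbers): if B1 holds a distance of 1000 km and B2 a speed of 80 km/h, B1/B2 evaluates to 12.5 hours, and DURATION(0,0,12.5) shows as 12h 30m; with the cell formatted as a duration it can then be displayed in days, hours and minutes.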

  • Can't find network printer from Windows XP and Bonjour

    Does anyone else have a problem printing to network printers from Windows XP machines? I have Bonjour for Windows installed and had been printing fine from my Windows XP machines, but lately the XP machines can't seem to find the printers. They work fine from all the Macs (Tiger 10.4.11 and Leopard 10.5.4). I'm wondering if some recent Windows XP patch broke the network printers, or the way the printing system deals with ports that reference hostname.local, or how hostname.local works on Windows XP. I suppose it could have been a Norton Internet Security update too.
    It seems like the XP machines can no longer find printers when they use printerhostname.local for their printer port name.
    Bonjour can see the printers when it initially starts up and looks for printers. Bonjour even sets up the printer port as:
    PrinterName.local using port 9101 /* for an Airport Extreme attached printer */
    The only way I can get the printers to print is to create another port with the actual IP address like:
    192.168.1.123 using port 9101
    which kind of defeats the purpose of the PrinterName.local naming convention.
    This doesn't just happen on Airport Extreme attached printers. It also happens on any printer's host name that is qualified with a .local tag.
    The weird thing is that I can ping the printer using the printer.local hostname.
    I'm stumped. Any clues?

    I am also unable to access my printer through Bonjour after updating XP to Service Pack 3.

    Hi all, I was wondering what people's views are on the following - Datastore Identity hides the identity value of objects but there are cetainly cases where it would be very useful to obtain the PK id as it is in the database. My concrete example is