Neural network backpropagation

Hi,
I've created a backpropagation neural network but it doesn't seem to work. I've looked over the code many times but can't seem to debug it. Can anyone help me? I can send my code if needed.

ehsanmasaud wrote:
Hi, I've created a backpropagation neural network but it doesn't seem to work. I've looked over the code many times but can't seem to debug it. Can anyone help me? I can send my code if needed.
If you want help on these forums, you're going to have to provide an [SSCCE|http://sscce.org] and ask a specific question. Narrow it down to the couple of lines that are making it "not work" (and define what "doesn't seem to work" means, exactly).
See also: [How To Ask Questions The Smart Way|http://catb.org/~esr/faqs/smart-questions.html]

Similar Messages

  • How to set the number of hidden layers for neural network?

I am using "Multiclass Neural Network" to build a model. I can configure the number of hidden nodes, iterations, etc., but I couldn't find anything to configure the number of hidden layers. How do I configure the number of hidden layers in Azure ML?

    Here is the article describing it: https://msdn.microsoft.com/library/azure/e8b401fb-230a-4b21-bd11-d1fda0d57c1f?f=255&MSPPError=-2147217396
    Basically, you will have to use a custom definition script in Net# (http://azure.microsoft.com/en-us/documentation/articles/machine-learning-azure-ml-netsharp-reference-guide/)
    to create the hidden layers and the nodes per hidden layer.
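    As a rough sketch (the layer names and sizes below are illustrative, not from the article), a Net# definition with two hidden layers could look like this:
        input Data [10];
        hidden H1 [8] from Data all;
        hidden H2 [8] from H1 all;
        output Result [2] softmax from H2 all;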

  • Open source code of a Neural Network Model of the Cerebellum

    I would like to share with the community the code of a Neural Network Model of the Cerebellum (CNN). I have been using the CNN for studying the cerebellum and for adaptive robot control. The CNN was developed using Object Oriented Programming (OOP) and a customized Address Event Representation (AER) format. Using OOP and AER allows the construction and evaluation of CNNs with more than 80k neurons and more than 400k synaptic connections in real time. The code includes the tools for creating the network, connecting synapses, creating the AER format, and a demo for controlling a Virtual Model of a FAN.
    The link to the Cerebellar Network: https://bitbucket.org/rdpinzonm/the-bicnn-model
    Some details of the architecture of the cerebellar model:
    In comparison with traditional ANNs or RNNs, the CNN has a very peculiar architecture with at least three layers (see below, Fig. 1). Inputs from the external world, such as the position of the arms, legs, or sensors from a robot, are carried to the cerebellum via mossy fibers (mf). mfs are then processed in the input layer, which includes Golgi (Go) and Granule cells (Gr). The ratio of Gr to mf is around 1000:1, and Gr to Go around 15000:1. Because of these numbers, it has been proposed that the input layer of the cerebellum transforms the mf inputs into a sparse representation, easing the work of the other layers. The second layer, the molecular layer, which could be regarded as a hidden layer, includes Basket and Stellate cells (Ba/St). Their numbers are similar to Go, and their role is still a matter of debate. The last layer, the output layer, includes Purkinje cells (Pk). There are around 150,000 Gr per Pk. This is a remarkable feature, because the Pk is the only output of the cerebellar cortex. The output of the cerebellar cortex eventually reaches the motor centers to correct movements. The CNN includes a plausible learning rule of the cerebellum at the synapses between Gr and Pk. It works as a supervised anti-Hebbian rule, or an anti-correlation rule, in the following way: the teaching signal carrying the information about erroneous motions of the leg, arm, robot, etc., is conveyed by the climbing fiber (cf) to a single Pk. Then, the synaptic weights of Gr-Pk are decreased if there is both cf and Gr activity, whereas if there is no cf activity (i.e., no error) the weights are increased. What this rule means is that those Gr producing errors have their weights decreased, while those decreasing the error are promoted by increasing their weights.
    Fig. 1. Neural Network Model of the Cerebellum. mf, Mossy fibers (inputs); Go, Golgi Cells; Gr, Granule cells; Ba/St, Basket and Stellate cells; Pk, Purkinje Cell (Sole output of the cerebellar cortex); cf, climbing fiber (teaching signal); pf, parallel fibers (synapses of pf-Pk are the only adjustable weights in this model, and main loci in the cerebellum); and IO, inferior olivary nucleus.
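    A minimal sketch of that Gr-Pk rule in Java (the method, names, and step sizes are illustrative, not taken from the repository):
        // Anti-Hebbian / anti-correlation update at Gr-Pk synapses, as described above:
        // depress a weight when granule activity coincides with a climbing-fiber error
        // signal, potentiate it when the granule cell fires without an error.
        static void updateGrPk(double[] grPk, double[] grActivity, boolean cfError) {
            final double ltd = 0.01;   // depression step size (assumed)
            final double ltp = 0.001;  // potentiation step size (assumed)
            for (int i = 0; i < grPk.length; i++) {
                if (grActivity[i] <= 0) continue;           // only active Gr synapses change
                grPk[i] += cfError ? -ltd * grActivity[i]   // cf + Gr activity: decrease
                                   :  ltp * grActivity[i];  // Gr activity, no cf: increase
            }
        }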
    As you can see, the CNN has a very interesting and simple architecture with huge potential for adaptive control. Do not hesitate to use the model, explore its code, and post any thoughts, questions, comments, or issues. The LabVIEW project includes a demo for constructing a CNN and employing it in a classical feedback control of a DC FAN. Figs. 2-3 are some pictures of the application:
    Fig 2. 3D construction of the CNN in LabVIEW representing a cube of the cerebellar cortex with edge length 100 um. Red mf, cyan Gr, green Go, yellow Ba/St, purple Pk.
    Fig 3. Screen capture of the demo application in LabVIEW for the CNN used for controlling a Virtual Model of a DC FAN.
    Thanks,

    Hi gerh. Nice observation! Indeed, there are many good software packages out there that are optimized for constructing neural network models. However, none of them has the flexibility and hardware-integration capability that LabVIEW provides. You see, the CNN is being developed to be easily incorporated into engineering applications.
    I haven't tried CV, but I think it could be possible to use the CNN with a 1D representation of the image. 

  • Trouble Setting Neural Network Parameter

    I am trying to create a neural network mining model using the DMX code below:
    ALTER MINING STRUCTURE [Application]
    ADD MINING MODEL [Neural Net]
    (
    Person_ID,
    Applied_Flag PREDICT,
    [system_entry_method_level_1],
    [system_entry_method_level_2],
    [system_entry_time_period]
    ) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES=300 )
    WITH DRILLTHROUGH
    but it is giving me this error:
    Error (Data mining): The 'MAXIMUM_INPUT_ATTRIBUTES' data mining parameter is not valid for the 'Neural Net' model.
    I found this thread:
    https://social.msdn.microsoft.com/forums/sqlserver/en-US/9f0cdecd-2e23-48da-aeb3-6ea2cd32ae2b/help-with-setting-algorithm-paramteres, which said that the problem was that I was using Standard Edition instead of Enterprise Edition.
    This was indeed the case, but we thankfully had an Enterprise license available, so I did an "Edition Upgrade" (described here: https://msdn.microsoft.com/en-us/library/cc707783.aspx) from the SQL Server install DVD, but the statement continues to give this error. The instance of SQL Server installed on that machine indicates that the edition was upgraded (@@version is "Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Feb 20 2014 20:04:26 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)"), and when I did the upgrade it showed that Analysis Services was an installed feature, so I assumed it was upgrading that as well. I am not sure how to determine whether Analysis Services was upgraded, but the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSAS12.MYINSTANCE\MSSQLServer\CurrentVersion\CurrentVersion" is "12.0.2000.8" (hopefully this is helpful to someone in determining if my AS version is Enterprise).
    Can anyone give me some hints on how to successfully make a neural net model with these parameters?
    Thanks

    Nevermind, it turned out to be a simple solution. I just needed to reboot the server after the edition upgrade (after which I discovered that I needed to remove the "WITH DRILLTHROUGH" clause, because neural network models don't support it).
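    For anyone hitting the same error, the working statement after the upgrade would presumably be the original DMX minus the drillthrough clause:
        ALTER MINING STRUCTURE [Application]
        ADD MINING MODEL [Neural Net]
        (
        Person_ID,
        Applied_Flag PREDICT,
        [system_entry_method_level_1],
        [system_entry_method_level_2],
        [system_entry_time_period]
        ) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES = 300)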

  • Artificial Neural Network: NaN from a calculation

    Hi everyone,
    I'm programming a small pattern recognition neural network at the moment but have run into a small snag. I'm receiving a NaN result from a calculation and I don't know why.
        void calculateOutput() {
            double preSigmoidOutput = 0d;
            //Find pre-sigmoid output: sum of weighted inputs
            Iterator connectionsIterator = connections.iterator();
            while (connectionsIterator.hasNext()) {
                Connection connection = (Connection) connectionsIterator.next();
                preSigmoidOutput += connection.weight * connection.entryNode.output;
            }
            //Perform squash
            output = 1 / (1 + Math.log(preSigmoidOutput));
        }
    I think the problem is occurring at the "output =" line. I've already set a breakpoint and watched what was going on; basically, preSigmoidOutput is usually a very small but long number (e.g. 0.05543464575674564), which then at the "output =" line produces a NaN result. Is that mathematical operation overflowing the double datatype? And if so, how would I go about stopping it?
    Thanks,
    Chris

    tsith wrote:
    sabre150 wrote:
    BlueWrath wrote:
    Turns out this line:
    double logVal = Math.log(-preSigmoidOutput);
    causes a NaN result even though preSigmoidOutput is a valid double (in the case I'm examining now, preSigmoidOutput is equal to 0.01067537271542014). Anyone know why I'm getting a NaN result from this code?
    I hope the value of 'preSigmoidOutput' is less than zero, since if it is zero the log() is -infinity, and if it is positive then you are taking the log of a negative number, which is illegal.
    Flag on the play! OP loses 10 yards :-) That will teach me to skip the rest of a post when I know what is wrong!
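    For reference, the usual logistic squash uses exp rather than log, and is defined for every real input, so it avoids the NaN entirely; a minimal sketch using the variable names from the posted code:
        //Perform squash: logistic sigmoid, 1 / (1 + e^-x), defined for all x
        output = 1 / (1 + Math.exp(-preSigmoidOutput));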

  • Settings of neural network

    Hi all,
    we are developing an algae reactor which is controlled by a computer with LabVIEW and a neural network. I have found the *.vi's and tried to get this thing running. We have the following inputs and outputs:
    Inputs
    pH-Value
    concentration
    efficiency
    Outputs
    Light on/off
    CO2 on/off
    The concentration is measured with a laser/photodiode and the efficiency is measured with two CO2 sensors (what goes in and what comes out). The data is captured by an NI USB-6008.
    Now my question is: How many hidden layers should I use?
    I have three possibilities:
    2
    4
    less than 6
    Kind regards
    Simon, Zurich University of Applied Sciences

    Hi Simon,
    I'm afraid nobody on the National Instruments support staff can tell you exactly how to implement your neural network, but we can support you with all data acquisition and LabVIEW programming issues.
    Maybe this helps you a bit:
    Implementing Neural Networks with LabVIEW - An Introduction
    Best Regards,
    Andreas S
    Systems Engineer

  • TargetInvocationException with Multiclass neural network

    Hi all,
    Wondering if anyone is having difficulties with multiclass neural networks - I have a custom multiclass neural network with the following definition script:
    input test [101];
    hidden H [101] from test all;
    hidden J [101] from H all;
    output Result [101] softmax from J all;
    I'm running it through the Sweep Parameters module, and my dataset has the first column as a label (0-100) and the next 101 numbers as the input values.
    The training does occur since I get this on the log:
    [ModuleOutput] Iter:160/160, MeanErr=5.598084(0.00%), 1480.53M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Iter:150/160, MeanErr=5.600375(0.00%), 1480.48M WeightUpdates/sec
    [ModuleOutput] Estimated Post-training MeanError = 5.598006
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] Iter:151/160, MeanErr=5.600346(0.00%), 1475.91M WeightUpdates/sec
    [ModuleOutput] Iter:152/160, MeanErr=5.600317(0.00%), 1483.43M WeightUpdates/sec
    [ModuleOutput] Iter:153/160, MeanErr=5.600285(0.00%), 1477.52M WeightUpdates/sec
    [ModuleOutput] Iter:154/160, MeanErr=5.600252(0.00%), 1476.20M WeightUpdates/sec
    [ModuleOutput] Iter:155/160, MeanErr=5.600217(0.00%), 1482.20M WeightUpdates/sec
    [ModuleOutput] Iter:156/160, MeanErr=5.600180(0.00%), 1484.14M WeightUpdates/sec
    [ModuleOutput] Iter:157/160, MeanErr=5.600141(0.00%), 1477.28M WeightUpdates/sec
    [ModuleOutput] Iter:158/160, MeanErr=5.600099(0.00%), 1483.68M WeightUpdates/sec
    [ModuleOutput] Iter:159/160, MeanErr=5.600055(0.00%), 1483.56M WeightUpdates/sec
    [ModuleOutput] Iter:160/160, MeanErr=5.600007(0.00%), 1453.19M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Estimated Post-training MeanError = 5.600238
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] DllModuleHost Stop: 1 : DllModuleMethod::Execute. Duration: 00:05:20.1489353
    [ModuleOutput] DllModuleHost Error: 1 : Program::Main encountered fatal exception: Microsoft.Analytics.Exceptions.ErrorMapping+ModuleException: Error 0000: Internal error ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Right hand side shape must match region being assigned to
    Module finished after a runtime of 00:05:20.3363329 with exit code -2
    Module failed due to negative exit code of -2
    But something seems to break after a few sweeps.
    Regards,
    Jarrel

    Hi Jarrel,
    Sorry for the trouble; this is actually a known defect with multiclass neural networks, defect #3533885. I've increased the priority of the defect so that it will be addressed sooner. If you need a workaround for this issue I can help you; please let me know. Changing the random seed or the number of folds in cross validation within the parameter sweep would probably fix this issue.
    Thank you, Ilya

  • Want Help on Neural Network

    I have developed some code implementing a two-layer neural network structure in LabVIEW. The network is supposed to read the training sets of data from a file and train itself, but it is not working, maybe because of some error. The network can simulate successfully but is unable to train itself properly. I require this network for the implementation of a very novel project.
    I have marked the whole program with appropriate descriptive tags (see trainig.vi). If someone can try it out and find the error, it will be of great help to me. I will then be able to post the correct network for the benefit of others.
    Attachments:
    data1.txt ‏6 KB
    our net.zip ‏75 KB

    I have two suggestions for improving your code and increasing your chances of troubleshooting it accurately.
    The first suggestion is not to use sequence structures (flat or stacked). If you need to make one part of your code happen after another, consider using a state machine architecture as described here:
    http://zone.ni.com/devzone/cda/tut/p/id/3024
    Additionally, instead of using variables (local or global), transport your data using wires. This way you can be sure to conform to LabVIEW's dataflow model.
    Both of these things will make your code easier to read and debug.
    Best of luck!

  • Neural network

    Hi guys,
    I have written a neural network with a standard back-propagation learning algorithm which aims to learn the XOR logic function. However, it doesn't seem to work as expected. If I present all the patterns to it (0,0; 1,1; 0,1; 1,0) the weights stay pretty static, but if I just train it with 0,0 and 1,1 it seems to work (after a lot of epochs, about 300).
    I have included my code below:
    Does anybody have any idea why it's not working?
    /* Generated by Together */
    public class NN {
      //weights
      private static double _w1 = 0.5;
      private static double _w2 = 0.9;
      private static double _w3 = 0.4;
      private static double _w4 = 1.0;
      private static double _w5 = -1.2;
      private static double _w6 = 1.1;
      //thresholds
      private static double _t1 = 0.8;
      private static double _t2 = -0.1;
      private static double _t3 = 0.3;
      //neuron outputs
      private static double _n1 = 0;
      private static double _n2 = 0;
      private static double _n3 = 0;
      private static int[][] inputs = new int[4][2];
      private static int[] desired = new int[4];
      public NN() {}
      public static void main (String[] args) {
        inputs[0][0] = 1;
        inputs[0][1] = 1;
        inputs[1][0] = 0;
        inputs[1][1] = 0;
        inputs[2][0] = 1;
        inputs[2][1] = 0;
        inputs[3][0] = 0;
        inputs[3][1] = 1;
        desired[0] = 0;
        desired[1] = 0;
        desired[2] = 1;
        desired[3] = 1;
        int y = 0;
        while (y <= 1000) {
          //NOTE: x < 2 presents only the first two patterns (1,1 and 0,0);
          //training on all four XOR patterns needs x < 4
          for (int x = 0; x < 2; x++) {
            double actual = feedforward(inputs[x][0], inputs[x][1]);
            updateWeights(inputs[x][0], inputs[x][1], desired[x], actual);
            System.out.println("Input 1 - " + inputs[x][0] + " Input 2 - " + inputs[x][1] + " Desired - " + desired[x]);
            printWeights(desired[x]);
          }
          y++;
        }
      }
      public static void printWeights(double desired) {
        System.out.println("Actual - " + _n3);
        System.out.println("Desired - " + desired);
        System.out.println("Weight 1 - " + _w1);
        System.out.println("Weight 2 - " + _w2);
        System.out.println("Weight 3 - " + _w3);
        System.out.println("Weight 4 - " + _w4);
        System.out.println("Weight 5 - " + _w5);
        System.out.println("Weight 6 - " + _w6);
        System.out.println("True Neuron 1 - " + _t1);
        System.out.println("True Neuron 2 - " + _t2);
        System.out.println("True Neuron 3 - " + _t3);
      }
      public static double activation(double input) {
        double actResult = 1 / (1 + (Math.exp(-(input))));
        return actResult;
      }
      public static void updateWeights(int inputOne, int inputTwo, double desired, double actual) {
        double error = desired - actual;
        System.out.println("Error - " + error);
        //error gradients for the output and hidden neurons
        double _eg3 = _n3 * (1 - _n3) * error;
        double _eg1 = _n1 * (1 - _n1) * _eg3 * _w5;
        double _eg2 = _n2 * (1 - _n2) * _eg3 * _w6;
        double learningRate = 0.1;
        //bias changes; the thresholds enter feedforward() as -t, so the
        //changes take the opposite sign of the gradient terms
        double _ct1 = -learningRate * _eg1;
        double _ct2 = -learningRate * _eg2;
        double _ct3 = -learningRate * _eg3;
        //hidden layer weight changes; per feedforward(), _w2 connects input
        //one to neuron 2 and _w3 connects input two to neuron 1, so _cw2
        //must use _eg2 and _cw3 must use _eg1 (they were swapped)
        double _cw1 = learningRate * inputOne * _eg1;
        double _cw2 = learningRate * inputOne * _eg2;
        double _cw3 = learningRate * inputTwo * _eg1;
        double _cw4 = learningRate * inputTwo * _eg2;
        //output layer weight changes
        double _cw5 = learningRate * _n1 * _eg3;
        double _cw6 = learningRate * _n2 * _eg3;
        //update weights
        _w1 = _w1 + _cw1;
        _w2 = _w2 + _cw2;
        _w3 = _w3 + _cw3;
        _w4 = _w4 + _cw4;
        _w5 = _w5 + _cw5;
        _w6 = _w6 + _cw6;
        //update true neuron (threshold) weights
        _t1 = _t1 + _ct1;
        _t2 = _t2 + _ct2;
        _t3 = _t3 + _ct3;
      }
      public static double feedforward(int inputOne, int inputTwo) {
        double y1 = (inputOne * _w1) + (inputTwo * _w3) - (1 * _t1);
        double y2 = (inputOne * _w2) + (inputTwo * _w4) - (1 * _t2);
        _n1 = activation(y1);
        _n2 = activation(y2);
        double y3 = (_n1 * _w5) + (_n2 * _w6) - (1 * _t3);
        _n3 = activation(y3);
        return _n3;
      }
    }
    Any help, or pointers to any specific NN forums, would be a great help.
    Many thanks
    Alex

    Nothing is wrong with scanning :) I just need to do this using a neural network.
    Well, it doesn't need to look inside a file, that was just an example. You can put the 0s and 1s into the code or whatever. The important thing is that the network should train.
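    A minimal sketch of how the main loop in the NN class above could present all four XOR patterns per epoch and stop on convergence (the error threshold and epoch cap are illustrative):
        //present every pattern each epoch and accumulate the squared error
        for (int epoch = 0; epoch < 100000; epoch++) {
            double totalError = 0;
            for (int x = 0; x < 4; x++) {   // all four patterns, not just the first two
                double actual = feedforward(inputs[x][0], inputs[x][1]);
                double err = desired[x] - actual;
                totalError += err * err;
                updateWeights(inputs[x][0], inputs[x][1], desired[x], actual);
            }
            if (totalError < 0.001) break;  // close enough on all patterns; stop
        }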

  • Neural Network Issue

    Please help
    I am trying to run a Neural Network for my company. I have a data set that I already used to train a Logistic Regression function using SAS EG, but I wanted to see if I could better predict using a Neural Network in SQL SSAS, given that my outcome (equal to 1) is a rare event.
    Using the same data set, I created a data mining structure in SQL SSAS similar to the one used to train my logistic regression model in SAS EG. In SQL SSAS, I set a Holdout Seed so that if I left off at the end of the day, I could work with the exact same model the next day.
    However, when I ran the model 'the next day' I got different results, my score was different, my classification matrix was different, etc. And not just a little different, very different.
    Based on further investigation, I found I had included some variables that had the potential to cause separation (as determined by the SAS Logistic procedure). If I removed these variables, I could recreate my model 'the next day'. However, if I did not have SAS EG, I would not have known which variables were problematic without going through quite a bit of work and testing. In SQL SSAS, there is no warning in the log to tell me which variables were causing my issue.
    So my question:
    In SQL SSAS
    Is there a way to train a Neural Network Model and have it identify any variables that are causing a potential problem in the model?
    Or is there a way to extend the training duration to make sure I achieve similar results each time I run the model?
    Any help would be greatly appreciated
    ~S

    Hi TJ,
    A lot of us are still looking at Azure for answers on this one. The problem is ongoing for many. While workarounds are available depending on context, it's nothing to do with the configuration of your servers, but rather an unresolved problem at the Azure end.
    Alexander

  • Neural network: is there any toolkit?

    Is there any toolkit for using neural networks with LabVIEW? (I am not an expert on neural networks; I was just told today to try to solve a problem using neural networks, and I don't even know where to start from... well... I am starting from LabVIEW!)
    Solved!
    Go to Solution.

    If you just want something simple to use, try this one: https://decibel.ni.com/content/docs/DOC-41891
    Best regards, Piotr
    Certified TestStand Architect
    Certified LabVIEW Architect

  • Neural Networks

    Hello All,
    I did a search in the forums under neural networks. There didn't seem to be much work done with LabVIEW and neural networks. I did find a post where someone had developed code for a feed-forward back-propagation neural net, which is what I'm hoping to use, but it was developed in LabVIEW 5.1. I'm using 8.6, and when I tried to open the VIs, LabVIEW said they were too old to convert to 8.6. Has anyone done any current work with neural networks and LabVIEW?
    I'm very familiar with neural networks in MATLAB. I've also used a MATLAB script to run some more complex signal processing functions that LabVIEW doesn't support. I'm wondering if I could integrate MATLAB and LabVIEW while using a neural network. I could do all my training offline in MATLAB and then pass my real-time data into a MATLAB script from LabVIEW. Does anyone know if this is possible? How would I load an already trained neural net from MATLAB using the MATLAB script in LabVIEW? My data acquisition is in LabVIEW, so I'd like to stay in LabVIEW if possible. Does anyone have any ideas?
    Thanks, Alan Smith

    The first 3 links in this page may be of assistance, from the Developer Zone:
    http://zone.ni.com/devzone/fn/p/sb/navsRel?q=neural
    -AK2DM
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • Using threads in a neural network

    Hello,
    I've written a neural network and I'm wondering how I could use threads in its execution to 1) increase (more precisely, achieve!) learning speed and 2) print out the current error value for the network so that I can see how it is working without using the debugger. Basically, I've read the Concurrency tutorial but I'm having trouble getting my head around how I can apply it to my network (must be one of those days!)
    I'll give a brief explanation of how I've implemented the NN to see if anybody can shed any light on how I should proceed (i.e. whether it can be threaded, what parts to thread, etc.)
    The network consists of classes:
    Neuron - stores input values to be put into the network and performs the activation functions (just a mathematical operation)
    WeightMatrix - contains random weights in a 2-D array with methods for accessing and changing those weights based on output error
    Layer - simply an array that stores a collection of neurons
    InputPattern - stores the values in an array and the target value of a pattern (e.g. for logical AND I would store pattern[0] = 1; pattern[1] = 1; target = 1;)
    PatternSet - set of InputPatterns stored so that they can be input into the network for learning
    NeuralNetwork - the main class that I want to thread. This class contains multiple Layers and multiple WeightMatrices (which connect the neurons in each layer). The learn algorithm then uses the methods of the previous classes to generate neuron inputs, outputs, and error values given a specific input. It uses a loop that iterates as follows:
        public float learn(PatternSet p) {
            InputPattern currentPattern = null;
            double netError = 0f;
            float previousError = 0f;
            float outputValue = 0f;
            float sum = 0f;
            float wcv = 0f;
            float output1 = 0f;
            float output2 = 0f;
            float currentError = 0f;
            float multiply = 0f;
            float outputError = 0f;
            float weight = 0f;
            int count;
            int setPosition = 0;
            int setSize = p.getSetSize();
            Neuron outputNeuron = layers[getNumberOfLayers()-1].getNeuron(0);
            //execute learning loop and repeat until an acceptable error value is obtained
            do {
                //set input layer neuron values to pattern values
                currentPattern = p.getPattern(setPosition);
                for (int i = 0; i < currentPattern.getPatternSize(); i++) {
                    layers[0].getNeuron(i).setNeuronInput(currentPattern.getValue(i));
                }
                currentError = layers[getNumberOfLayers()-1].getNeuron(0).getOutputError();
                //set target value of each output neuron
                for (int a = 0; a < layers[getNumberOfLayers()-1].getNumberOfNeurons(); a++) {
                    layers[getNumberOfLayers()-1].getNeuron(a).setTarget(currentPattern.getTarget());
                }
                //iterate between weight layers - i.e. there is a weight matrix between each layer of the NN
                for (int i = 0; i < getNumberOfLayers()-1; i++) {
                    for (int j = 0; j < layers[i+1].getNumberOfNeurons(); j++) {
                        sum = 0f;
                        count = 0;
                        //was "layers.getNumberOfNeurons()", which does not compile;
                        //the loop runs over the neurons of layer i
                        for (int k = 0; k < layers[i].getNumberOfNeurons(); k++) {
                            weight = weights[i].getWeight(k, j);
                            outputValue = layers[i].getNeuron(count).getOutput();
                            multiply = layers[i].getNeuron(count).getOutput() * (weights[i].getWeight(k, j));
                            //add values
                            sum = sum + multiply;
                            count++;
                            //check that all weighted neuron outputs have been completed
                            if (count == layers[i].getNumberOfNeurons()) {
                                //pass results to neuron
                                layers[i+1].getNeuron(j).setNeuronInput(sum);
                                //activate neuron
                                layers[i+1].getNeuron(j).neuronActivation();
                                //calculate output error of neuron for given input
                                layers[i+1].getNeuron(j).calculateOutputError();
                                //check that output layer has been reached and all neurons have been summed together
                                if (i == getNumberOfLayers()-2 && count == layers[i].getNumberOfNeurons()) {
                                    outputError = layers[i+1].getNeuron(j).getOutputError();
                                    netError = layers[i+1].getNeuron(j).getNetError();
                                    //back-propagate the error through the weight matrices
                                    for (int a = getNumberOfLayers()-1; a > 0; a--) {
                                        for (int b = 0; b < layers[a-1].getNumberOfNeurons(); b++) {
                                            for (int c = 0; c < layers[a].getNumberOfNeurons(); c++) {
                                                output1 = layers[a-1].getNeuron(b).getOutput();
                                                output2 = layers[a].getNeuron(c).getOutput();
                                                wcv = learningRate * (outputError) * output1 * output2 * (1 - output2);
                                                weights[a-1].changeWeight(wcv, b, c);
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
                learningCycle++;
                if (setPosition < setSize-1) {
                    setPosition++;
                } else {
                    setPosition = 0;
                }
            } while (netError > acceptableError && learningCycle < 1000000000);
            return currentError;
        }
    At the moment the net doesn't seem to learn to an acceptable degree of accuracy, so I was looking to use threads to monitor its error value change while I leave it running, just to ensure that it is working as intended (which it seems to be, based on the NetBeans debugger output). For the moment, all I'm aiming for is an output of the netError value of the NN at a particular time - would this be possible given my current implementation?
    Thanks for the help,
    Nick

    For a huge NN on a really multi-core CPU (reporting to the OS as multiple CPUs) one may benefit from having:
    - an example pump
    - separate threads for calculation of forward and backward propagation, with in/out queues.
    The example pump pumps one forward example to each forward-processing thread and waits for them to complete. Then it reads their output and finds the errors to backpropagate. It pumps the errors to the back-propagation threads. They find the weight corrections but do not update the weight matrix; they only push the corrections to temporary output arrays. The example pump takes those corrections, combines them, and updates the weights.
    Redo from start.
    The rule of thumb for high performance is: avoid locks. If you must access data which are changing, make a copy of them in a bulk operation, prepare the result in bulk, and read/write in bulk operations.
    In this example, the whole set of weight matrices and neuron states is that kind of data. Each thread should use a separate copy, and the teaching pump should combine them together. This means splitting the data into two blocks - non-changing data common to all threads (the geometry of the NN and the weights) and changing data separate for each thread (weight corrections, neuron inputs/outputs).
    Avoid "new", "clone", etc. in preference of System.arraycopy on existing data.
    Regards,
    Tomasz Sztejka.
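    A minimal Java sketch of the "example pump" pattern described above (Network, Example, and Gradient are hypothetical stand-ins for the poster's classes; a real implementation would follow the bulk-copy rules above):
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        //Example pump: workers compute corrections on private copies of the
        //weights; only the pump thread ever writes to the shared network.
        class ExamplePump {
            private final ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            double trainBatch(Network shared, List<Example> batch) throws Exception {
                List<Future<Gradient>> results = new ArrayList<>();
                for (Example e : batch) {
                    final Network copy = shared.copyWeights();    // bulk copy, no locks
                    results.add(workers.submit(() -> {
                        double[] out = copy.forward(e.inputs);    // forward pass
                        return copy.backward(e.targets, out);     // corrections only;
                    }));                                          // weights untouched
                }
                Gradient total = Gradient.zeroLike(shared);
                for (Future<Gradient> f : results) {
                    total.add(f.get());                           // pump combines results
                }
                shared.applyCorrections(total);                   // single-writer update
                return total.meanError();                         // hypothetical: Gradient
            }                                                     // also carries the error
        }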

  • Neural network clustering MATLAB

    Hello, I have a question about MATLAB. I have never used this software before, and right now I need it to develop a self-organising map. I am using MATLAB 2013b and the Neural Network Clustering app. Once I enter the data to classify, the plots the program returns show neither the names of the variables nor the names of the samples. If anyone could explain how to make the sample names appear in Plot Sample Hits and the variable or input names appear in Weight Planes, I would appreciate it.

    Hello,
    Is your question about MATLAB or LabVIEW?
    Regards,
    Felipe RC
    Field Applications Engineer
    National Instruments for Chile, Argentina, Peru, Bolivia, Paraguay and Uruguay
    (If my answer helped you, click the star to give Kudos)

  • Neural Network in LabVIEW

    I've searched the forums for any support for neural networks in LabVIEW, but found only one post, and the original author seems to have disappeared:
    http://forums.ni.com/ni/board/message?board.id=170&message.id=157334&query.id=59698#M157334
    Has anyone seen an implementation of any type of neural network in LabVIEW?
    Thanks,
    Derek

    Hi Derek
    I downloaded the code at the time of the post and still had it kicking about on my hard disk. Just for interest, you know...
    Way over my head
    Hope this helps you out.
    David
    Attachments:
    aNETka_ver_1-0.zip ‏1223 KB
