Open source code of a Neural Network Model of the Cerebellum

I would like to share with the community the code of a Neural Network Model of the Cerebellum (CNN). I have been using the CNN for studying the cerebellum and for adaptive robot control. The CNN was developed using Object-Oriented Programming (OOP) and a customized Address Event Representation (AER) format. Using OOP and AER allows the construction and evaluation of CNNs with more than 80 k neurons and more than 400 k synaptic connections in real time. The code includes tools for creating the network, connecting synapses, creating the AER format, and a demo for controlling a virtual model of a DC fan.
The link to the Cerebellar Network: https://bitbucket.org/rdpinzonm/the-bicnn-model
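For readers new to AER: the repository's customized format isn't detailed in this post, but a generic address-event is simply a pair of (neuron address, timestamp). A minimal sketch, with field and class names of my own invention rather than the repository's:

```java
// Generic Address Event Representation (AER) event: a spike is encoded as
// the address of the neuron that fired plus the time at which it fired.
// Field and class names here are illustrative, not the repository's.
public class AerEvent {
    public final int neuronAddress; // which neuron spiked
    public final long timestampUs;  // when it spiked (e.g., microseconds)

    public AerEvent(int neuronAddress, long timestampUs) {
        this.neuronAddress = neuronAddress;
        this.timestampUs = timestampUs;
    }

    public static void main(String[] args) {
        AerEvent e = new AerEvent(42, 1000L);
        System.out.println("neuron " + e.neuronAddress + " spiked at " + e.timestampUs + " us");
    }
}
```

The appeal of event-based representations like this is that only spikes are stored and transmitted, which is part of what makes simulating tens of thousands of neurons in real time feasible.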
Some details of the architecture of the cerebellar model:
In comparison with a traditional ANN or RNN, the CNN has a very peculiar architecture with at least three layers (see Fig. 1 below). Inputs from the external world, such as the position of the arms or legs, or sensor readings from a robot, are carried to the cerebellum via mossy fibers (mf). The mfs are then processed in the input layer, which includes Golgi (Go) and Granule (Gr) cells. The ratio of Gr to mf is around 1000:1, whereas Go to Gr is 15000:1. Because of these numbers, it has been proposed that the input layer of the cerebellum transforms the mf inputs into a sparse representation, easing the work of the other layers.
The second layer, the molecular layer, which could be regarded as a hidden layer, includes Basket and Stellate cells (Ba/St). Their numbers are similar to Go, and their role is still a matter of debate. The last layer, the output layer, includes Purkinje cells (Pk). There are around 150,000 Gr per Pk. This is a remarkable feature because the Pk is the only output of the cerebellar cortex. The output of the cerebellar cortex eventually reaches the motor centers to correct movements.
The CNN includes a plausible learning rule of the cerebellum at the synapses between Gr and Pk. It works as a supervised anti-Hebbian rule, or an anti-correlation rule, in the following way: the teaching signal carrying the information about erroneous motions of the leg, arm, robot, etc., is conveyed by the climbing fiber (cf) to a single Pk. The synaptic weights of Gr-Pk are then decreased if there is both cf and Gr activity, whereas if there is no cf activity (i.e., no error) the weights are increased. What this rule means is that those Gr contributing to errors have their weights decreased, while those decreasing the error are promoted by increasing their weights.
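The rule described above can be sketched as a scalar weight update. This is a minimal illustration under my own naming and an arbitrary learning rate; the repository's actual implementation may differ:

```java
// Sketch of the supervised anti-Hebbian (anti-correlation) rule at the
// Gr-Pk synapse: coincident granule-cell (Gr) and climbing-fiber (cf)
// activity depresses the weight (that Gr input was associated with an
// error); Gr activity without cf potentiates it.
// Names and the learning rate are illustrative, not the repository's.
public class GrPkSynapse {
    static double update(double w, boolean grActive, boolean cfActive) {
        final double rate = 0.01; // illustrative learning rate
        if (grActive && cfActive) {
            return w - rate;      // error signal present: depress
        } else if (grActive) {
            return w + rate;      // no error signal: potentiate
        }
        return w;                 // inactive Gr: weight unchanged
    }

    public static void main(String[] args) {
        System.out.println(update(0.5, true, true));  // depressed weight
        System.out.println(update(0.5, true, false)); // potentiated weight
    }
}
```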
Fig. 1. Neural Network Model of the Cerebellum. mf, Mossy fibers (inputs); Go, Golgi Cells; Gr, Granule cells; Ba/St, Basket and Stellate cells; Pk, Purkinje Cell (Sole output of the cerebellar cortex); cf, climbing fiber (teaching signal); pf, parallel fibers (synapses of pf-Pk are the only adjustable weights in this model, and main loci in the cerebellum); and IO, inferior olivary nucleus.
As you can see, the CNN has a very interesting and simple architecture with huge potential for adaptive control. Do not hesitate to use the model, explore its code, and post any thoughts, questions, comments, or issues. The LabVIEW project includes a demo for constructing a CNN and employing it in classical feedback control of a DC fan. Figs. 2-3 are some pictures of the application:
Fig. 2. 3D construction of the CNN in LabVIEW representing a cube of the cerebellar cortex with edge length 100 µm. Red: mf; cyan: Gr; green: Go; yellow: Ba/St; purple: Pk.
Fig. 3. Screen capture of the demo application in LabVIEW for the CNN used to control a virtual model of a DC fan.
Thanks,

Hi gerh. Nice observation! Indeed, there are many good software packages out there that are optimized for constructing neural network models. However, none of them offers the flexibility and the hardware-integration capability that LabVIEW provides. You see, the CNN is being developed to be easily incorporated into engineering applications.
I haven't tried CV, but I think it should be possible to use the CNN with a 1D representation of the image.
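A "1D representation of the image" could be as simple as a row-major flattening of the 2D pixel array into a vector that drives the mossy-fiber inputs. A purely illustrative sketch — the CNN's actual input convention is not specified here:

```java
// Flatten a 2D grayscale image into a 1D vector (row-major order),
// e.g. to feed image pixels to the model's mossy-fiber inputs.
// Purely illustrative; the CNN's real input convention may differ.
public class Flatten {
    static double[] flatten(double[][] image) {
        int rows = image.length, cols = image[0].length;
        double[] v = new double[rows * cols];
        for (int r = 0; r < rows; r++) {
            // copy row r into its slot of the output vector
            System.arraycopy(image[r], 0, v, r * cols, cols);
        }
        return v;
    }

    public static void main(String[] args) {
        double[][] img = {{0.0, 0.5}, {1.0, 0.25}};
        // prints [0.0, 0.5, 1.0, 0.25]
        System.out.println(java.util.Arrays.toString(flatten(img)));
    }
}
```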

Similar Messages

  • Trouble Setting Neural Network Parameter

    I am trying to create a neural network mining model using the DMX code below:
    ALTER MINING STRUCTURE [Application]
    ADD MINING MODEL [Neural Net]
    (
    Person_ID,
    Applied_Flag PREDICT,
    [system_entry_method_level_1],
    [system_entry_method_level_2],
    [system_entry_time_period]
    ) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES = 300)
    WITH DRILLTHROUGH
    but it is giving me this error:
    Error (Data mining): The 'MAXIMUM_INPUT_ATTRIBUTES' data mining parameter is not valid for the 'Neural Net' model.
    I found this thread:
    https://social.msdn.microsoft.com/forums/sqlserver/en-US/9f0cdecd-2e23-48da-aeb3-6ea2cd32ae2b/help-with-setting-algorithm-paramteres which said that the problem was that I was using Standard Edition instead of Enterprise Edition.
    This was indeed the case, but we thankfully had an enterprise license available, so I did an "Edition Upgrade" (described here: https://msdn.microsoft.com/en-us/library/cc707783.aspx) from
    the SQL Server install DVD, but the statement continues to give this error. The instance of SQL Server installed on that machine indicates that the edition was upgraded (@@version is "Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Feb
    20 2014 20:04:26 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)"), and when I did the upgrade it showed that Analysis Services was an installed
    feature, so I assumed it was upgrading that as well. I am not sure how to determine whether Analysis Services was upgraded, but the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSAS12.MYINSTANCE\MSSQLServer\CurrentVersion\CurrentVersion"
    is "12.0.2000.8" (hopefully this is helpful to someone in determining if my AS version is Enterprise).
    Can anyone give me some hints on how to successfully make a neural net model with these parameters?
    Thanks

    Never mind — it turned out to be a simple solution. I just needed to reboot the server after the edition upgrade (after which I discovered that I needed to remove the "WITH DRILLTHROUGH" clause, because neural network models
    don't support it).
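Putting the two fixes together, the statement that should finally work on Enterprise Edition (after a reboot) is the original DMX with the opening parenthesis restored and the WITH DRILLTHROUGH clause removed. This is a reconstruction from the thread, not a tested statement:

```sql
ALTER MINING STRUCTURE [Application]
ADD MINING MODEL [Neural Net]
(
    Person_ID,
    Applied_Flag PREDICT,
    [system_entry_method_level_1],
    [system_entry_method_level_2],
    [system_entry_time_period]
) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES = 300)
```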

  • Neural Network Issue

    Please help
    I am trying to run a Neural Network for my company. I have a data set that I already used to train a Logistic Regression function using SAS EG, but I wanted to see if I could predict better using a Neural Network in SQL SSAS, given that my outcome (equal
    to 1) is a rare event.
    Using the same data set, I created a data mining structure in SQL SSAS similar to the one used to train my logistic regression model in SAS EG. In SQL SSAS, I set a Holdout Seed, so that if I left off at the end of the day I could work with the exact same
    model the next day.
    However, when I ran the model 'the next day' I got different results, my score was different, my classification matrix was different, etc. And not just a little different, very different.
    Based on further investigation, I found I had included some variables that had the potential to cause separation (as determined by SAS Logistic procedure). If I removed these variables, I could recreate my model 'the next day'. However, if I did
    not have SAS EG, I would not have known which variables were problematic without going through quite a bit of work and testing. In SQL SSAS, there is no warning in the log to tell me which variables were causing my issue.
    So my question:
    In SQL SSAS
    Is there a way to train a Neural Network Model and have it identify any variables that are causing a potential problem in the model?
    Or is there a way to extend the training duration to make sure I achieve similar results each time I run the model?
    Any help would be greatly appreciated
    ~S

    Hi TJ,
    A lot of us are still looking at Azure for answers on this one. The problem is ongoing for many. While workarounds are available depending on context, it's nothing to do with the configuration of your servers, but rather to an unresolved problem at
    the Azure end.
    Alexander

  • Best networking model for a java network application

    Hi all,
    I need to make a decision regarding the network technology to use for a CRM.
    I am thinking about JAX-RPC to build the network model for the system. I'll consider functionality details after deciding on the network model.
    Which will be better? JAX-RPC? I haven't used it before. How good is that technology for a client-server CRM which needs frequent remote method calls?
    What about using servlets?
    My requirement is that I'll create XML documents on the client and send them to the server; the server parses the XML file, makes the necessary method calls, and returns other XML data to the client.
    I can't use a database server for network functioning, because the application should also run over the internet, so installing a client-server database and only writing a client application will not work.
    My application should work equally well over the internet and a local network.
    So which technology can I implement for better results?
    What about servlets?
    Servlets can take inputs as web forms, but can servlets work with XML inputs from a Swing client (no browser)?
    What will be the best technology without EJB? Because only Tomcat can be used for deployment.
    Please, anyone, suggest a good opinion.
    What will be best for me?

    > Many thanks for your efforts, but I already know all of what you have said.
    Then you should be in fine shape for starting this project.
    > But what I need is good insights. You have told me to "start the project and you'll get them". OK. But after doing a lot of projects, one thing I know is that getting as many insights as possible into a project always helps, because there may be issues that we don't consider, or better patterns.
    I can't help you based on what you've posted so far. You're talking about gathering good requirements and doing a detailed analysis and design. That's not what this forum is for. You should be doing that with your customers and teammates.
    > And I'll be using JDOM, because it's quite easy.
    Very good.
    > But some more insights? I need insights in areas like: when the XML file reaches the controller servlet, it should dispatch the XML file to many JavaBean classes according to its purpose. Do you spot anything there?
    I don't know what those Bean classes will be doing with the XML, so it's hard to say.
    If you're looking to create an XML message broker, where clients send XML requests to be dispatched and then get the response back, you might be talking about an asynchronous application with JMS and queues. That's very different from a servlet, but then I don't know much about what you're really trying to do.
    In either case, whether you use a queue or a servlet, you'll have to think about how you'll go about routing a message. You'll have to figure out how to recognize and associate a particular XML message with the right Java Bean or destination.
    > Any suggestions for creating XML files?
    Sometimes you just have to bite the bullet and create them. No magic here.
    > I already thank you for your opinions. But please take care with these small issues as well if you have done such apps. Let's learn more together so we can make applications very fast, with good insights in small, deeper areas.
    I don't mean to be rude, but I can't do any detailed analysis and design with you. That's up to you, your customer, and your teammates.

  • Question re: MapViewer and Network Model functions

    Hi all,
    Quick question re: MapViewer and its support for the Network Model and the shortest-path functions. If I create a base map with a network theme based on my (large) network, and then use a jdbc_network_query on the same network to do some shortest-path analysis, will MapViewer use the (hopefully) cached copy of the network to calculate the shortest path? I.e., can I expect good response times once the cache is warmed up?
    Thanks,
    Steve

    Hi Steve,
    MapViewer uses the Network Java library to run the shortest-path algorithm. This library is independent of the MapViewer cache, and is also not thread safe, so for now the network is always loaded. The load time may be reduced if the request has an MBR, but that is not the ideal solution. There is work going on, and we hope to avoid this load in future versions.
    Thanks.

  • Opening in code view in CS5

    A while back I posted a discussion seeking information about having PHP files open in code view instead of design view. (CS4).
    I found out that I was able to tell PHP files to open in code view, but for some reason the creators of DW saw fit to make it so that if you force files to open in code view, then you cannot switch to design view by any means.
    As a PHP coder I rarely go into design or split/design view, but I do have an occasional need to do so.
    My question is, has this been fixed in CS5? In other words, can I have my PHP files set to ALWAYS open in code view, but still have the option to simply click on the design/split button and go into design mode?
    Thanks,
    Cy

    I did as you suggested, and this morning I opened DW, then opened the PHP file I was last editing yesterday. The same thing happened: it opened the file in design view.
    This particular file is a class and would never really be looked at in design view. Nevertheless, though I NEVER put it in design view, it opened that way. So the argument that a file will open in the last view it was in is not valid here either.
    I just don't get it. Everybody I have told about this agrees that it is downright unreasonable to lock out the ability to choose Split or Design just because you told DW, via the preferences, to always open PHP files in code view. The idea reeks of a bunch of programmers deciding on usability, which they evidently don't understand in their own little world. I've worked at a major software development company and I know far too well the mentality of programmers who develop far beyond my capabilities.
    DW devs, please take note: Just because we want to force open a file in CODE view doesn't mean we don't want to switch to design view at some point for that same file. Forcing us to open files in design is like telling us you know better, and believe me, you don't.
    It's really simple - Allow us to set a file type to open only in code view, but don't lock us out by disabling the buttons to switch to split or design if we want.
    Cy
    PS: A pic of what happens when I force .php files to open in code view:
    As you can see, Split and Design are disabled. WHY force this?

  • Nonlinear system identification using neural network (black box model)

     Hello, my thesis work is based on "surface EMG - angular acceleration modeling using different system identification techniques". Can anyone help me with doing nonlinear system identification using a neural network?

    Well, look at that.  I actually had this problem before--and SOLVED it before!  [facepalm]  I'd forgotten all about it....
    https://bbs.archlinux.org/viewtopic.php?id=140151
    I just added "vmalloc=256" to my linux line, and X started right up!
    [edit] Well, mythtv had the solution, as well:  http://www.mythtv.org/wiki/Common_Probl … _too_small
    Last edited by wilberfan (2012-11-05 19:38:06)

  • Neural Networks

    Hello All,
    I did a search in the forums on neural networks. There didn't seem to be much work done with LabVIEW and neural networks. I did find a post where someone had developed code for a feed-forward back-propagation neural net, which is what I'm hoping to use, but it was developed in LabVIEW 5.1. I'm using 8.6, and when I tried to open the VIs, LabVIEW said they were too old to convert to 8.6. Has anyone done any current work with neural networks and LabVIEW?
    I'm very familiar with neural networks in MATLAB. I've also used a MATLAB script to run some more complex signal-processing functions that LabVIEW doesn't support. I'm wondering if I could integrate MATLAB and LabVIEW while using a neural network. I could do all my training offline in MATLAB and then pass my real-time data into a MATLAB script from LabVIEW. Does anyone know if this is possible? How would I load an already-trained neural net from MATLAB using the MATLAB script in LabVIEW? My data acquisition is in LabVIEW, so I'd like to stay in LabVIEW if possible. Does anyone have any ideas?
    Thanks, Alan Smith

    The first 3 links in this page may be of assistance, from the Developer Zone:
    http://zone.ni.com/devzone/fn/p/sb/navsRel?q=neural
    -AK2DM
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It's the questions that drive us."
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • How to set the number of hidden layers for neural network?

     I am using "Multiclass Neural Network" to build a model. I can configure the number of hidden nodes, iterations, etc., but I couldn't find anything to configure the number of hidden layers. How do I configure the number of hidden layers in Azure ML?

    Here is the article describing it: https://msdn.microsoft.com/library/azure/e8b401fb-230a-4b21-bd11-d1fda0d57c1f?f=255&MSPPError=-2147217396
    Basically, you will have to use a custom definition script in Net# (http://azure.microsoft.com/en-us/documentation/articles/machine-learning-azure-ml-netsharp-reference-guide/)
    to create the hidden layers and the nodes per hidden layer.
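For illustration, a Net# definition with two fully connected hidden layers might look like the following sketch, modeled on the Net# reference linked above; layer names and sizes here are arbitrary:

```
// Hypothetical Net# sketch: two fully connected hidden layers.
input Data [100];
hidden H1 [64] from Data all;
hidden H2 [32] from H1 all;
output Result [10] softmax from H2 all;
```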

  • Designing a network model

    Hi,
    Let me start by saying that I'm still pretty much a newbie when it comes to networking. I have studied CCNA and CCNP in my school and am now working on my final project, which is to design a basic network model that can be applied to small and medium-sized companies.
    The network core devices are a Cisco Catalyst 3550 switch and a Cisco PIX 515E (7.0). The C3550 will handle the traffic inside the network, and connections outward will go through the PIX firewall.
    In this work I am going to divide the ports in the switch into 3 different VLANs for the assumed departments of the company (production, offices, administration/servers, etc., with more added if needed).
    I'm making access lists for every VLAN, and I am wondering: should I only use these ACLs to set what kind of traffic goes between the VLANs in the company's inside network and let the PIX handle the traffic that enters and leaves the network? Should I have an ACL in the switch already preventing some kind of traffic from going forward to the PIX?
    I have found it a bit hard building access lists for both inbound and outbound VLAN traffic, as I feel I have to open a lot of ports to get the most basic traffic flowing without problems in the inside network (programs using ports > 1024 for return traffic get their return packets blocked, unless I open a lot of those higher port numbers).
    Should I just limit what traffic can exit a VLAN and leave the rest of the traffic-flow inspection for the PIX to handle? Will this provide enough security to the network, provided the end stations have proper software protection and the switch is secured to prevent the addition of unwanted networking devices? I'm somewhat unsure about the PIX device itself, as my studies never crossed paths with it, so I never got to use one before this point.
    Any views on how to handle security at different points of the network would be greatly appreciated.
    - Jouni Forss

    Thanks for the fast reply.
    As we haven't gone in depth into securing a network in our studies, I feel the need to find as much info as possible on best practices for securing a network. All our studies have given a pretty narrow look into the ways to do that.
    I'm pretty sure I will go with applying outbound traffic ACLs to each VLAN, and after the switch has been secured I will move on to configuring the PIX.
    Basically, the main idea is to have all the different departments connect to the server VLAN for resources. Only the office and admin/server VLANs will have a connection to the outside world. This is of course just a basic idea to start building the configuration on, and the ACLs would probably change depending on the real-life application.
    Another goal is to build in the possibility of VPN connections to the server VLAN from the outside world, which is another thing I need to get into after the switch. These connections would come from, perhaps, home offices or similar places with DSL connections, to perform some remote management on the servers and such.
    The customers using this type of network model would mostly be behind slow connections, and there wouldn't be any high-load traffic going out of or into the network (DSL etc. connections).
    From reading info on the PIX, I presume that in this situation it would be best to use it in transparent mode between the C3550 and the DSL modem in question. Or maybe use the PIX in routed mode and configure the outside interface to get its IP address via DHCP from the DSL modem? Or maybe some static configuration would be better there.
    One thing I would like to know about the PIX: does it have some basic settings that would make it possible to basically insert it into the network and have it already provide some basic protection? I'd like to know if there's some good baseline from which I could start building the configuration suited to the network in question.
    I find myself lacking a lot of basic information concerning firewalls and the PIX, even though it should be really essential in my studies. That's why I would like to know: how much does the PIX's ability to keep the network secure depend on the right type of configuration, or does it perform most of its measures to intercept harmful traffic automatically with built-in methods? (Not really sure of my choice of words.)
    I guess at this point I would really appreciate any tips that any of you experienced PIX users could give me to set me on my way to configuring my firewall to provide sufficient protection for the network.
    - Jouni Forss

  • Network Model - Shortest Path

    Hi all,
    I have created spatial network containing non lrs sdo_geometry objects in Oracle 10g (Release 2). In this network model there are 33208 nodes and 42406 links.
    Now I need to write a java program to find shortest route between two selected nodes. Here is snippet of my source code.
    Connection connection = DriverManager.getConnection(dbUrl, databaseUser, databasePassword);
    boolean readForUpdate = false;
    Network net = NetworkManager.readNetwork(connection, "SDO_ROAD_NETWORK", readForUpdate);
    Path path = NetworkManager.shortestPath(net, startNodeId, endNodeId);
    System.out.println("total distance " + path.getCost());
    Link[] linkArray = path.getLinkArray();
    But this throws an exception: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    It was working fine with 1000 nodes and 1000 links. I tried changing Java options like the -Xms and -Xmx parameters, but got the same result.
    Then I tried to find shortest route using pl/sql using following.
    DECLARE
        testNet VARCHAR2(30) := 'SDO_ROAD_NETWORK';
        startNode NUMBER := 120150;
        endNode NUMBER := 1740034;
        path NUMBER;
        linkArray SDO_NUMBER_ARRAY;
    BEGIN
        sdo_net_mem.network_manager.read_network('SDO_ROAD_NETWORK', 'FALSE');
        dbms_output.put_line('Loading finished');
        path := SDO_NET_MEM.NETWORK_MANAGER.SHORTEST_PATH_DIJKSTRA('SDO_ROAD_NETWORK', startNode, endNode);
        IF path IS NULL THEN
            dbms_output.put_line('route not found');
            RETURN;
        END IF;
        linkArray := SDO_NET_MEM.PATH.GET_LINK_IDS(testNet, path);
        FOR i IN linkArray.first..linkArray.last LOOP
            dbms_output.put_line('Link -- ' || linkArray(i) || ' ' ||
                SDO_NET_MEM.LINK.GET_NAME(testNet, linkArray(i)) || ' ' ||
                SDO_NET_MEM.LINK.GET_COST(testNet, linkArray(i)));
        END LOOP;
    END;
    /
    But this takes nearly 4 minutes just to read the network (sdo_net_mem.network_manager.read_network).
    Finally, I downloaded the standalone Java client application NDM Network Editor from OTN. This application loads the entire network within 25 seconds and finds the shortest route within 5 seconds.
    Please guide me on how I can write improved code for reading the network. My requirement is to get the shortest path between two nodes.
    Thanks,
    Sujnan

    Hi Sujnan
    In the past there have been some performance issues with the Oracle JVM. I'm not sure if this is addressed in the latest releases (10gR2 or 11g).
    Performance Status OJVM used for SDO Network data Model 10R2.0.2
    Maybe the oracle guys can give an update.
    Luc

  • Network Model - AStar dies, Dijkstra is OK

    Hi all,
    I'm using the Network Model to model a road network. I have a problem when a "complicated path" can't be found. By this I mean two nodes that aren't connected (one of them is an island with no connectivity in the network), but the intervening network "space" is complicated and extensive. When the intervening "space" is smaller, there isn't a problem.
    When I use shortestPathAStar(), my Java app just eats up memory and CPU and eventually runs out of Java heap space. When I use shortestPathDijkstra(), the code correctly works out that there is no path between the two.
    The problem for me is that I'd really like to use AStar. I'm calculating many paths through the network, and A* is just so much faster...
    Are there any bugs or known issues in this area, or has anyone else seen anything? I can't find anything on Metalink.
    Thanks
    Steve

    The AStar algorithm uses more memory than the Dijkstra algorithm in the NDM API.
    If the network is not fully connected, you could use the isReachable(network, startNodeID, endNodeID) method first to find out whether there exists at least one path before computing the shortest path.
    The overhead of this method is small compared to the shortest-path algorithms.
    You could also try to increase the heap size (using Java -Xmx heapSize) when running your application.
    By the way, what is the size of the network and what is the java heap size you use?
    What version are you using?
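The reachability-check-before-pathfinding pattern suggested above can be sketched with a toy graph in plain Java. The Oracle NDM API itself needs a database connection, so this sketch uses a hypothetical adjacency list and a BFS in place of isReachable():

```java
import java.util.*;

// Sketch of "check reachability cheaply before running an expensive
// shortest-path search": a BFS reachability test over an adjacency list.
// In the Oracle NDM API, isReachable() plays this role against a real network.
public class ReachabilityFirst {
    static boolean isReachable(Map<Integer, List<Integer>> adj, int start, int end) {
        Deque<Integer> queue = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            if (node == end) return true;
            for (int next : adj.getOrDefault(node, List.of())) {
                if (seen.add(next)) queue.add(next); // enqueue unvisited neighbors
            }
        }
        return false; // frontier exhausted: end is unreachable
    }

    public static void main(String[] args) {
        // Node 4 is an "island": no links connect to it.
        Map<Integer, List<Integer>> adj = Map.of(
            1, List.of(2), 2, List.of(3), 3, List.of(1));
        System.out.println(isReachable(adj, 1, 3)); // same component: run A* next
        System.out.println(isReachable(adj, 1, 4)); // island: skip A* entirely
    }
}
```

Only if the cheap check returns true do you pay for shortestPathAStar(); for island nodes you avoid the memory blow-up entirely.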

  • Artificial Neural Network: NaN from a calculation

    Hi everyone,
    I'm programming a small pattern-recognition neural network at the moment but have run into a small snag. I'm receiving a NaN result from a calculation and I don't know why.
        void calculateOutput() {
            double preSigmoidOutput = 0d;
            // Find pre-sigmoid output
            Iterator connectionsIterator = connections.iterator();
            while (connectionsIterator.hasNext()) {
                Connection connection = (Connection) connectionsIterator.next();
                preSigmoidOutput += connection.weight * connection.entryNode.output;
            }
            // Perform squash
            output = 1 / (1 + Math.log(preSigmoidOutput));
        }
    I think the problem is occurring at the "output =" line. I've already set a breakpoint and watched what was going on; basically, preSigmoidOutput is usually a very small but long number (e.g. 0.05543464575674564), which then at the "output =" line produces a NaN result. Is that mathematical operation overflowing the double datatype? And if so, how would I go about stopping this?
    Thanks,
    Chris

    tsith wrote:
        sabre150 wrote:
            BlueWrath wrote:
                Turns out this line:
                    double logVal = Math.log(-preSigmoidOutput);
                causes a NaN result even though preSigmoidOutput is a valid double (in the case I'm examining now, preSigmoidOutput is equal to 0.01067537271542014). Anyone know why I'm getting a NaN result from this code?
            I hope the value of 'preSigmoidOutput' is less than zero, since if it is zero the log() is -infinity, and if it is positive then you are taking the log of a negative number, which is illegal.
        Flag on the play! OP loses 10 yards :-)
    That will teach me to skip the rest of a post when I know what is wrong!

  • TargetInvocationException with Multiclass neural network

    Hi all,
    Wondering if anyone is having difficulties with multi class neural networks - I have a custom multiclass neural network with the following definition script:
    input test [101];
    hidden H [101] from test all;
    hidden J [101] from H all;
    output Result [101] softmax from J all;
    I'm running it through the Sweep Parameters module, and my dataset has the first column as a label (0-100) while the next 101 numbers are the input values.
    The training does occur, since I get this in the log:
    [ModuleOutput] Iter:160/160, MeanErr=5.598084(0.00%), 1480.53M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Iter:150/160, MeanErr=5.600375(0.00%), 1480.48M WeightUpdates/sec
    [ModuleOutput] Estimated Post-training MeanError = 5.598006
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] Iter:151/160, MeanErr=5.600346(0.00%), 1475.91M WeightUpdates/sec
    [ModuleOutput] Iter:152/160, MeanErr=5.600317(0.00%), 1483.43M WeightUpdates/sec
    [ModuleOutput] Iter:153/160, MeanErr=5.600285(0.00%), 1477.52M WeightUpdates/sec
    [ModuleOutput] Iter:154/160, MeanErr=5.600252(0.00%), 1476.20M WeightUpdates/sec
    [ModuleOutput] Iter:155/160, MeanErr=5.600217(0.00%), 1482.20M WeightUpdates/sec
    [ModuleOutput] Iter:156/160, MeanErr=5.600180(0.00%), 1484.14M WeightUpdates/sec
    [ModuleOutput] Iter:157/160, MeanErr=5.600141(0.00%), 1477.28M WeightUpdates/sec
    [ModuleOutput] Iter:158/160, MeanErr=5.600099(0.00%), 1483.68M WeightUpdates/sec
    [ModuleOutput] Iter:159/160, MeanErr=5.600055(0.00%), 1483.56M WeightUpdates/sec
    [ModuleOutput] Iter:160/160, MeanErr=5.600007(0.00%), 1453.19M WeightUpdates/sec
    [ModuleOutput] Done!
    [ModuleOutput] Estimated Post-training MeanError = 5.600238
    [ModuleOutput] ___________________________________________________________________
    [ModuleOutput] DllModuleHost Stop: 1 : DllModuleMethod::Execute. Duration: 00:05:20.1489353
    [ModuleOutput] DllModuleHost Error: 1 : Program::Main encountered fatal exception: Microsoft.Analytics.Exceptions.ErrorMapping+ModuleException: Error 0000: Internal error ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Right hand side shape must match region being assigned to
    Module finished after a runtime of 00:05:20.3363329 with exit code -2
    Module failed due to negative exit code of -2
    But something seems to break after a few sweeps.
    Regards,
    Jarrel

    Hi Jarrel,
    Sorry for the trouble; this is actually a known defect with multiclass neural networks, defect #3533885. I've increased the priority of the defect so that it will be addressed sooner. If you need a workaround for this issue I can help
    you; please let me know. Changing the random seed or the number of folds in cross-validation within the parameter sweep would probably fix this issue.
    Thank you, Ilya

  • Want Help on Neural Network

    I have developed some code for the implementation of a two-layer neural network structure in LabVIEW. The network is supposed to read the training sets of data from a file and train itself, but it is not working, maybe because of some error. The network can simulate successfully but is unable to train itself properly. I require this network for the implementation of a very novel project.
    I have marked the whole program with appropriate descriptive tags (see trainig.vi). If someone can try it out and find the error, it would be of great help to me. I will then be able to post the correct network for the benefit of others.
    Attachments:
    data1.txt ‏6 KB
    our net.zip ‏75 KB

    I have two suggestions for improving your code and increasing your possibility of troubleshooting it accurately.
    The first suggestion is not to use sequence structures (flat or stacked). If you need to make one part of your code execute after another, consider using a state machine architecture, as described here:
    http://zone.ni.com/devzone/cda/tut/p/id/3024
    Additionally, instead of using variables (local or global), transport your data using wires. This way you can be sure to conform to LabVIEW's dataflow model.
    Both of these things will make your code easier to read and debug.
    Best of luck!
