Using threads in a neural network

Hello,
I've written a neural network and I'm wondering how I could use threads in its execution to 1) increase (more precisely, achieve!) learning speed and 2) print out the current error value for the network so that I can see how it is working without using the debugger. Basically, I've read the Concurrency tutorial, but I'm having trouble getting my head around how I can apply it to my network (must be one of those days!).
I'll give a brief explanation of how I've implemented the NN to see if anybody can shed any light on how I should proceed (i.e. whether it can be threaded, what parts to thread, etc.).
The network consists of classes:
Neuron - stores input values to be put into the network and performs the activation functions (just a mathematical operation)
WeightMatrix - contains random weights in a 2-D array with methods for accessing and changing those weights based on output error
Layer - simply an array that stores a collection of neurons
InputPattern - stores the values of a pattern in an array along with its target value (e.g. for logical AND I would store pattern[0] = 1; pattern[1] = 1; target = 1;)
PatternSet - set of InputPatterns stored so that they can be input into the network for learning
NeuralNetwork - the main class that I want to thread. This class contains multiple Layers and multiple WeightMatrices (which connect the neurons in each layer). The learn algorithm then uses the methods of the previous classes to generate neuron inputs, outputs and error values for a given input. It uses a loop that iterates as follows:
    public float learn(PatternSet p) {
        InputPattern currentPattern = null;
        double netError = 0f;
        float previousError = 0f;
        float outputValue = 0f;
        float sum = 0f;
        float wcv = 0f;
        float output1 = 0f;
        float output2 = 0f;
        float currentError = 0f;
        float multiply = 0f;
        float outputError = 0f;
        float weight = 0f;
        int count;
        int setPosition = 0;
        int setSize = p.getSetSize();
        Neuron outputNeuron = layers[getNumberOfLayers()-1].getNeuron(0);
        //execute the learning loop and repeat until an acceptable error value is obtained
        do {
            //set input layer neuron values to the current pattern values
            currentPattern = p.getPattern(setPosition);
            for (int i = 0; i < currentPattern.getPatternSize(); i++) {
                layers[0].getNeuron(i).setNeuronInput(currentPattern.getValue(i));
            }
            currentError = layers[getNumberOfLayers()-1].getNeuron(0).getOutputError();
            //set the target value of each output neuron
            for (int a = 0; a < layers[getNumberOfLayers()-1].getNumberOfNeurons(); a++) {
                layers[getNumberOfLayers()-1].getNeuron(a).setTarget(currentPattern.getTarget());
            }
            //forward pass: iterate over the weight layers - there is a weight matrix between each pair of layers
            for (int i = 0; i < getNumberOfLayers()-1; i++) {
                for (int j = 0; j < layers[i+1].getNumberOfNeurons(); j++) {
                    sum = 0f;
                    count = 0;
                    for (int k = 0; k < layers[i].getNumberOfNeurons(); k++) {
                        weight = weights[i].getWeight(k, j);
                        outputValue = layers[i].getNeuron(count).getOutput();
                        multiply = layers[i].getNeuron(count).getOutput() * weights[i].getWeight(k, j);
                        //accumulate the weighted values
                        sum = sum + multiply;
                        count++;
                        //check that all weighted neuron outputs have been summed
                        if (count == layers[i].getNumberOfNeurons()) {
                            //pass the result to the neuron
                            layers[i+1].getNeuron(j).setNeuronInput(sum);
                            //activate the neuron
                            layers[i+1].getNeuron(j).neuronActivation();
                            //calculate the output error of the neuron for the given input
                            layers[i+1].getNeuron(j).calculateOutputError();
                            //check that the output layer has been reached and all neurons have been summed
                            if (i == getNumberOfLayers()-2 && count == layers[i].getNumberOfNeurons()) {
                                outputError = layers[i+1].getNeuron(j).getOutputError();
                                netError = layers[i+1].getNeuron(j).getNetError();
                            }
                        }
                    }
                }
            }
            //backward pass: adjust every weight, working back from the output layer
            for (int a = getNumberOfLayers()-1; a > 0; a--) {
                for (int b = 0; b < layers[a-1].getNumberOfNeurons(); b++) {
                    for (int c = 0; c < layers[a].getNumberOfNeurons(); c++) {
                        output1 = layers[a-1].getNeuron(b).getOutput();
                        output2 = layers[a].getNeuron(c).getOutput();
                        //weight change = learning rate * output error * input activation * sigmoid derivative
                        wcv = learningRate * outputError * output1 * output2 * (1 - output2);
                        weights[a-1].changeWeight(wcv, b, c);
                    }
                }
            }
            learningCycle++;
            //move to the next pattern in the set, wrapping around at the end
            if (setPosition < setSize-1) {
                setPosition++;
            } else {
                setPosition = 0;
            }
        } while (netError > acceptableError && learningCycle < 1000000000);
        return currentError;
    }
At the moment the net doesn't seem to learn to an acceptable degree of accuracy, so I was looking to use threads to monitor its error value change while I left it running, just to ensure that it is working as intended (which it seems to be, based on the NetBeans debugger output). For the moment, all I'm aiming for is an output of the netError value of the NN at a particular time - would this be possible given my current implementation?
Thanks for the help,
Nick
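
A minimal sketch of one way to get that periodic netError printout, assuming learn() is run on its own thread and the latest error is exposed through a volatile field; the class and field names below are placeholders for illustration, not the poster's actual code:

    public class NeuralNetworkMonitorSketch {

        //hypothetical stand-in for the NeuralNetwork class described above
        static class NeuralNetwork {
            //volatile so the monitor thread always sees the latest value without locking
            private volatile double currentNetError = Double.MAX_VALUE;

            double getCurrentNetError() { return currentNetError; }

            float learn(/* PatternSet p */) {
                //inside the do/while loop, after netError is computed, publish it:
                //currentNetError = netError;
                return 0f;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            NeuralNetwork net = new NeuralNetwork();

            Thread learner = new Thread(() -> net.learn(/* patternSet */), "learner");
            learner.start();

            //print the latest error once per second while learning runs
            Thread monitor = new Thread(() -> {
                try {
                    while (true) {
                        System.out.println("netError = " + net.getCurrentNetError());
                        Thread.sleep(1000);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); //stop when interrupted
                }
            }, "monitor");
            monitor.setDaemon(true); //the monitor dies with the rest of the program
            monitor.start();

            learner.join();
        }
    }

Because the monitor only reads a single volatile double, it never blocks the learning loop.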

For a huge NN and a really multi-core CPU (reporting to the OS as multiple CPUs) one may benefit from having:
- an example pump
- separate threads for the calculation of forward and backward propagation, with in/out queues.
The example pump pumps one forward example to each forward-processing thread. It waits for them to complete. Then it reads their outputs and finds the errors to backpropagate. It pumps those errors to the back-propagation threads. They find the weight corrections but do not update the weight matrix; they only push the corrections to temporary output arrays. The example pump takes those corrections, combines them and updates the weights.
Then redo from the start.
The rule of thumb for high performance is: avoid locks. If you must access data which are changing, make a copy of them in a bulk operation, prepare the result in bulk, and read/write in bulk operations.
In this example the whole bunch of weight matrices and neuron states are that kind of data. Each thread should use a separate copy and the teaching pump should combine them together. This means splitting the data into two blocks - non-changing data common to all threads (the geometry of the NN and the weights) and changing data separate for each thread (weight corrections, neuron inputs/outputs).
Avoid "new", "clone", etc. in favour of System.arraycopy on existing data.
Regards,
Tomasz Sztejka.
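
A rough sketch of that example-pump pattern: each worker computes a private weight-correction array for one example, and only the pump thread touches the shared weights when it combines the partial results. All names are made up for illustration, and the per-example maths is a placeholder:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ExamplePumpSketch {
        public static void main(String[] args) throws Exception {
            final int inputs = 4, outputs = 2;
            final double[][] weights = new double[inputs][outputs]; //shared; workers only read it

            ExecutorService workers = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());

            double[][] batch = { {1, 0, 1, 0}, {0, 1, 1, 1}, {1, 1, 0, 0} }; //toy examples

            //pump one example to each worker; each returns its own private correction array
            List<Future<double[][]>> results = new ArrayList<>();
            for (double[] example : batch) {
                results.add(workers.submit(() -> {
                    double[][] correction = new double[inputs][outputs]; //private to this worker
                    for (int i = 0; i < inputs; i++)
                        for (int j = 0; j < outputs; j++)
                            correction[i][j] = 0.01 * example[i]; //placeholder for the real backprop maths
                    return correction;
                }));
            }

            //only the pump thread updates the shared weights, in bulk, so no locks are needed
            for (Future<double[][]> f : results) {
                double[][] correction = f.get(); //waits for the worker to finish
                for (int i = 0; i < inputs; i++)
                    for (int j = 0; j < outputs; j++)
                        weights[i][j] += correction[i][j];
            }

            workers.shutdown();
            System.out.println("first weight after the batch: " + weights[0][0]);
        }
    }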

Similar Messages

  • Nonlinear system identification using neural network (black box model)

     Hello, my thesis work is based on "surface EMG - angular acceleration modeling using different system identification techniques"... can anyone help me in doing nonlinear system identification using a neural network?

    Well, look at that.  I actually had this problem before--and SOLVED it before!  [facepalm]  I'd forgotten all about it....
    https://bbs.archlinux.org/viewtopic.php?id=140151
    I just added "vmalloc=256" to my linux line, and X started right up!
    [edit] Well, mythtv had the solution, as well:  http://www.mythtv.org/wiki/Common_Probl … _too_small
    Last edited by wilberfan (2012-11-05 19:38:06)

  • Trouble Setting Neural Network Parameter

    I am trying to create a neural network mining model using the DMX code below:
    ALTER MINING STRUCTURE [Application]
    ADD MINING MODEL [Neural Net]
    (
        Person_ID,
        Applied_Flag PREDICT,
        [system_entry_method_level_1],
        [system_entry_method_level_2],
        [system_entry_time_period]
    ) USING MICROSOFT_NEURAL_NETWORK (MAXIMUM_INPUT_ATTRIBUTES = 300, MAXIMUM_STATES = 300)
    WITH DRILLTHROUGH
    but it is giving me this error:
    Error (Data mining): The 'MAXIMUM_INPUT_ATTRIBUTES' data mining parameter is not valid for the 'Neural Net' model.
    I found this thread:
    https://social.msdn.microsoft.com/forums/sqlserver/en-US/9f0cdecd-2e23-48da-aeb3-6ea2cd32ae2b/help-with-setting-algorithm-paramteres which said that the problem was that I was using Standard edition instead of Enterprise edition.
    This was indeed the case, but we thankfully had an Enterprise license available, so I did an "Edition Upgrade" (described here: https://msdn.microsoft.com/en-us/library/cc707783.aspx) from the SQL Server install DVD, but the statement continues to give this error. The instance of SQL Server installed on that machine indicates that the edition was upgraded (@@version is "Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Feb 20 2014 20:04:26 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)"), and when I did the upgrade it showed that Analysis Services was an installed feature, so I assumed it was upgrading that as well. I am not sure how to determine whether Analysis Services was upgraded, but the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSAS12.MYINSTANCE\MSSQLServer\CurrentVersion\CurrentVersion" is "12.0.2000.8" (hopefully this is helpful to someone in determining if my AS version is Enterprise).
    Can anyone give me some hints on how to successfully make a neural net model with these parameters?
    Thanks

    Never mind, it turned out to be a simple solution. I just needed to reboot the server after the edition upgrade (after which I discovered that I needed to remove the "WITH DRILLTHROUGH" clause, because neural network models don't support it).

  • I want to use time machine on a network drive

    I know in the past Apple held us hostage to buy their Time Capsule. With Lion now launched, is there a workaround to allow families to buy their own hard drive and use it for multiple Macs with Time Machine?
    I'm fairly new to Mac, convinced my wife to switch, and now here we are held captive to buy a TC when we already have a fantastic 2TB drive we purchased for a fraction of the price of a TC. I don't care about wifi, I just want to save our Mini's Time Machine backups to our MBP's networked drive (or vice versa).
    I currently have Lion on the MBP and Leopard on the Mini. After installing and hating Lion on my MBP, I have not "upgraded" the Mini. (Although I am currently following the "upgrade back to Leopard" threads.)
    Thanks for any help or ideas.

    In the past Apple was not holding you hostage by requiring you to use a Time Capsule for network Time Machine backups. Search on Google for "Time Machine NAS OS X" (or something like that) and you will find a number of drives that were working with SL.
    Now, they did change the file protocol in Lion, so some of the NAS servers won't work with Lion, at the moment. So long as it is a decent company that makes the NAS they should be releasing a firmware update to get their NAS working with Lion.
    One of the companies that put out a press release was Western Digital. They said they would be working on an updated firmware so that you could use it as a Time Machine drive.
    Just look around the interwebs and I am sure you will find several that will work.

  • How to set the number of hidden layers for neural network?

     I am using "Multiclass Neural Network" to build a model. I can configure the number of hidden nodes, iterations, etc., but I couldn't find anything to configure the number of hidden layers. How do I configure the number of hidden layers in Azure ML?

    Here is the article describing it - https://msdn.microsoft.com/library/azure/e8b401fb-230a-4b21-bd11-d1fda0d57c1f?f=255&MSPPError=-2147217396
    Basically, you will have to use a custom definition script in Net# (http://azure.microsoft.com/en-us/documentation/articles/machine-learning-azure-ml-netsharp-reference-guide/) to create the hidden layers and the number of nodes per hidden layer.

  • Can you get an iphone 5 at target that uses AT&T or Verizon network and use the Straight Talk plan from Walmart?

    Can you get an iPhone 5 at Target that uses the AT&T or Verizon network and use the Straight Talk plan from Walmart? I don't have AT&T or Verizon service and I don't want to get them, but I was just wondering if you can get the iPhone 5 that is at Target and use it with the Straight Talk plan. I will be getting the Straight Talk plan, but I would like to do some research on the iPhone 5 first for Straight Talk. At Walmart the phone is $544.99, and I think it will be a lot cheaper to buy it at Target.
    Thank You!

    When I use Find File http://www.macupdate.com/app/mac/30073/find-file (which does tend to find files that Finder can't), it's not coming up with any other iTunes library files that have been modified in the past week, which I know it would have been - unfortunately, I don't have a very recent backup of the hard drive. It would be a few months old, so it wouldn't have the complete library on it... any ideas? I'm wondering if restarting the computer might help, but I have been afraid to do so in case it would make it harder to recover anything... I was looking at this thread https://discussions.apple.com/thread/4211589?start=0&tstart=0 in the hopes that it might have a helpful suggestion, but it's definitely a different scenario.

  • Open source code of a Neural Network Model of the Cerebellum

    I would like to share with the community the code of a Neural Network Model of the Cerebellum (CNN). I have been using the CNN for studying the cerebellum and for adaptive robot control. The CNN was developed using Object Oriented Programming (OOP) and a customized Address Event Representation (AER) format. Using OOP and AER allows the construction and evaluation of CNNs with more than 80 k neurons and more than 400 k synaptic connections in real time. The code includes the tools for creating the network, connecting synapses, and creating the AER format, plus a demo for controlling a Virtual Model of a FAN.
    The link to the Cerebellar Network: https://bitbucket.org/rdpinzonm/the-bicnn-model
    Some details of the architecture of the cerebellar model:
    In comparison with a traditional ANN or RNN, the CNN has a very peculiar architecture with at least three layers (see below, Fig. 1). Inputs from the external world, such as the position of the arms, legs, or sensors from a robot, are carried to the cerebellum via mossy fibers (mf). The mfs are then processed in the input layer, which includes Golgi (Go) and Granule cells (Gr). The ratio of Gr to mf is around 1000:1, whereas Go to Gr is 15000:1. Because of these numbers it has been proposed that the input layer of the cerebellum transforms the input mfs into a sparse representation, easing the work of the other layers. The second layer, the molecular layer, which could be regarded as a hidden layer, includes Ba/St, Basket and Stellate cells. Their numbers are similar to Go, and their role is still a matter of debate. The last layer, the output layer, includes Purkinje cells (Pk). There are around 150,000 Gr per Pk. This is a remarkable feature because the Pk is the only output of the cerebellar cortex. The output of the cerebellar cortex will eventually reach the motor centers to correct movements. The CNN includes a plausible learning rule of the cerebellum at the synapses between Gr and Pk. It works as a supervised anti-Hebbian rule, or an anti-correlation rule, in the following way: the teaching signal carrying the information about erroneous motions of the leg, arm, robot, etc., is conveyed by the climbing fiber (cf) to a single Pk. Then the synaptic weights of Gr-Pk are decreased if there is both cf and Gr activity, whereas if there is no cf (i.e., no error) the weights are increased. What this rule means is that those Gr producing errors have their weights decreased, while those decreasing the error are promoted by increasing their weights.
    Fig. 1. Neural Network Model of the Cerebellum. mf, Mossy fibers (inputs); Go, Golgi Cells; Gr, Granule cells; Ba/St, Basket and Stellate cells; Pk, Purkinje Cell (Sole output of the cerebellar cortex); cf, climbing fiber (teaching signal); pf, parallel fibers (synapses of pf-Pk are the only adjustable weights in this model, and main loci in the cerebellum); and IO, inferior olivary nucleus.
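    As a toy Java illustration of the pf-Pk rule described above (not the LabVIEW code; the step sizes and array names are arbitrary): when the climbing fiber reports an error while a granule cell is active, that weight is depressed, and active granule cells with no error signal are potentiated.

        public class PfPkRuleSketch {
            public static void main(String[] args) {
                double[] pfWeights = {0.5, 0.5, 0.5};     //Gr-Pk synaptic weights
                boolean[] grActive = {true, false, true}; //which granule cells fired
                boolean cfActive = true;                  //climbing fiber = teaching/error signal
                final double LTD = 0.01, LTP = 0.001;     //made-up step sizes

                for (int i = 0; i < pfWeights.length; i++) {
                    if (grActive[i]) {
                        pfWeights[i] += cfActive ? -LTD : LTP; //depress on error, otherwise potentiate
                    }
                }
                System.out.println(java.util.Arrays.toString(pfWeights));
            }
        }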
    Cheers,
    As you can see, the CNN has a very interesting and simple architecture with huge potential for adaptive controllers. Do not hesitate to use the model, explore its code, and post any thoughts, questions, comments, or issues. The LabVIEW project includes a demo for constructing a CNN and employing it in classical feedback control of a DC FAN. Figs. 2-3 are some pictures of the application:
    Fig 2. 3D construction of the CNN in LabVIEW representing a cube of the cerebellar cortex with edge length 100 um. Red mf, cyan Gr, green Go, yellow Ba/St, purple Pk.
    Fig 3. Screen capture of the demo application in LabVIEW for the CNN used for controlling a Virtual Model of a DC FAN.
    Thanks,

    Hi gerh. Nice observation! Indeed there are many good software packages out there that are optimized for constructing neural network models. However, none of them has the flexibility and the capability of integration with hardware that LabVIEW provides. You see, the CNN is being developed to be easily incorporated into engineering applications.
    I haven't tried CV, but I think it could be possible to use the CNN with a 1D representation of the image. 

  • Settings of neural network

    Hi all,
    we are developing an algae reactor which is controlled by a computer with LabVIEW and a neural network. I have found the *.vi's and tried to get this thing running. We have the following inputs and outputs:
    Inputs
    pH-Value
    concentration
    efficiency
    Outputs
    Light on/off
    CO2 on/off
    The concentration is measured with a laser/photodiode and the efficiency is measured with two CO2 sensors (what goes in and what comes out). The data is captured by an NI USB-6008.
    Now my question is: How many hidden layers should I use?
    I have three possibilities:
    2
    4
    less than 6
    Kind regards
    Simon, Zurich University of Applied Sciences

    Hi Simon,
    I guess nobody on the National Instruments support staff can help you with exactly how to implement your neural network. But we can support you with all data acquisition and LabVIEW programming issues.
    Maybe this helps you a bit:
    Implementing Neural Networks with LabVIEW - An Introduction
    Best Regards,
    Andreas S
    Systems Engineer

  • Want Help on Neural Network

    I have developed some code for the implementation of a two-layer neural network structure in LabVIEW. The network is supposed to read the training sets of data from a file and train itself, but it is not working, maybe because of some error. The network can simulate successfully but is unable to train itself properly. I require this network for the implementation of a very novel project.
    I have marked the whole program with appropriate descriptive tags (see trainig.vi). If someone can try it out and find the error, it will be of great help to me. I will then be able to post the correct network for the benefit of others.
    Attachments:
    data1.txt ‏6 KB
    our net.zip ‏75 KB

    I have two suggestions for improving your code and increasing your possibility of troubleshooting it accurately.
    The first suggestion is not to use sequence structures (flat or stacked). If you need to make one part of your code happen after another part, consider using a state machine architecture as described here:
    http://zone.ni.com/devzone/cda/tut/p/id/3024
    Additionally, instead of using variables (local or global) transport your data using wires. This way you can be sure to conform to LabVIEW's data flow.
    Both of these things will make your code easier to read and debug.
    Best of luck!

  • Neural Network Issue

    Please help
    I am trying to run a Neural Network for my company. I have a data set that I already used to train a Logistic Regression function using SAS EG, but I wanted to see if I could predict better using a Neural Network in SQL SSAS, given that my outcome (equal to 1) is a rare event.
    Using the same data set, I created a data mining structure in SQL SSAS similar to the one used to train my logistic regression model in SAS EG. In SQL SSAS, I set a Holdout Seed, so that if I left off at the end of the day I could work with the exact same model the next day.
    However, when I ran the model 'the next day' I got different results: my score was different, my classification matrix was different, etc. And not just a little different, very different.
    Based on further investigation, I found I had included some variables that had the potential to cause separation (as determined by the SAS Logistic procedure). If I removed these variables, I could recreate my model 'the next day'. However, if I did not have SAS EG, I would not have known which variables were problematic without going through quite a bit of work and testing. In SQL SSAS, there is no warning in the log to tell me which variables were causing my issue.
    So my question:
    In SQL SSAS
    Is there a way to train a Neural Network Model and have it identify any variables that are causing a potential problem in the model?
    Or is there a way to extend the training duration to make sure I achieve similar results each time I run the model?
    Any help would be greatly appreciated
    ~S

    Hi TJ,
    A lot of us are still looking to Azure for answers on this one. The problem is ongoing for many. While workarounds are available depending on context, it's nothing to do with the configuration of your servers, but rather with an unresolved problem at the Azure end.
    Alexander

  • Neural network: is there any toolkit?

    Is there any toolkit for using neural networks with LabVIEW? (I am not an expert on neural networks; I have just been told today to try to solve a problem using neural networks, and I don't even know where to start from... well... I am starting from LabVIEW!)
    Solved!
    Go to Solution.

    If you want to just use it and keep it simple, use this one: https://decibel.ni.com/content/docs/DOC-41891
    Best regards, Piotr
    Certified TestStand Architect
    Certified LabVIEW Architect

  • Neural Networks

    Hello All,
    I did a search in the forums under neural networks. There didn't seem to be much work done with LabVIEW and neural networks. I did find a post where someone had developed code for a feed-forward back propagation neural net, which is what I'm hoping to use, but it was developed in LabVIEW 5.1. I'm using 8.6, and when I tried to open the VIs LabVIEW said they were too old to convert to 8.6. Has anyone done any current work with neural networks and LabVIEW?
    I'm very familiar with neural networks in MATLAB. I've also used a MATLAB script to run some more complex signal processing functions that LabVIEW doesn't support. I'm wondering if I could integrate MATLAB and LabVIEW while using a neural network. I could do all my training offline in MATLAB and then pass my real-time data into a MATLAB script from LabVIEW. Does anyone know if this is possible? How would I load an already trained neural net from MATLAB using the MATLAB script in LabVIEW? My data acquisition is in LabVIEW, so I'd like to stay in LabVIEW if possible. Does anyone have any ideas?
    Thanks, Alan Smith

    The first 3 links in this page may be of assistance, from the Developer Zone:
    http://zone.ni.com/devzone/fn/p/sb/navsRel?q=neural
    -AK2DM
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • How do I tether my iPhone to my PS3 by using t-mobile or three network?.

    Hi there my question is how do I tether my iPhone to my PS3 using three or t-mobile network?.

    Not really sure what you're talking about, but it has nothing to do with the topic of this thread.  Did you read the other posts?
    The SSID identifying the phone's WiFi personal hotspot isn't hidden, and shows up in other devices without needing to be entered.  If you're talking about the WiFi password that has to be entered in the connecting devices, that can be changed in the phone settings:
    Settings > General > Network > Personal Hotspot > WiFi Password >

  • Programming an neural network (or circuit, or whatever)

    Hi, I want to know which of two options is the standard when coding neural networks, or (for non-AI people) what seems like a better method for programming any kind of feed-forward network, like a virtual circuit.
    As I see it, I have two ways of making a network iterate through and finding the end result:
    1) I set the inputs, and then run each layer consecutively: I run the input layer, which sets the inputs of the hidden layer, I run the hidden layer(s), which sets the inputs of the output layer, I run the output layer, and finally query the network to see what the final values of the output layer are.
    2) I ask the output layer what its values are. By asking any neuron what its value is, it checks to see what its inputs are, and thus asks the neurons below it what their values are. This trickles down to the input layer, at which point the input neurons give their answer and this is fed back up.
    The first method seems more intuitive, and seems more realistic: each neuron is fed a value and this iterates up towards the top. The second method, however, is one that we used when designing virtual circuits in a college programming class, and seems perhaps to be a little more elegant.
    Obviously, I think that both give the same final result(?), but it would be good to know if there is some standard, or reason to pick one over the other.
    Thanks for any recommendations!

    Anyway, I'd rule against the 2nd method. It would be hard to implement that type of "pull" structure with an ANN because what happens at the input node?
    Well, basically I was thinking about doing something like the code below, although I hadn't thought about the details (such as whether I ought to use the 'isInput' flag):
    public float getOutput() {
        float totalInput = 0f;
        if (isInput) {
            totalInput = somePreSetInput;
        } else {
            //sum the outputs of all the neurons connected to this one
            //(assuming a collection such as connectedNeurons holds them)
            for (Neuron connectedNeuron : connectedNeurons) {
                totalInput += connectedNeuron.getOutput();
            }
        }
        float output = activationFunction(totalInput);
        return output;
    }
    It seems that this shouldn't be too hard to set up, but I don't know whether it might lead to errors.
    What type of trigger function are you using?
    Assuming a trigger function is an activation function, I currently have it so it can be set to either sigmoid, tanh, linear, or step. The default is sigmoid, shifted to the right so a total input of zero will give an output close to zero (as opposed to 0.5, as a regular sigmoid would).
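    For comparison, here is a minimal sketch of the first ("push") approach, evaluated layer by layer, assuming a fully connected net stored as per-layer weight matrices; the names and the sigmoid choice are assumptions for illustration, not the poster's classes:

        public class FeedForwardPushSketch {

            static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

            //weights[l][j][k] connects neuron k of layer l to neuron j of layer l+1
            static double[] feedForward(double[] input, double[][][] weights) {
                double[] activations = input;
                for (double[][] layer : weights) {
                    double[] next = new double[layer.length];
                    for (int j = 0; j < layer.length; j++) {
                        double sum = 0.0;
                        for (int k = 0; k < activations.length; k++) {
                            sum += layer[j][k] * activations[k];
                        }
                        next[j] = sigmoid(sum);   //activate and push to the next layer
                    }
                    activations = next;
                }
                return activations;               //output layer values
            }

            public static void main(String[] args) {
                double[][][] weights = {
                    { {0.2, 0.8}, {0.5, -0.3} },  //input (2 neurons) -> hidden (2 neurons)
                    { {1.0, -1.0} }               //hidden (2 neurons) -> output (1 neuron)
                };
                System.out.println(feedForward(new double[]{1, 1}, weights)[0]);
            }
        }

    Both approaches should give the same output; the push version just makes the evaluation order explicit and avoids recursing from the output layer.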

  • Using threads in a process of two or more tasks concurrently?

    Dear,
    I need to develop a Java application in which a single process, using threads, can execute two or more tasks concurrently. The goal is to optimize the runtime of a program.
    Then, through the program, display the behavior of a producer and two consumers at runtime!
    Below is the code and problem description.
    Could anyone help me on this issue?
    Sincerely,
    Sérgio Pitta
    The producer-consumer problem
    This is known as the bounded-buffer problem. Two processes share a common buffer of fixed size. One of them, the producer, puts information into the buffer, and the other, the consumer, takes it out.
    The problem arises when the producer wants to put a new item in the buffer, but the buffer is already full. The solution is to put the producer to sleep and wake it up only when the consumer has removed one or more items. Likewise, if the consumer wants to remove an item from the buffer and sees that it is empty, it sleeps until the producer puts something in the buffer and wakes it up.
    To keep track of the number of items in the buffer, we need a variable, "count". If the maximum number of items the buffer may contain is N, the producer code first checks whether the value of "count" is N. If so, the producer sleeps; otherwise, the producer adds an item and increments "count".
    The consumer code is similar: it first checks whether the value of "count" is 0. If so, it goes to sleep; if not, it removes an item and decrements the counter. Each of them also tests whether the other should be woken and, if so, wakes it. The code for both producer and consumer is shown below:
    #define N 100                              /* number of slots in the buffer */
    int count = 0;                             /* number of items in the buffer */

    void producer(void)
    {
        int item;
        while (TRUE) {                         /* repeat forever */
            item = produce_item();             /* generate the next item */
            if (count == N) sleep();           /* if the buffer is full, go to sleep */
            insert_item(item);                 /* put an item in the buffer */
            count = count + 1;                 /* increment the count of items in the buffer */
            if (count == 1) wakeup(consumer);  /* was the buffer empty? */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {                         /* repeat forever */
            if (count == 0) sleep();           /* if the buffer is empty, go to sleep */
            item = remove_item();              /* take an item out of the buffer */
            count = count - 1;                 /* decrement the count of items in the buffer */
            if (count == N - 1) wakeup(producer);  /* was the buffer full? */
            consume_item(item);                /* print the item */
        }
    }
    Sleep and wakeup are shown as calls to library routines. They are not part of the standard C library, but presumably would be available on any system that actually has those system calls. The procedures "insert_item" and "remove_item", which are not shown, handle the bookkeeping of inserting items into and removing items from the buffer.
    Now back to the race condition. It can occur because access to the variable "count" is unconstrained. The following scenario could occur: the buffer is empty and the consumer has just read "count" to check whether its value is 0. At that instant, the scheduler decides to stop running the consumer temporarily and starts running the producer. The producer inserts an item into the buffer, increments "count" and notices that its value is now 1. Reasoning that "count" was just 0, so the consumer must be asleep, the producer calls "wakeup" to wake the consumer.
    Unfortunately, the consumer is not yet logically asleep, so the wakeup signal is lost. The next time the consumer runs, it tests the value of "count" it previously read, finds it to be 0, and goes to sleep. Sooner or later the producer fills the whole buffer and also goes to sleep. Both sleep forever.
    The essence of the problem is that a wakeup sent to a process that is not (yet) sleeping is lost. If it were not lost, everything would work. A quick fix is to modify the rules by adding a "wakeup waiting bit". When a wakeup is sent to a process that is still awake, this bit is set. Later, when the process tries to go to sleep, if the wakeup waiting bit is on, it is cleared, but the process stays awake. The wakeup waiting bit is really a piggy bank that stores wakeup signals.
    Even though the wakeup waiting bit saves the day in this simple example, it is easy to construct cases with three or more processes in which one wakeup waiting bit is insufficient. We could improvise further and add a second wakeup waiting bit, or maybe eight or 32 of them, but in principle the problem is still there.
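    A minimal Java sketch of the same bounded buffer with one producer and two consumers, as in the question; java.util.concurrent.ArrayBlockingQueue makes the "check the count and sleep" step atomic, so the lost-wakeup race described above cannot happen. The class and item choices are made up for the example, and the demo simply runs until interrupted:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class ProducerConsumerSketch {
            public static void main(String[] args) {
                BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(100); //N = 100 slots

                Runnable producer = () -> {
                    try {
                        for (int item = 0; ; item++) {
                            buffer.put(item);         //blocks while the buffer is full
                            System.out.println("produced " + item);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                };

                Runnable consumer = () -> {
                    try {
                        while (true) {
                            int item = buffer.take(); //blocks while the buffer is empty
                            System.out.println(Thread.currentThread().getName() + " consumed " + item);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                };

                new Thread(producer, "producer").start();
                new Thread(consumer, "consumer-1").start();
                new Thread(consumer, "consumer-2").start();
            }
        }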

    user12284350 wrote:
    Hi!
    Thanks for the feedback!
    I need a program to show, through a user interface, the behavior of a producer and two consumers at runtime, using Threads!
    So hire somebody to write one.
    Or, if what you really mean is that you need to write such a program, as part of your course work, then write one.
    You can't just dump your requirements here and expect someone to do your work for you though. If this is your assignment, then you need to do it. If you get stuck, ask a specific question about the part that's giving you trouble. "How do I write a producer/consumer program?" is not a valid question.
