Random generator for a given distribution function

I need to write a random generator, not only for the normal distribution but for any distribution function.
That function can be a black-box distribution function, or it can be a table [ x | f(x) ].
For those not familiar with the term, a distribution function is an f(x) that returns the probability that the random number is less than x: f(-inf) = 0, f(+inf) = 1, and x > y => f(x) >= f(y).

I don't have my stats textbooks with me, but Google is your friend! :-)
This looks like what we covered in stats class:
http://www.mathworks.com/access/helpdesk/help/toolbox/stats/prob_di7.shtml
Inversion
The inversion method works due to a fundamental theorem that relates the uniform distribution to other continuous distributions.
If F is a continuous distribution with inverse F^-1, and U is a uniform random number, then F^-1(U) has distribution F.
So, you can generate a random number from a distribution by applying the inverse function for that distribution to a uniform random number. Unfortunately, this approach is usually not the most efficient.
How do you get a random number from a uniform distribution? Easy: with java.util.Random, or some other Java math package (I like http://hoschek.home.cern.ch/hoschek/colt/index.htm).
The first link details some other methods, as will most stats text books.
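For the table-based case, here is a minimal sketch of the inversion method in Java (class and method names are mine, purely illustrative; it assumes the table is sorted, covers the range of interest, and has non-decreasing CDF values):

    import java.util.Random;

    public class InverseCdfSampler {
        private final double[] xs;   // tabulated x values, ascending
        private final double[] cdf;  // tabulated F(x) values in [0, 1], ascending
        private final Random rng = new Random();

        public InverseCdfSampler(double[] xs, double[] cdf) {
            this.xs = xs;
            this.cdf = cdf;
        }

        // Draw one sample: binary-search the CDF table for the bracket
        // containing u, then linearly interpolate between table points.
        public double sample() {
            double u = rng.nextDouble();
            int lo = 0, hi = cdf.length - 1;
            while (lo + 1 < hi) {
                int mid = (lo + hi) >>> 1;
                if (cdf[mid] <= u) lo = mid; else hi = mid;
            }
            double span = cdf[hi] - cdf[lo];
            double t = (span == 0.0) ? 0.0 : (u - cdf[lo]) / span;
            return xs[lo] + t * (xs[hi] - xs[lo]);
        }
    }

For a black-box but invertible F, you would replace the table lookup with a direct call to F^-1(u), or bracket the root of F(x) = u numerically (e.g. by bisection).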

Similar Messages

  • How to make a random generator with a max and a minimum

    Hello,
    can anybody tell me how to make a random generator for numbers with a maximum and a minimum input?
    Thanks in advance

    Hi suske,
    If you want a random day for a given month, this is how I would do it.
    Hope this helps,
    -D
    Message Edited by Darren on 02-02-2006 01:35 PM
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman
    Attachments:
    Days_per_month.jpg ‏32 KB
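    In plain Java (the answer above is a LabVIEW VI, see the attachment), a bounded uniform integer is a one-liner; a minimal sketch:

        import java.util.Random;

        public class BoundedRandom {
            public static void main(String[] args) {
                Random r = new Random();
                int min = 5, max = 20;
                // nextInt(n) returns 0..n-1, so shift the range up by min
                int value = min + r.nextInt(max - min + 1);
                System.out.println(value); // uniform over [5, 20]
            }
        }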

  • QCPatch 'Random Generator' Broken Seed?

    Hi there all,
    Just putting this out there - perhaps I'm wrong, but I want some other QC developers who are more experienced to compare with.
    Quartz Composer Patch "Random Generator" - one of the inbuilt patches available.
    Its purpose is to generate random numbers within a range and supply the number on its output port.
    To this end, it works fine.
    My problem with this patch's behavior occurs when it is nested in a macro patch, and that patch is then duplicated (purpose obvious).
    *The duplicated macro patch appears to also copy the Random Generator's base seed.*
    For example, I get the same picture from Random Generators nested inside a macro patch that contains processing for a random choice of image to a sprite.
    The only way I could work around this (and this may help others with the same problem) was to publish the input ports all the way up to the root-level patch, and supply new instances of a Random Generator for each instance of the duplicated macro patch.
    Then I received fresh random seeds from the generator and different images were selected for output to my nested sprite.
    Awfully inconvenient given that each macro had 4 levels of nesting using objects like Render in Image, 3D Transformation and Lighting - all patches that require nesting, and that also reset your ports being passed to a parent, meaning you have to 're-map' the Published Inputs with every layer of nesting. Not to mention the fact that the root level of the document is cluttered all to **** with millions of Random Generator patches.
    Anyone get this behavior?
    Please let me know if I'm just screwing my design up, or if this is encountered by you too and we should report it as a bug. (I've only been devving QC for several weeks straight, so I don't want to actually report it as a bug yet.)
    Many thanks in advance,
    Wilks.

    A Random created with the same seed will give the same sequence of random numbers.
    Take the Random creation out of the loop:
        try {
            BufferedWriter out = new BufferedWriter(new FileWriter("c:/test.txt"));
            // Create the Random once, outside the loop.
            Random r = new Random(); // No need for any seed
            for (int i = 0; i < 1000; i++) {
                out.write(String.valueOf(r.nextInt(9)));
            }
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

  • Probability Distribution Functions in Oracle PL/SQL

    Hello,
    I need to calculate some standard probability distribution functions as part of my PL/SQL code. Does Oracle DB offer any packages that enable such calculations to be performed?
    In particular, I am looking for cumulative distribution functions for the Normal distribution and the T-distribution.
    Thanks,
    Kartik

    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions29a.htm#82888
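    If no built-in package fits, the normal CDF is also easy to approximate directly. A sketch in Java (used for consistency with the other examples on this page; porting it to PL/SQL is mechanical), based on the Abramowitz & Stegun 26.2.17 polynomial, accurate to about 7.5e-8:

        public class NormalCdf {
            // Standard normal CDF, Abramowitz & Stegun formula 26.2.17.
            public static double phi(double x) {
                if (x < 0) return 1.0 - phi(-x);
                double t = 1.0 / (1.0 + 0.2316419 * x);
                double poly = t * (0.319381530 + t * (-0.356563782
                            + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
                double pdf = Math.exp(-0.5 * x * x) / Math.sqrt(2 * Math.PI);
                return 1.0 - pdf * poly;
            }

            public static void main(String[] args) {
                System.out.println(phi(1.96)); // ~0.975
            }
        }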

  • How to produce a digital random generator, which counts 0 and 1 randomly

    Hi, I want to generate a digital random generator for my project, which counts 0 and 1 randomly. Can anybody help me with doing this?

    Your question has been phrased in a way that causes confusion about what you actually want to achieve. Are you trying to display the random generator like a bit stream on a graph, i.e.:
    If so, then this VI will do that:
    Every time you press the 'Generate' button it will create a digital array according to the 10-bit random number generated and display it on a graph.
    There is a way of doing this using a 'Digital Waveform Graph'. I personally have never used it, and after about 5 minutes of just looking into it for you I gave up. It is something I should spend the time to look into, as it presents your digital data nicely, showing the 0's and 1's within the graph.
    If I have misunderstood what you want again, I apologise.
    Rgs,
    Lucither
    Message Edited by Lucither on 05-10-2010 06:27 AM
    "Everything should be made as simple as possible but no simpler"

  • Distribution function in work center

    Hi All,
    What is the use of the distribution function and distribution strategy in a work center?
    All responses will be awarded accordingly.
    Regards,
    jejesh
    Edited by: jejesh yal on Dec 29, 2008 11:41 AM

    Hi Jejesh,
    Well, you use a distribution to specify how the system distributes the capacity requirements of an operation that extends over several days; that is, how much capacity requirement is allocated to the affected capacity type in the work center on each of those days.
    A distribution is described by a distribution function and a distribution strategy.
    Distribution function
    You define the distribution function using basic values. The basic values specify after what percentage of operating time a particular percentage of the capacity requirement for the operation is loaded to the affected capacity type. The system-internal default basic value for a distribution function defines that after 0% of operating time, 0% of the capacity requirements are loaded.
    Distribution strategy
    You use the distribution strategy to specify:
    Whether the system is to distribute the capacity requirements between the earliest or the latest start and finish dates of an operation
    Whether the distribution of the capacity requirements is to be discrete or continuous.
    I hope this would help.
    Best Regards,
    Rahul

  • Is there any function module for getting distribution list name

    Hi all,
    Is there any function module for getting the distribution list name when two distribution lists have the same description?
    or
    help me figure out how to fetch the correct distribution list name when there is the same description,
    in order to send mails.
    Tell me ASAP.
    thanks
    sagar.

    http://www.sapbrainsonline.com/REFERENCES/FunctionModules/SAP_function_modules_list.html
    list of FMs

  • Function module to get PPM for given product and/or location

    Hi,
    I want to check whether any PPM exists for a given product and location. Which function module can I use for this?

    There is a BAPI that might help - BAPI_PPMSRVAPS_GETLIST - which has product and location selection criteria.
    Regards
    Laurence

  • VIs for Agilent 33250A (function generator) and SR-844 (lock-in)

    I am new to LabVIEW. I was unable to find VIs for the Agilent 33250A (function generator) and SR-844 (lock-in). Can someone help me find these?

    What did you actually search for? I just went to the Instrument Driver Network, entered '33250A', and found its driver; I entered '844', and also found its driver. I did not try this from the Tools menu in LabVIEW, but it should also have found them.

  • Randomly generate an ID for SQL Create Table command

    Hi,
    I need help to randomly generate an integer which I can use as the primary key ID of a table field. I have no idea how to do this so that it generates a different number every single time.
    Right now it just increments the ID by 1. Maybe there is a way to get the last ID value of the table field and then increment starting from that one? Thanks

        Your analysis is flawed! One of my main reasons for doing this is to reduce the load on the db server. I don't have to go to the db to get an id every time I need one. Every now and again (usually less than one in 2000) I have to repeat an insert operation.
    You already stated that this was the reason, so I used that as part of my assessment. With most, perhaps all, databases, the ability to increment a key is done as part of the insert operation. It requires no additional 'trip' to the database to get the ID. There is so much wrong with your idea, I hardly know where to start. I wouldn't be so firm except you stated that you do this all the time. Even using it once is very questionable. I wouldn't want this idea to proliferate to other people's systems without comment.
    You also said:
        On one project I was having to place usage transaction records into a database, with each one needing an ID. I started using your technique but it was taking almost as much time as the insert of each transaction into the database.
    I agree, and I think dcminter agreed, that there are less expensive practices than max(fld)+1 which do in fact use the vendor-specific capabilities. However, if this was taking as long as your insert, there was probably something else wrong. For example, if the field you are getting the max value for isn't indexed, or isn't the first field in a concatenated index (an index of multiple fields), then some databases (there are exceptions, like Red Brick) will require either a full table scan, an index scan or an awkward index lookup to give back the max value. It shouldn't be that way, but it is, and you have to understand how your database resolves max() in order to make the best decisions.
    I stand by my analysis, and I would suggest that rather than using your method "all the time", you save it for that unusual situation where nothing else will work for you.
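    To illustrate the point about the key being generated as part of the insert: with JDBC you can ask the driver to hand back the database-generated key from the same round trip. A sketch (the connection URL, table and column names here are made up for the example):

        import java.sql.*;

        public class InsertWithGeneratedKey {
            public static void main(String[] args) throws SQLException {
                // Hypothetical URL; substitute your own driver and database.
                try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO usage_txn (payload) VALUES (?)",
                             Statement.RETURN_GENERATED_KEYS)) {
                    ps.setString(1, "some transaction data");
                    ps.executeUpdate();
                    // The auto-incremented key comes back with the insert itself:
                    try (ResultSet keys = ps.getGeneratedKeys()) {
                        if (keys.next()) {
                            System.out.println("new id = " + keys.getLong(1));
                        }
                    }
                }
            }
        }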

  • Enabling the Export function for a distribution list in 'Private folders'

    Dears,
    Could someone please guide me to the appropriate authorization that would allow the 'Export...' function for a distribution list created in a folder under 'Private folders' (transaction SBWP)?
    The option is dimmed, although the user used has the SAP_ALL profile.
    Thanks.
    Reda

    click file/publish settings/flash/actionscript settings and make sure frame 1 is set as the export frame.
    if that's already set, don't use test scene.

  • Generate all the dates for given month

    Hi all,
    How can I generate all the dates for a given month? For example, if I give Feb-2008 then it should display all the dates from 01/02/2008 to 29/02/2008.
    Thanks,
    Sujnan

    This question was expanded (and answered) at
    Monthly Report
    You can also search for "all days in month".
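    If plain Java is an option, java.time makes this a short loop; a minimal sketch:

        import java.time.YearMonth;

        public class DaysOfMonth {
            public static void main(String[] args) {
                YearMonth ym = YearMonth.of(2008, 2);            // Feb-2008
                for (int day = 1; day <= ym.lengthOfMonth(); day++) {
                    System.out.println(ym.atDay(day));           // 2008-02-01 .. 2008-02-29
                }
            }
        }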

  • Randomly Generated Pixels

    Hi!
    I want to create a script that creates random (or near-random) values for every single pixel of a document, similar to the "Add Noise..." filter, but with more control, such as "only b/w", "only grey", "all RGB" and "all RGB with alpha", and maybe even control over the probability distribution. Any idea how this could be tackled? Selecting every single pixel and applying a random color seems like something that would take a script hours...
    Why do I need this?
    I've started creating some filters in Pixel Bender (http://en.wikipedia.org/wiki/Adobe_Pixel_Bender). Since Pixel Bender doesn't really have a random generator (and workarounds are limited), I'm planning on passing in the random numbers through random pixel values. I'm well aware that this can only be used for filters in which Pixel Bender creates images from scratch, but that's the plan.
    Thanks!

    Understanding the details of the Add Noise filter is probably beyond the scope of a short post. Here is an approach to start learning what it does:
    - Take a 50% gray level and make it a Smart Object.
    - Open up the histogram panel (it should show a spike right at 50%).
    - Apply the noise filter to the Smart Object in monochrome, building up from small percentages in small increments.
    - You will notice that, for the option above, you end up with a uniform probability function over the entire tonality spread at 50% applied, for uniform distribution.
    There are a variety of ways to manipulate this function, through various blends.
    Please note a couple of things:
    1) I am using CS5, and though it is not documented anywhere that I have seen, the Noise Filter works differently than in CS4. In CS4, if you run the same noise filter twice on two identical objects, my experience is that you get the identical bit-for-bit result (a random pattern, yet not independent of the next run of the filter). Manipulating Probability Density Functions (PDFs) per my previous post requires that each run of the Noise Filter starts with a different "seed" so that the result is independent of the previous run. CS5 does this: successive runs will create an independent noise result.
    2) PS does not equally randomize R, G, and B. There are ways to get around this, but I wanted to give you a heads up.
    3) There are other ways to generate quick random patterns outside of PS and bring them in (using scripts). You would need to understand the format of the Photoshop Raw file. This type of file contains bytes with just the image pixel data. These files are easy to create and then load into PS. From a script (or, even faster, a called Python script) create this file, then load it into PS as a Photoshop Raw format file and use it as an overlay. There is no question that this is faster than trying to manipulate individual PS pixels through a script.
    4) Please note that under Color Settings there is an option called Dither. If this is set, there are times when PS adds noise into the image (I leave mine turned off). It is used in a number of places in PS other than what the documentation implies (more than just when moving between 8-bit color spaces).
    Good luck if you are going after making a plug-in; I have never invested in that learning curve.
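    Riffing on point 3 above: a minimal sketch in Java (rather than a Photoshop script; the file dimensions and the Photoshop Raw open settings are assumptions you would match on import) that writes a headerless file of random interleaved RGB bytes:

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Random;

        public class RandomRaw {
            public static void main(String[] args) throws IOException {
                int width = 512, height = 512;
                byte[] pixels = new byte[width * height * 3]; // interleaved RGB, 8 bits/channel
                new Random().nextBytes(pixels);               // uniform noise in every channel
                try (FileOutputStream out = new FileOutputStream("noise.raw")) {
                    out.write(pixels);
                }
                // Open in PS as Photoshop Raw: 512 x 512, 3 channels,
                // 8 bits/channel, interleaved, header size 0 (assumed settings).
            }
        }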

  • Cumulative Distribution Functions

    Like a fool, I sold my first year statistics textbook and have regretted it ever since. Help a guy out?
    It goes like this: I have two random variables, X and Y, for which I know the cumulative distribution functions. I have a third random variable Z, which is just X + Y. Is there a formula calculating the CDF of Z from the CDFs of X and Y?

        Your textbook wouldn't have helped you in this case because there is no such formula... for the general case.
    Sorry to contradict you, but there is a 'formula' - it is just not simple. The textbook should have helped.
    The Distribution Function (DF) of the sum of two independent random variables is the convolution of the DFs of the two variables. So if one differentiates the two CDFs to get the DFs, convolves them to get the DF of the sum, one can then integrate the convolution result to get the CDF of the sum (wow).
    This can be done numerically using several approaches. I have used the Fast Fourier Transform to perform convolution, differentiation and integration, but not in this combination. This would be one approach I would consider, but it may not be the best.
    When I first saw this posted I looked at how one might create the CDF without differentiating the individual CDFs, because this step is 'noisy', but I could not find a simple approach. I'm sure there is one, but I have not found it.
    Using the central limit theorem to obtain an approximation to the PDF is feasible if X and Y have uni-modal distributions and the variances are similar in size. In the good old days of my using an IBM 360, I used IBM's normal-distribution random number generator, which just added together 6 samples of a uniform random number generator.
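    A numerical version of the convolution recipe above, sketched in Java (it assumes both CDFs are tabulated on the same uniform grid with spacing dx; the names are illustrative):

        public class SumCdf {
            // Given CDFs of independent X and Y sampled on the same uniform grid,
            // return the CDF of Z = X + Y on a grid twice as long.
            static double[] cdfOfSum(double[] cdfX, double[] cdfY, double dx) {
                int n = cdfX.length;
                double[] pdfX = new double[n], pdfY = new double[n];
                for (int i = 1; i < n; i++) {         // differentiate: PDF ~ dCDF/dx
                    pdfX[i] = (cdfX[i] - cdfX[i - 1]) / dx;
                    pdfY[i] = (cdfY[i] - cdfY[i - 1]) / dx;
                }
                double[] pdfZ = new double[2 * n];    // convolve the two PDFs
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        pdfZ[i + j] += pdfX[i] * pdfY[j] * dx;
                double[] cdfZ = new double[2 * n];    // integrate back to a CDF
                for (int k = 1; k < 2 * n; k++)
                    cdfZ[k] = cdfZ[k - 1] + pdfZ[k] * dx;
                return cdfZ;
            }
        }

    As the poster notes, the differentiation step amplifies noise; an FFT-based convolution does the same job in O(n log n) instead of O(n^2).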

  • A replacement for the Quicksort function in the C++ library

    Hi everyone,
    I'd like to introduce and share a new Triple State Quicksort algorithm, the result of my research into sorting algorithms over the last few years. The new algorithm reduces the number of swaps to about two thirds (2/3) of classical Quicksort. A multitude of other improvements are implemented. Test results against the std::sort() function show an average 43% improvement in speed across various input array types. It does this by trading space for performance, at the price of n/2 temporary extra spaces.
    The extra space is allocated automatically and efficiently in a way that reduces memory fragmentation and optimizes performance.
    Triple State Algorithm
    The classical way of doing Quicksort is as follows:
    - Choose one element p, called the pivot. Try to make it close to the median.
    - Divide the array into two parts. A lower (left) part that is all less than p. And a higher (right) part that is all greater than p.
    - Recursively sort the left and right parts using the same method above.
    - Stop recursion when a part reaches a size that can be trivially sorted.
    The difference between the various implementations is in how they choose the pivot p, and where elements equal to the pivot are placed. There are several schemes, as follows:
    [ <=p | ? | >=p ]
    [ <p | >=p | ? ]
    [ <=p | =p | ? | >p ]
    [ =p | <p | ? | >p ]  Then swap = part to middle at the end
    [ =p | <p | ? | >p | =p ]  Then swap = parts to middle at the end
    Where the goal (or the ideal goal) of the above schemes (at the end of a recursive stage) is to reach the following:
    [ <p | =p | >p ]
    The above would allow exclusion of the =p part from further recursive calls, thus reducing the number of comparisons. However, there is a difficulty in reaching the above scheme with minimal swaps. No previous implementation of Quicksort could immediately put =p elements in the middle using minimal swaps: first because p might not be in the perfect middle (i.e. the median), and second because we don't know how many elements are in the =p part until we finish the current recursive stage.
    The new Triple State method first enters a monitoring state 1 while comparing and swapping. Elements equal to p are immediately copied to the middle if they are not already there, following this scheme:
    [ <p | ? | =p | ? | >p ]
    Then, when either the left (<p) part or the right (>p) part meets the middle (=p) part, the algorithm jumps to one of two specialized states. One state handles the case of a relatively small =p part, and the other handles the case of a relatively large =p part. This method adapts to the nature of the input array better than ordinary classical Quicksort.
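    For comparison, here is the textbook one-pass three-way partition (Dijkstra's "Dutch national flag" scheme) that produces the [ <p | =p | >p ] layout directly - shown as a Java sketch for consistency with the other examples on this page, and emphatically not the Triple State code itself:

        // Rearranges ar[lo..hi] into [ <p | =p | >p ] in one pass.
        static void partition3(int[] ar, int lo, int hi, int p) {
            int lt = lo, i = lo, gt = hi;
            while (i <= gt) {
                if (ar[i] < p) {
                    int t = ar[lt]; ar[lt++] = ar[i]; ar[i++] = t;
                } else if (ar[i] > p) {
                    int t = ar[gt]; ar[gt--] = ar[i]; ar[i] = t;
                } else {
                    i++;
                }
            }
            // On return: ar[lo..lt-1] < p, ar[lt..gt] == p, ar[gt+1..hi] > p
        }

    A known cost of this scheme is the extra swap work compared to two-way partitioning, which is exactly the kind of overhead the Triple State approach aims to reduce.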
    Further reducing number of swaps
    A typical quicksort loop scans from left, then scans from right. Then swaps. As follows:
    while (l <= r)
    {
        while (ar[l] < p)    // scan from the left
            l++;
        while (ar[r] > p)    // scan from the right
            r--;
        if (l < r)
        {
            Swap(ar[l], ar[r]);
            l++; r--;
        }
        else if (l == r)
        {
            l++; r--; break;
        }
    }
    The Swap macro above does three copy operations:
    temp = ar[l]; ar[l] = ar[r]; ar[r] = temp;
    There exists another method that almost eliminates the need for that third temporary-variable copy operation. By copying only the first ar[r] that is less than or equal to p into the temp variable, we create an empty space in the array. Then we proceed scanning from the left to find the first ar[l] that is greater than or equal to p, and copy ar[r] = ar[l]. Now the empty space is at ar[l]. We scan from the right again, then copy ar[l] = ar[r], and continue as such. As long as the temp variable hasn't been copied back to the array, the empty space remains there, juggling left and right. The following code snippet explains.
    // Pre-scan from the right
    while (ar[r] > p)
        r--;
    temp = ar[r];                      // leaves an empty slot at ar[r]
    // Main loop
    while (l < r)
    {
        while (l < r && ar[l] < p)     // scan from the left
            l++;
        if (l < r) ar[r--] = ar[l];    // move the empty slot to ar[l]
        while (l < r && ar[r] > p)     // scan from the right
            r--;
        if (l < r) ar[l++] = ar[r];    // move the empty slot back to ar[r]
    }
    // After the loop finishes, copy temp back into the empty slot
    ar[r] = temp; l++;
    if (temp == p) r--;
    (For simplicity, the code above does not handle equal values efficiently. Refer to the complete code for the elaborate version).
    This method is not new; a similar method has been used before (read: http://www.azillionmonkeys.com/qed/sort.html).
    However, it has a negative side effect in some common cases, like nearly sorted or nearly reversed arrays, causing undesirable shifting that renders it less efficient in those cases. Still, when used with the Triple State algorithm combined with further common-case handling, it eventually proves more efficient than the classical swapping approach.
    Run time tests
    Here are some test results, done on an i5 2.9 GHz with 6 GB of RAM, sorting a random array of integers. Each test is repeated 5000 times. Times are shown in milliseconds.
    size     std::sort()   Triple State Quicksort
    5000     2039          1609
    6000     2412          1900
    7000     2733          2220
    8000     2993          2484
    9000     3361          2778
    10000    3591          3093
    It gets even faster when used with other input types or when the size of each element is large. The following test was done on random large arrays of up to 1,000,000 elements, where each element is 56 bytes. The test is repeated 25 times.
    size      std::sort()   Triple State Quicksort
    100000    1607          424
    200000    3165          845
    300000    4534          1287
    400000    6461          1700
    500000    7668          2123
    600000    9794          2548
    700000    10745         3001
    800000    12343         3425
    900000    13790         3865
    1000000   15663         4348
    Further extensive tests have been done following Jon Bentley's framework of tests, for the following input array types:
    sawtooth: ar[i] = i % arange
    random: ar[i] = GenRand() % arange + 1
    stagger: ar[i] = (i* arange + i) % n
    plateau: ar[i] = min(i, arange)
    shuffle: ar[i] = rand()%arange? (j+=2): (k+=2)
    I also add the following two input types, just to add a little torture:
    Hill: ar[i] = min(i<(size>>1)? i:size-i,arange);
    Organ Pipes: (see full code for details)
    Each case above is sorted, then reordered in 6 different ways, then sorted again after each reorder, as follows:
    sorted, reversed, front half reversed, back half reversed, dithered, fort.
    Note: GenRand() above is a certified random number generator based on the Park-Miller method. This is to avoid any non-uniform behavior in C++'s rand().
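    For readers unfamiliar with it, the Park-Miller "minimal standard" generator is the recurrence x_{k+1} = 16807 * x_k mod (2^31 - 1); a minimal Java sketch (not the author's GenRand()):

        public class ParkMiller {
            private long state;

            public ParkMiller(long seed) {
                this.state = seed;   // seed must be in 1 .. 2^31 - 2
            }

            public int next() {
                state = (16807 * state) % 2147483647L;   // modulus is the prime 2^31 - 1
                return (int) state;
            }
        }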
    The complete test results can be found here:
    http://solostuff.net/tsqsort/Tests_Percentage_Improvement_VC++.xls
    or:
    https://docs.google.com/spreadsheets/d/1wxNOAcuWT8CgFfaZzvjoX8x_WpusYQAlg0bXGWlLbzk/edit?usp=sharing
    Theoretical Analysis
    A classical Quicksort algorithm performs fewer than 2n*ln(n) comparisons on average (see Jacek Cichon's paper) and fewer than 0.333n*ln(n) swaps on average (see Wild and Nebel's paper). Triple State performs about the same number of comparisons, but fewer swaps: about 0.222n*ln(n) in theory. In practice, Triple State Quicksort performs even fewer comparisons on large arrays because of the new 5-stage pivot selection algorithm it uses. Here is the detailed theoretical analysis:
    http://solostuff.net/tsqsort/Asymptotic_analysis_of_Triple_State_Quicksort.pdf
    Using SSE2 instruction set
    SSE2 uses the 128-bit XMM registers, which can do memory copy operations in parallel since there are 8 of them. SSE2 is primarily used to speed up copying of large memory blocks in demanding real-time graphics applications.
    In order to use SSE2, copied memory blocks have to be 16-byte aligned. Triple State Quicksort automatically detects whether the element size and the array's starting address are 16-byte aligned and, if so, switches to using SSE2 instructions for extra speedup. This decision is made only once, when the function is called, so it has minor overhead.
    A few other notes
    - The standard C++ sorting function on almost all platforms religiously takes a "callback pointer" to a comparison function that the user/programmer provides. This is obviously for flexibility and to allow closed-source libraries. Triple State defaults to using a callback function. However, callback functions have bad overhead when called millions of times. Using inline/operator or macro-based comparisons will greatly improve performance; an improvement of about 30% to 40% can be expected. Thus, I seriously advise against using a callback function whenever possible. You can disable the callback function in my code by #undefining the CALL_BACK precompiler directive.
    - Like most other efficient implementations, Triple State switches to insertion sort for tiny arrays, whenever the size of a sub-part of the array is less than the TINY_THRESH directive. This threshold is empirically chosen; I set it to 15. Increasing this threshold will improve the speed when sorting nearly sorted and reversed arrays, or arrays that are concatenations of both cases (which are common), but will slow down sorting random or other types of arrays. To remedy this, I provide a dual-threshold method that can be enabled by #defining the DUAL_THRESH directive. Once enabled, another threshold, TINY_THRESH2, will be used, which should be set lower than TINY_THRESH; I set it to 9. The algorithm is able to "guess" whether the array or sub-part of the array is already sorted or reversed, and if so will use TINY_THRESH as its threshold; otherwise it will use the smaller threshold TINY_THRESH2. Notice that the "guessing" here is NOT foolproof; it can miss. So set both thresholds wisely.
    - You can #define the RANDOM_SAMPLES precompiler directive to add randomness to the pivoting system, to lower the chances of the worst case happening at a minor performance hit.
    - When the element size is very large (320 bytes or more), the function/algorithm uses a new "late swapping" method. This automatically creates an internal array of pointers, sorts the pointer array, then swaps the original array elements into sorted order using minimal swaps, for a maximum of n/2 swaps. You can change the 320-byte threshold with the LATE_SWAP_THRESH directive.
    - The function provided here is optimized to the bone for performance. It is one monolithic piece of complex code that is ugly and almost unreadable. Sorry about that, but in order to achieve improved speed I had to ignore common and good coding standards a little. I don't advise anyone to code like this, and I myself don't; this is really a special case for sorting only. So please don't trip if you see weird code - most of it has a good reason.
    Finally, I would like to present the new function to Microsoft and the community for further investigation and possibly, inclusion in VC++ or any C++ library as a replacement for the sorting function.
    You can find the complete VC++ project/code along with a minimal test program here:
    http://solostuff.net/tsqsort/
    Important: To fairly compare two sorting functions, both should either use or NOT use a callback function. If one uses a callback and the other doesn't, then you will get unfair results; the one that doesn't use a callback function will most likely win, no matter how bad it is!!
    Ammar Muqaddas

    Thanks for your interest.
    Excuse my ignorance, as I'm not sure what you meant by "1 of 5" optimization. Did you mean median of 5?
    Regarding swapping pointers: yes, it is common sense, and rather common among programmers, to swap pointers instead of swapping large data types, at the small price of indirect access to the actual data through the pointers.
    However, there is a rather unobvious and quite terrible side effect of using this trick. After the pointer array is sorted, sequential (sorted) access to the actual data throughout the remainder of the program will suffer heavily from cache misses. Memory is accessed randomly, because the pointers still point to the unsorted data, causing many, many cache misses, which render the program itself slow even though the sort was fast!!
    Multi-threaded qsort is a good idea in principle, and easy to implement, obviously, because qsort itself is recursive. The thing is, multi-threaded qsort is really just stealing CPU time from other cores that might be busy running other apps; this can slow down those apps, which might not be ideal for servers. What researchers usually try to do is make the improvement in the algorithm itself.
    I will try to look at your sorting code; let's see if I can compile it.
