Arithmetic with large numbers

Hi,
I am developing code to compute the Poisson distribution. It involves computations with very small and very large decimal values.
Below is the formula for the Poisson probability:
exp(-Rate) * Power(Rate, Count) / Factorial(Count)
The problem is that as Rate increases, exp(-Rate) approaches zero. Since a double is only 8 bytes, it underflows to zero beyond a certain value. I tried BigDecimal, but BigDecimal doesn't support exponential operations.
Please suggest a solution for this.
Your help is highly appreciated.

Double probably isn't rounding off to zero; it's just not printing all 53 bits of the mantissa. It can represent numbers down to about 2**-1074 (the smallest subnormal value), if I'm not mistaken.
And BigDecimal supports exponentiation with the pow() method.
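One standard workaround, sketched here as my own illustration (not from the thread): evaluate the formula in log space, so the enormous Power(Rate, Count) and the vanishing exp(-Rate) cancel each other before anything underflows.

public class PoissonPmf {
    // ln P(count; rate) = -rate + count*ln(rate) - ln(count!)
    // Every term stays a moderate-sized double even when exp(-rate)
    // alone would underflow to 0.0.
    static double logPoisson(double rate, int count) {
        double logFactorial = 0.0;
        for (int k = 2; k <= count; k++) {
            logFactorial += Math.log(k);
        }
        return -rate + count * Math.log(rate) - logFactorial;
    }

    public static void main(String[] args) {
        double rate = 800.0;  // exp(-800.0) underflows a double to 0.0
        int count = 790;
        double logP = logPoisson(rate, count);
        System.out.println("ln P = " + logP);
        System.out.println("P    = " + Math.exp(logP));  // safe here: logP itself is small
    }
}

If you need an exact decimal value rather than the probability, BigDecimal.pow(int) can compute Power(Rate, Count) exactly, and exp(-Rate) can be approximated by summing its Taylor series in BigDecimal; but for comparing or reporting probabilities, working with the logarithm is usually all you need.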

Similar Messages

  • Working with Large Numbers

    Hi there,
    I am currently doing a school assignment and not looking for answers but just a little guidance.
    I am working with large numbers and the modulo operator.
    I might have some numbers such as :
    int n = 221;
    int e = 5;
    int d = 77;
    int message = 84;
    int en = (int) (Math.pow(message, e) % n);
int dn = (int) (Math.pow(en, d) % n);
Would there be a better way to do this kind of calculation? The dn value should come out the same as message, but I always get something different, and I think I might be losing precision because an int can only hold smaller values.
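A likely fix, sketched as my own illustration (not from this thread): Math.pow(en, d) builds an intermediate value of roughly 140 digits, far beyond a double's 15-16 significant digits, so the final % n operates on an already-rounded number. BigInteger.modPow reduces modulo n at every step, so nothing is ever lost:

import java.math.BigInteger;

public class ModPowDemo {
    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(221);
        BigInteger e = BigInteger.valueOf(5);
        BigInteger d = BigInteger.valueOf(77);
        BigInteger message = BigInteger.valueOf(84);

        // Every intermediate result stays reduced mod n.
        BigInteger en = message.modPow(e, n);  // 84^5 mod 221 = 67
        BigInteger dn = en.modPow(d, n);       // 67^77 mod 221

        System.out.println("en = " + en);
        System.out.println("dn = " + dn);  // prints 84, matching message
    }
}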

    EJP wrote:
    It might make sense in some contexts to have a positive and negative infinity.
    Yes, perhaps that's a better name. Guess I was harking back to old COBOL days :-).(*)
But the reason these things exist in FP is because the hardware can actually deliver them. That rationale doesn't apply to BigInteger.
Actually, it does. All I'm talking about is a value that compares higher or lower than any other. That could be done either by a special internal sign value (my slight preference) or by simply adding code to the compareTo(), equals() and hashCode() methods that takes the two constants into account (as they already do with ZERO and ONE).
    Don't worry, I'm not holding my breath; but I have come across a few situations in which values like that would have been useful.
    Winston
    Edited by: YoungWinston on Mar 22, 2011 9:07 AM
    (*) Actually, '±infinity' tends to suggest a valid arithmetic value, and I wasn't thinking of changing existing BigInteger/BigDecimal maths (except perhaps to throw an exception if either value is involved).
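A sketch of what Winston describes, written as a hypothetical wrapper rather than a change to BigInteger itself; every name here is invented for illustration:

import java.math.BigInteger;

final class ExtendedBigInteger implements Comparable<ExtendedBigInteger> {
    static final ExtendedBigInteger POSITIVE_INFINITY = new ExtendedBigInteger(+1, null);
    static final ExtendedBigInteger NEGATIVE_INFINITY = new ExtendedBigInteger(-1, null);

    private final int sign;          // -1 or +1 for the sentinels, 0 for finite values
    private final BigInteger value;  // null only for the two sentinels

    private ExtendedBigInteger(int sign, BigInteger value) {
        this.sign = sign;
        this.value = value;
    }

    static ExtendedBigInteger of(BigInteger v) {
        return new ExtendedBigInteger(0, v);
    }

    // The sentinels compare above/below every finite value; equals() and
    // hashCode() would need the same special-casing, as Winston notes.
    @Override
    public int compareTo(ExtendedBigInteger other) {
        if (sign != other.sign) return Integer.compare(sign, other.sign);
        if (sign != 0) return 0;              // both are the same infinity
        return value.compareTo(other.value);  // both finite
    }
}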

  • Business Partner records with large numbers of addresses -- Move-in issue

    Friends,
Our recent CCS implementation (ECC6.0ehp3 & CRM2007) included the creation of some Business Partner records with large numbers of addresses. Most of these are associated with housing authorities, large developers and large apartment complex owners. Some of the Business Partners have over 1000 address records, and one particular BP has over 6000 addresses that were migrated from our legacy system. We are experiencing very long run times when executing move-ins and move-outs because the system reads the full volume of addresses attached to the Business Partner. In many cases, the system simply times out before it can execute the transaction. SAP's suggestion is that we run a BAPI to cleanse the addresses and also implement a BADI to prevent the creation of excess addresses.
Two questions surround the implementation of this code. First, will the BAPI that cleanses the addresses wipe out all address records except for the standard address? That presents an issue: we need to ensure that the standard address on the BP record is the one we have identified as the proper mailing address. The second question is around the BADI that prevents the creation of excess addresses. It looks like this BADI will prevent the move-in address from updating the standard address on the BP record, which in the vast majority of cases is exactly what we would want.
    Does anyone have any experience with this situation of excess BP addresses and how did you handle the manipulation and cleansing of the data and how do you maintain it going forward?
    Our solution is ECC6.0Ehp3 with CRM2007...latest patch level
    Specifically, SAP suggested we apply/review these notes:
    Note 1249787 - Performance problem during move-in with huge addresses
**applied this ... did not help
    Note 861528 - Performance in move-in for partner w/ large no of addresses
    **older ISU4.7 note
    Directly from our SAP message:
    use the function module
    BAPI_BUPA_ADDRESS_REMOVE or run BAPI_ISUPARTNER_CHANGE to delete
    unnecessary business partner addresses.
    Use BAdI ISU_MOVEIN_CUSTOMIZE to avoid the creation of unnecessary
    business partner addresses (cf. note 706686) in the future for that
    business partner.
    Note 706686 - Move-in: Avoid unnecessary business partner addresses
    Does anyone have any suggestions and have you used above notes/FMs to resolve something like this?
    Thanks,
    Nick

    Nick:
One thing to understand is that the BADI and BAPI are just the tools or mechanisms that will enable you to fix this situation. You or your development team will need to define the rules under which these tools are used. Let's take them one at a time.
BAPI - the BAPI for business partner address maintenance. It would seem that you need to create a program which first reads the partners and the addresses assigned to them, and then compares these addresses to each other to find duplicates. These duplicates can then be removed, provided they are not used elsewhere in the system (e.g. on a contract account).
BADI - the BADI for business partner address maintenance. Here you would need to identify the particular scenarios where addresses should not be copied. I would expect that most move-ins would meet the criteria of adding the address and changing the standard address. But for some, e.g. landlords or housing complexes, you might not add an address because it already exists for the business partner, and you might not change the standard address because those accounts do not fall under that scenario. This will take some thinking and design to ensure that the address add/change functions are executed under the right circumstances.
    regards,
    bill.

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
I suspect the main problem is that the slowest folders are the ones with large numbers of mails - maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mails?
Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.

  • Handling tables with large numbers of fields

    Hi
    What is the best practice to deal with tables having large numbers of fields? Ideally, I would like to create folders under a Presentation Table and group fields into folders (and leave fields that may be needed rarely in a folder named 'Other Information').
    Is there a way to do this in Oracle BI? Any alternatives?
    Thanks

    Answering my own question:
    http://oraclebizint.wordpress.com/2008/01/31/oracle-bi-ee-10133-nesting-folders-in-presentation-layer-and-answers/
This is definitely a working solution (creating multiple tables and entering '->' in their description in order for them to act as subfolders). It is definitely not intuitive and rather ugly, especially since reordering tables and columns isn't possible (or is it, in some other non-obvious way?).
    Anyway it seems we have to live with this.

• Takes a long time to drop tables with large numbers of partitions

    11.2.0.3
This is for a build; we are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. That lets us test the build all the way through. It's our process.
This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to do DML against the data dictionary. Any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards.
Anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds.
I can't change the build process. My only option is to try to make this run a little faster. I can't do anything about the hardware (lots of VMs crammed onto too few servers).
This is not a production issue. It's more of a hassle.

Support Note 798586.1 shows that DROP USER CASCADE was slower than dropping individual objects -- at least in 10.2. Not sure if that is still the case in 11.2.
    Hemant K Chitale

• VS 2013 SP4 crashes when opening a form with large numbers of elements on Windows 8 32-bit

Hi, I have a C# project that is fully functional, but opening the main form causes VS 2013 SP4 to crash. I can build the solution and open all files. Only when I open the form in the Designer does VS crash and want to restart. There are no log entries, except an Event ID 1000 with code 0x0000409 in the event log. I can open the form on two other Windows 8.1 64-bit machines without problems. Watching free memory in Resource Monitor while opening the form, it crashes while at least around 190 MB are still free.
How can I investigate this problem?

Hi Frank,
If the same solution works well on other machines, I suspect it is an environment issue on this one.
Of course, to make sure it is not a problem with the project files, please create a blank solution, add all the project files to the new solution, and test again.
You could also delete the .suo file in your solution folder, re-open your .sln file, and test again.
If there are many projects in the same solution, I suggest you create separate solutions for them and test again.
But based on your description, I suspect it is related to your VS/Windows environment.
Please disable all add-ins in your VS; you could also run VS in safe mode and test again:
http://msdn.microsoft.com/en-us/library/ms241278.aspx
To rule out an account issue, please run VS as administrator.
In addition, we should make sure it is not a Windows configuration issue. For example, if Windows has been running for a long time, the task manager is very busy, or other processes are taking high CPU, that will impact the performance of VS and other software.
Please restart your PC, close other processes that take a lot of memory, and temporarily disable third-party tools like the firewall or anti-virus. If you have several VS editors open, close them, open just one, and test again.
    Best Regards,
    Jack 

  • Looking for BPC user contacts with large numbers of dimension members

    I would like to make contact with other BPC users that have multiple dimensions with dimension members in excess of 10K.
    Thanks,
    Cary Schulz
    Newfield Exploration
    281-674-2004

    Hi Helene,
    With 3,000 members you should not be experiencing these problems, if your report is designed properly.
I'm running BPC 5.1 SP8 on SQL 2005, with a dimension containing 22,000 members. Client PCs are typical (XP, Excel 2007, 1 or 2 GB RAM).
    Using EVDRE and this dimension expanding on the rows, most reports & input schedules can expand & refresh in the range of 10 to 30 seconds.
    The faster times are when I use a row expansion memberset using dimension properties, such as Active="X". The slower times are when the memberset is hierarchy-based, such as BAS.
    If you're using a dynamic template (one using EVEXP for the expansion) then you should start over using EVDRE. It will be much faster, particularly if you optimize your row expansion.
    It's often a good idea to add dimension properties specifically for the purpose of optimizing the report expansion, if the dimension has thousands of members. I sometimes go as far as to add properties which mimic the hierarchy (MyLevel2, MyLevel3, etc) just for this purpose.

  • Zen X-Fi Style has problems with large numbers of tracks

    Hi.
I use my Zen X-Fi Style to listen to audio books. But if an audio book has more than 250 or so tracks (256, maybe), the player is unable to list the tracks in the correct order. The tracks are listed in the correct order in Creative Centrale, but on the player they are listed in the following order: 256, 1, 257, 2, 258, 3, etc.
    Anyone experienced the same?

Hi Kokchoy,
sadly this is happening at every start-up.
I have already tried the following:
copying files to the player with Explorer
copying files with Windows Media Player (Win7) --> synchronizing
--> But nothing has changed yet.
It went faster when only 8 files were on the player.
Also:
Every time the player is connected to my laptop, it starts "reorganizing" afterwards, although no file transfer has taken place. Reorganizing also takes 2 to 4 minutes.
I have currently tried firmware versions .00.04 and .00.05e. The behaviour stays exactly the same.

  • FR Layout issue with large number of columns

    Hi!
I'm developing a report in FR 11.1.1.3 with over 30 columns.
The issue is that when I run the report in web preview, the dropdown for the dimension in the page axis moves to the far right and disappears from the display.
If I reduce the number of columns I don't have this problem.
I've already tried maximizing the workspace, without any result.
Can anyone help me deal with reports with large numbers of columns?
    Regards,
    Luís
    Edited by: luisguimaraes on 13-Mar-2012 06:48

IE8 could be the reason. According to the supported platform matrix (http://www.oracle.com/technetwork/middleware/bi-foundation/oracle-hyperion-epm-system-certific-2-128342.xls), tab "EPM System Basic Platform", row 70, FR and Workspace must be patched in order for IE8 to work.
    FR Patch number: 9657652
    Workspace Patch number: 9314073
    Patches can be found on My Oracle Support. Just search for the patch number.
    Cheers,
    Mehmet

  • Very Large Numbers Question

I am a student with a question about how Java handles very large numbers. Our teacher wrote: "...the program produces values that are larger than Java can represent and the obvious way to test their size does not work. That means that a test that uses >= rather than < won't work properly, and you will have to devise something else..." I am wondering about the semantics of that statement.
Does Java "know" the number, so that it can be used in other kinds of mathematical expressions, or does Java "see" the value only as gibberish?
I am waiting on a response from the teacher on whether we are allowed to use BigInteger and the like, BTW. As the given program stands, double is used. Thanks for any help understanding this issue!

    You're gonna love this one...
package forums;

class IntegerOverflowTesterator {
    public static void main(String[] args) {
        int i = Integer.MAX_VALUE - 1;
        while (i > 0) {
            System.out.println("DEBUG: i=" + i);
            i++;  // wraps past MAX_VALUE to MIN_VALUE, so the loop exits
        }
    }
}
You also need to handle the negative case... and that gets nasty real fast... A positive plus/times a positive may overflow, but so might a negative plus a negative.
This is a decent summary of the underlying problem: http://mindprod.com/jgloss/gotchas.html#OVERFLOW.
The POSIX specification is also worth reading regarding floating-point arithmetic standards... Start here, I guess: http://en.wikipedia.org/wiki/POSIX ... and I suppose the JLS might be worth a look too: http://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.html
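A follow-up sketch showing how to detect int overflow rather than just demonstrate it (my own illustration, not from the thread; Math.addExact needs Java 8+):

public class OverflowCheckDemo {
    public static void main(String[] args) {
        int a = Integer.MAX_VALUE - 1;

        // Widen to long before the operation, then compare against the int range.
        long wide = (long) a + 10;
        boolean overflows = wide > Integer.MAX_VALUE || wide < Integer.MIN_VALUE;
        System.out.println("a + 10 overflows int: " + overflows);  // true

        // On Java 8+, the exact-arithmetic helpers throw instead of wrapping silently.
        try {
            int sum = Math.addExact(a, 10);
            System.out.println("sum = " + sum);
        } catch (ArithmeticException ex) {
            System.out.println("Math.addExact detected the overflow");
        }
    }
}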

  • ETEXT output truncating/rounding large numbers

    I have an eTEXT template that includes the definition:
<FORMAT>: NUMBER
<DATA>: BankAccount/BankAccountNumber
When a large number is encountered, the output appears truncated/rounded. For example, the account number 444470301880513641 is output as 444470301880513660.
    Data:
    <BankAccountNumber>444470301880513641</BankAccountNumber>
    Output:
    ...#444470301880513660#....
Other than converting the FORMAT to an Alpha, are there any known tips or tricks to correct this scenario?
    Regards
    Edited by: bdansie on 10-Jan-2012 17:44
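The symptom matches IEEE-754 double rounding: a double carries 53 significand bits, so integers above 2^53 (about 9e15) cannot all be represented exactly, and an 18-digit account number gets snapped to a nearby representable value. A minimal Java illustration of the round-trip (my own, not from the thread):

public class DoubleRoundTrip {
    public static void main(String[] args) {
        long account = 444470301880513641L;

        // At this magnitude adjacent doubles are 64 apart, so the low
        // digits are lost on the way through a double.
        double asDouble = (double) account;
        long roundTripped = (long) asDouble;

        System.out.println("original:   " + account);
        System.out.println("round-trip: " + roundTripped);  // differs in the last digits
    }
}

Treating the account number as text (the Alpha format mentioned above) is the right instinct: account numbers are identifiers, not quantities, so nothing should ever do arithmetic on them.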


  • Error adding large numbers

I am adding large numbers and getting the wrong result. There seems to be some rounding taking place in the sum, but I am adding integers. I am using DASYLab 9.02, and the data is summed in the arithmetic module. Example problem: 331153408 - 31570 = 331121838, but the output is 331121824. I tried making the variable where the inputs are stored 20 digits with 10 decimals, but that did not help, and I also tried dividing first by 1000 and 10000, only to get different answers. Is there a setting that needs to be configured differently?
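The reported result is exactly what IEEE-754 single precision produces, which suggests the module is summing in 32-bit floats: a float carries 24 significand bits, and near 3.3e8 adjacent floats are 32 apart, so the exact answer 331121838 gets snapped to 331121824. A small Java illustration of the same arithmetic (my own, for comparison; DASYLab itself is not involved):

public class FloatPrecisionDemo {
    public static void main(String[] args) {
        // Both operands are exactly representable as floats; only the
        // result must be rounded, to the nearest multiple of 32.
        float singlePrecision = 331153408f - 31570f;
        System.out.printf("float:  %.0f%n", singlePrecision);  // 331121824

        double doublePrecision = 331153408d - 31570d;
        System.out.printf("double: %.0f%n", doublePrecision);  // 331121838
    }
}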

Hi Tom, thanks for the reply. I am reading a hex value in from a serial port. The number is large, and when I format it as hex on one channel it is off by a small amount; there is some rounding in the least significant digit. I then take another reading later and calculate the delta. Since I don't have the right values to begin with, my difference calculation is wrong. When I read the value as bytes through 8 channels, I can see the ASCII for each digit, and they are correctly displayed. Using a formula module I convert from ASCII to decimal to get the decimal equivalent of each hex character; in the next formula I do the math to find the value of each hex digit in the place it holds. Then, using a sum arithmetic module, I get the final value of the large number coming in. It is correct all the way up to the arithmetic sum. I tried cutting the large hex number into two parts and then adding up the weighted parts, and still got the wrong answer in the display module. I also tried dividing the halves by 1000 prior to adding them, so that I was working with smaller numbers in the summation, but that didn't help.
So I did the math directly in the extended portion of the variables. The numbers add up properly there, but when I try to bring the correct sum back into the worksheet to display it, it is wrong again. It seems that a value around 04000000 hex is the limit: below that, the value calculated in the variable field is displayed correctly; above it, there is some degree of variation. I can set the limit of cycles to a value below where the addition becomes problematic, or I can export the hex to a spreadsheet, do the math there, and bring it back in, but I will still have the same issue displaying the answer.
The limitation doesn't seem to be in DASYLab in general but in the Read, Formula, and Constant Generator modules that read the variable back into the worksheet. It is displayed properly in the contents window.

• Problem with large databases.

Lightroom doesn't seem to like large databases.
I am playing catch-up, using Lightroom to enter keywords for all my past photos. I have about 150K photos spread over four drives.
Even placing a separate database on each hard drive is causing problems.
The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program and try the import again, Lightroom adds about 500 more photos and then crashes or freezes again.
I may have to go back and import them one folder at a time, or use iView instead.
This is a deal-breaker for me.
I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
I am using XP on a dual-core machine with 3 GB of RAM.
Anyone else finding this?
What is your work-around?

Christopher,
True, but given the number of posts where users have had similar problems ingesting images into LR--where LR runs without crashes or further trouble once the images are in--the evidence points to some LR problem ingesting large numbers.
It may also be that users are attempting to use LR for editing while ingesting large numbers--I found that I simply could not do that without a crash occurring. When I limited imports to 2K at a time--keeping my hands off the keyboard while the import occurred--everything went without a hitch.
However, as previously pointed out, it shouldn't require that--none of my other DAMs using SQLite do, and I can multitask while they are ingesting.
But you are right--multiple single causes--and complexly interrelated multiple causes--could account for it on a given configuration.

  • Can't Empty Trash With Large Number of Files

    Running OS X 10.8.3
I have a very large external drive that had a Time Machine backup on the main partition. At some point, I created a second partition and started doing backups on the new partition. On Wednesday, I finally got around to doing some "housecleaning" tasks I'd been putting off. As part of that, I decided to clean up my external drive. So... I took the old, unused and unwanted Backups.backupdb that used to be the Time Machine backup and dragged it to the Trash.
    Bad idea.
    Now I've spent the last 3-4 days trying various strategies to actually empty the trash and reclaim the gig or so of space on my external drive.  Initially I just tried to "Empty Trash", but that took about four hours to count up the files just to "prepare to delete" them. After the file counter stopped counting up, and finally started counting down... "Deleting 482,832 files..." "Deleting 482,831 files..." etc, etc...  I decided I was on the path to success, so left the machine alone for 12-14 hours.
    When I came back, the results were not what I expected. "Deleting -582,032 files..."  What the...?
    So after leaving that to run for another few hours with no results, I stopped that process.  Tried a few other tools like Onyx, TrashIt, etc...  No luck.
    So finally decided to say the **** with the window manager, pulled up a terminal, and cd'ed to the .Trash directory for my UID on the USB volume and did a rm -rfv Backups.backupdb
    While it seemed to run okay for a while, I started getting errors saying "File not found..." and "Invalid file name..." and various other weird things.  So now I'm doing a combination of rm -rfing individual directories, and using the finder to rename/cleanup individual Folders when OSX refuses to delete them.
    Has anyone else had this weird overflow issue with deleting large numbers of files in 10.8.x? Doesn't seem like things should be this hard...

    I'm not sure I understand this bit:
If you're on Leopard 10.5.x, be sure you have the "action" or "gear" icon in your Finder's toolbar (Finder > View > Customize Toolbar). If there's no toolbar, click the lozenge at the upper-right of the Finder window's title bar. If the "gear" icon isn't in the toolbar, select View > Customize Toolbar from the menubar.
    Then use the Time Machine "Star Wars" display:  Enter Time Machine by clicking the Time Machine icon in your Dock or select the TM icon in your Menubar.
    And this seems to defeat the whole purpose:
    If you delete an entire backup, it will disappear from the Timeline and the "cascade" of Finder windows, but it will not actually delete the backup copy of any item that was present at the time of any remaining backup. Thus you may not gain much space. This is usually fairly quick
    I'm trying to reclaim space on a volume that had a time machine backup, but that isn't needed anymore. I'm deleting it so I can get that 1GB+ of space back. Is there some "official" way you're supposed to delete these things where you get your hard drive space back?
