Any suggestions on calculating with very large or small numbers?

It seems that double values give only about 17 significant decimal digits (around 1e17) of precision.
Is there a way in iPhone calculations to get more precision for very large and small numbers, like 1e80 and so forth? I know that's more than the number of atoms in the universe, but still.
I tried "long double" but that didn't seem to make any difference.
Just a limitation?
Thanks,
doug
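For reference, the limit here is precision, not range: an IEEE 754 double goes up to about 1.8e308 but only carries roughly 15-17 significant decimal digits (and on iPhone hardware "long double" is typically just another 64-bit double, which is why it made no difference). A minimal sketch of that distinction, written in Java only because it is compact; doubles behave the same way on the phone:

public class DoubleLimits {
    public static void main(String[] args) {
        double big = 1e80;                     // well within double's range (~1.8e308)
        System.out.printf("1e80 as a double: %e%n", big);

        // Precision is the real limit: the +1 below is lost, because a 53-bit
        // mantissa only carries about 15-17 significant decimal digits.
        System.out.println(1e17 + 1 == 1e17);  // prints true

        // For more significant digits, use an arbitrary-precision type
        // (java.math.BigDecimal here; NSDecimalNumber or a bignum library on iOS).
        java.math.BigDecimal exact = new java.math.BigDecimal("1e80").add(java.math.BigDecimal.ONE);
        System.out.println(exact.toPlainString());
    }
}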

Hmmm... maybe I was just having a problem with my formatted string, then?
I was using the NSString %g format, which is supposed to print in exponential notation if the number is greater than 1e4 or less than 1e-4, or something like that.
But I was not getting any exponents greater than e17, and then I was apparently getting overflows, because the numbers had negative mantissas.
All the variables involved were doubles...
How did you "look at" z?
Thanks,
doug
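For reference, a small sketch of the formatting side (Java shown; NSString's %g and %e follow the same printf conventions). A plain double prints fine in exponential notation far beyond e17, so garbage or negative-looking output usually points to the value passing through an integer conversion, or to a mismatched format specifier (for example a long double handed to %g instead of %Lg in C-family printf), rather than to the double itself overflowing:

public class BigDoubleFormat {
    public static void main(String[] args) {
        double d = 9.82451653e80;

        System.out.printf("%%e: %e%n", d);  // always exponential: 9.824517e+80
        System.out.printf("%%g: %g%n", d);  // switches to exponential for very large or small magnitudes

        // The value itself is nowhere near overflow for a double.
        System.out.println(Double.isInfinite(d));  // false
    }
}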

Similar Messages

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects will be stored in the old generation and a smaller number will be stored in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50GB of RAM for both the max and min heap. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we are using the default, which optimizes for throughput).
    I used the JRMC to profile the operation and here were the results that I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC Pause time average 600ms
    GC Pause time max 2.5 sec
    It had to do 4 young generation collections, which were very fast, and then 2 old generation collections, which were each about 2.5s (the entire operation takes 45s).
    For the long old generation collections, about 50% of the time was spent in mark and 50% in sweep. Drilling down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: although 50GB is committed, it fluctuates between 32GB and 20GB of heap usage. To give you an idea of what is stored in the heap, about 50% of the heap is char[] and another 20% is int[] and long[].
    My question is: are there any other flags that I could try that might help improve performance, or is there anything I should be looking at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?

  • iPod touch 4th generation, running iOS 5, boots up with very large icons, impossible to navigate; how do I get back the standard-sized home page?

    iPod touch 4th generation, running iOS 5, boots up with very large icons, impossible to navigate; I need to return to the standard-sized home screen.

    Triple-click the Home button and then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • Hi there, I changed my passcode but forgot it, and I've tried to back my phone up, but I need to type in my old passcode! Any suggestions? Thank you very much

    Hi there, I am 13 years of age and I recently changed my passcode but forgot it. I've tried to back my phone up, and I don't really want to restore it without backing it up, but I need to type in my old passcode to back it up! Any suggestions? Thank you very much, I appreciate your help.

    If you've never synced while this passcode was enabled, you will not be able to back up now without first entering the passcode. bbfc's post is incorrect.
    You will have to force the phone into recovery mode and restore it, as described here:
    http://support.apple.com/kb/ht1212
    If you don't have an existing backup to restore to your phone, then you will lose all of your data, as your only option will be to restore as new to remove the passcode.

  • Best data Structor for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and then it has lots of methods to make manipulating and working with the CSV file simpler: operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amounts of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.

    How much internal storage (RAM) is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size does turn out to be prohibitive to load into memory, how about a relational database?
    Another thing you may want to consider is a more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
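    For what it's worth, a minimal Java sketch of the "just load it" approach suggested above (the file name and column index are invented for illustration, and the naive split does not handle quoted fields):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class CsvTable {
        public static void main(String[] args) throws IOException {
            List<String[]> rows = new ArrayList<>();
            try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    rows.add(line.split(","));  // naive split; real CSV needs quote handling
                }
            }
            // Example column operation: sum column 2 of every row.
            double sum = 0;
            for (String[] row : rows) {
                sum += Double.parseDouble(row[2]);
            }
            System.out.printf("%d rows, column-2 sum = %f%n", rows.size(), sum);
            // A 10 MB file fits comfortably in a default heap. If the data ever
            // outgrows memory, process it row by row instead of keeping the List,
            // or push it into a relational database as suggested above.
        }
    }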

  • Bridge CS4 with very large image archves

    I have just catalogued 420,000 images from 3TB of my photographic collection. For those interested, the master cache files became quite large and took about 9 days of continuous processing:
    cache size: 140 gb
    file count: 991,000
    folder count: 3000
    All cache files were also exported to the disk directories.
    My primary intent was to use the exported cache files as a "quick" browsing mechanism with Bridge. Of course, "quick" is a rather optimistic word; however, it is very significantly faster than having Bridge rebuild caches as needed.
    I am now trying to decide whether it is worth keeping the master Bridge cache, because of the limitations of the Bridge implementation, which is not very flexible as to where and when the master cache adds new temporary cache entries.
    Any suggestions as to the value of keeping the master cache would be appreciated. I don't really need keyword or other rating systems, since I presently use a simple external database for this type of image location.
    I am also interested in knowing if the "500,000" entry cache limitation is real - or if more than 500,000 images can be stored in the master cache, since I will be exceeding this image count next year.

    I have a Bridge 5 cache system with 600,000 images over 8 TB of networked disk.  I too use this to "speed up" the browsing process and rely primarily on keyword processing to group images.  The metadata indexing is, for practical purposes, totally useless (it never ceases to amaze me why Adobe thinks it useful for me to know how many images were taken with a lens focal length of 73mm - or some other equally useless statistic).  The one serious missing keyword-indexing feature I can think of is the ability to have a keyword associated with a directory.  For example, I have shot many dozens of dance, theatre, music and sports productions - it would be much more useful to catalogue a directory with the keywords "Theatre" and "Romeo and Juliette" than to attempt to keyword each individual image.   It is, of course, possible to work around the restrictions, but that is very unclean and certainly less than desirable.   Keywording a project (i.e. a directory) is a totally different kettle of fish from keywording an image.  I also find the concept of the "collection" very useful and well implemented.
    I do maintain a complete cache build of my system.  It is spread over two master caches, one for the first 400,000 images and a second for the next 400,000 (I want to stay within the 500,000 cache size limit - it is probably associated with the MySQL component of Bridge, and I think it may have problems if you exceed the limit by a substantial amount.  With Bridge on CS3, when the limit was exceeded, the cache system self-destructed and I had to rebuild).
    The only thing I can think of (and it seems to be part of Adobe's design) is that Bridge will rebuild the master cache for a working directory "for no apparent reason", such as when certain (unknown) changes are made to ACR.   Other automatic rebuilds have been reported by others; however, Adobe does not comment upon when or what causes a rebuild.  Of course, this has a serious impact upon getting work done - it is a bloody pain to have Bridge suddenly process 1500 thumbs and preview extracts simply to keep the master cache completely and perfectly synchronized (in terms of image quality) with what might be displayed if you want to load a raw image into Photoshop.  This strategy is IMHO completely out of step with how (at least I) use the browsing features of Bridge.
    It may be of concern that Adobe may, for design reasons, change the format of the directory cache files, and you will have to completely rebuild all master and directory caches yet again - which is a real problem if you have many hundreds of thousands of images.  This happened when the cache system changed from CS3 to CS4 - and Adobe did not provide a conversion programme to migrate the old format to the new.  This significantly adds to the rebuild time, since each raw image must be completely reprocessed.  My current rebuild of the master cache has taken over two elapsed weeks of continuous running.
    It would be nice if Adobe would allow some control over what is recorded in the master cache - for example, "do you wish metadata to be indexed".
    (( As an aside, Adobe does not comment upon why using Bridge to import images from a CF card results in building a .xmp file containing nothing but metadata for each raw file.  I am at a loss to speculate what really useful thing results, other than maybe speeding up the processing of the (IMHO useless) aspects of metadata. ))
    To answer your question, I do think the master cache is worth keeping - and we can pray that Adobe puts more thought into why the master cache exists and who uses the type of information presently indexed within the cache.

  • Problem in compilation with very large number of method parameters

    I have a Java file which I created using WSDL2Java. Since the actual WSDL has a complex type with a large number of elements (around 600) in it, the resulting Java file (from WSDL2Java) has a method that takes 600 parameters of various types. When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully using JBuilder X. The only way I could compile successfully at the command prompt was by reducing the number of parameters to around 250, but unfortunately that's not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?

    "... a method that takes 600 parameters ..."
    Not compatible with the spec; see Method Descriptors.
    "When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile."
    As it should.
    "The same is compiling successfully using JBuilder X."
    If JBuilder produces a class file, that class file may very well be invalid.
    "The only way I could compile successfully at the command prompt is by reducing the number of parameters to around 250"
    Which is what the spec says.
    "but unfortunately that's not a workable solution."
    Pass an array of objects - an array is just one object.
    "Does Sun specify any upper bound on the number of parameters that can be passed to a method?"
    Yes.
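    For reference, a minimal sketch of the array workaround suggested above. The class-file format caps a method descriptor at 255 parameter slots, but a single array (or a holder object) counts as one slot no matter how many values it carries; the names below are made up, not WSDL2Java output:

    public class WideCall {

        // Instead of: void invoke(String p1, String p2, ..., String p600)
        static void invoke(Object[] params) {
            System.out.println("received " + params.length + " values");
        }

        public static void main(String[] args) {
            Object[] params = new Object[600];
            for (int i = 0; i < params.length; i++) {
                params[i] = "value-" + i;
            }
            invoke(params);  // one parameter slot, 600 values
        }
    }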

  • Need help with "Very large content bounds" error...

    Hey guys,
    I've been having an issue with Adobe Muse [V7.0, Build 314, CL 778523] - one of the widgets I tried from the Exchange library seemed to bug out and create a very large content box.
    This resulted in this error:
    Assert: "Very large content bounds (W:532767.1999999997 H:147446.49743999972) detected in BoxUtils::childBounds"
    Does anyone know how I could fix this issue?

    Hi there -
    Your file has been repaired and emailed back to you. Please let us know if you run into any other issues.
    Thanks!
    -Sam

  • Since getting version 22 (release update channel) I had a few days with very large text; today, Firefox will not start. How do I get it running again?

    I've searched for an .exe file in Firefox program files, but there doesn't seem to be one anywhere. I'd like to uninstall, download a new program and reinstall, but I'd rather not lose bookmarks and other settings.
    Running Windows 7 on a Sony Vaio laptop. Any suggestions? Thanks in advance.

    Certain Firefox problems can be solved by performing a ''Clean reinstall''. This means you remove the Firefox program files and then reinstall Firefox; you WILL NOT lose any bookmarks, history, or settings. Please follow these steps:
    '''Note:''' You might want to print these steps or view them in another browser.
    #Download the latest Desktop version of Firefox from http://www.mozilla.org and save the setup file to your computer.
    #After the download finishes, close all Firefox windows (click Exit from the Firefox or File menu).
    #Delete the Firefox installation folder, which is located in one of these locations, by default:
    #*'''Windows:'''
    #**C:\Program Files\Mozilla Firefox
    #**C:\Program Files (x86)\Mozilla Firefox
    #*'''Mac:''' Delete Firefox from the Applications folder.
    #*'''Linux:''' If you installed Firefox with the distro-based package manager, you should use the same way to uninstall it - see [[Installing Firefox on Linux]]. If you downloaded and installed the binary package from the [http://www.mozilla.org/firefox#desktop Firefox download page], simply remove the folder ''firefox'' in your home directory.
    #Now, go ahead and reinstall Firefox:
    ##Double-click the downloaded installation file and go through the steps of the installation wizard.
    ##Once the wizard is finished, choose to directly open Firefox after clicking the Finish button.
    Please report back to see if this helped you!

  • I cannot log on to my Comcast e-mail from my iMac anymore.  This is a new issue; recently I had to delete cookies from my system so my email wasn't frozen, but over the past few days it has gone from bad to worse.  Any suggestions-I am not very c

    I cannot log on to my Comcast e-mail from my iMac anymore.  This is a new issue; recently I had to delete cookies from my system so my email wasn't frozen, but over the past few days it has gone from bad to worse.  I am able to enter my login information, but it just goes right back to the home page.  Any suggestions… maybe I deleted something else in the process.

    OK, restart in Safe Mode; this will clear some caches. It's possible one or more is corrupt. To restart in Safe Mode, hold down the Shift key when you hear the startup tone until you see a progress bar. Let it fully boot, then restart normally and test.
    Also, I am assuming you have checked Finder - Preferences - General to see what boxes are checked in "Show these items on the desktop." You can also mount an item in Disk Utility: simply highlight it and then look in the File menu for Mount.

  • iPhone unlock screen has very large letters and numbers.

    iPhone has large letters and numbers on the unlock screen. I can see only about 1/4 of the screen, it's so large. I can't see the slide-to-unlock bar, only part of the actual time. It will not compress with fingers. The phone receives e-mail and phone calls, but I can't unlock it. I tried restoring twice and shutting the phone off; pulled the SIM card too. When it's restoring, the screen is normal size; as soon as it's finished, it goes back to super-size.

    I'm having exactly the same problem, but it doesn't do a thing when I tap it twice or even three times. It's just locked on my screen saver. I can unlock it with some difficulty, because the screen is so enlarged, but then I just get a big screen with numbers. I've reset it multiple ways, but with no luck.

  • Working with VERY LARGE tables - is it possible to bypass row counting?

    Hello!
    For working with large result sets, ADF provides the `Range Paging` mechanism for views, described in section 27.1.5 of the Developer's Guide For Forms/4GL Developers.
    It works well, but by default it counts the total row count to allow paging. In some cases the query `select count(1) from (SELECT ...)...` can take a very, very long time.
    But if a view object doesn't know the row count (for example, if we override the getEstimatedRowCount() method), the paging controls don't appear in the user interface.
    Meanwhile, I suggest it should be possible to display two paging links - Prev and Next - without knowing the row count. Is there a way to do it?
    Thanks in advance,
    Ilya Rodionov.

    Hi Ilya,
    while you wait for Frank to dig up the right sample, you can read this thread:
    Re: ADF BC: Performance issue with getEstimatedRowCount (ER?)
    There we discuss the exact issue.
    Timo
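    For what it's worth, a rough sketch of the Prev/Next idea without ever issuing a count query. The range methods come from oracle.jbo.ViewObject, but the pager class itself is only an illustration (it assumes the view object is already configured for range paging) and detecting the last page by an under-full range is a heuristic, not tested ADF code:

    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;

    public class PrevNextPager {
        private final ViewObject vo;
        private final int pageSize;
        private int pageStart = 0;

        public PrevNextPager(ViewObject vo, int pageSize) {
            this.vo = vo;
            this.pageSize = pageSize;
            vo.setRangeSize(pageSize);
            vo.setRangeStart(0);
        }

        public Row[] currentPage() {
            return vo.getAllRowsInRange();
        }

        public boolean hasPrev() {
            return pageStart > 0;
        }

        // "Next" stays enabled as long as the current page came back full, so no
        // select count(*) is needed. (If the total happens to be an exact multiple
        // of the page size, the last "Next" click simply shows an empty page.)
        public boolean hasNext() {
            return currentPage().length == pageSize;
        }

        public void next() {
            if (hasNext()) {
                pageStart += pageSize;
                vo.setRangeStart(pageStart);
            }
        }

        public void prev() {
            if (hasPrev()) {
                pageStart = Math.max(0, pageStart - pageSize);
                vo.setRangeStart(pageStart);
            }
        }
    }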

  • Best Practice - Analyze table with very large partitions

    We have a table that contains 100 partitions with about 20M rows in each. Right now the analyze is taking about 1 hour per partition. The table is used for reporting and will have a nightly load of the previous day's data.
    What would be the best way to analyze this table, besides using a low value for ESTIMATE and a low GRANULARITY?
    Thank You.

    "Are you suggesting that the table is so big, it's not feasible to analyze anymore?"
    I'm suggesting it's not necessary. I think it's highly unlikely that a nightly load is going to change the stats in any meaningful way, unless you are loading millions of rows. The law of diminishing returns has kicked in.
    Remember, the standard advice from Oracle is to gather statistics once and then only bother refreshing those stats when we need to. From Metalink note #44961.1:
    "Given the 'best' plan is unlikely to change, frequent gathering statistics has no benefit. It does incur costs though."
    What you might find useful is to export the stats from your last run before you do the new run (you should do this anyway). Then after the next stats refresh import both sets of stats into dummy schemas and compare them. If the difference is significant then you ought to keep analysing (especially if yours is a DSS or warehousing database). But if they are broadly the same then maybe it's time to stop.
    Cheers, APC

  • Power of function with very large numbers & HEX array

    Hello,
    I'm having 3 problems and I would be grateful if someone can help. I've attached
    1) I need to calculate 982451653^15. I've used the 'Power of X' function, but the result I'm getting is incorrect.
    Is there a way to get the correct result?
    2) After that I need to calculate the modulo of the result, but I get nothing. I'm using the 'Quotient & Remainder' function.
    3) I need to transform the number 982451653 to hex --> 3A8F05C5 and send it to an array grouped by twos from behind, as shown below:
    3A8F05C5 --> [3A][8F][05][C5], written to the array from behind.
    The array should be:
    ...and for hex number 3A8F05C56 --> [03][A8][F0][5C][56]
    Array:
    Please help!
     

    Just for "fun", I decided to take my own suggestion and write a Big Number project to handle Addition and Multiplication of Arbitrarily-Long Integers.  I built in "sign" handling for Multiplication, but (in the interest of getting a "testable") I currently only support non-negative Addition (and no Subtraction, yet -- it should be a fairly easy, and you'll forgive the accidental pun, Add-On).  The Project has 11 sub-VIs, including Product, Sum, and Power, plus one designed for output called "Big Number String" (currently only a Decimal string is supported).  I was not necessarily "coding for speed of execution", but rather for clarity of operation and ease of "proving that this works".
    I tried it out on your problem.  I got out a 135-digit decimal number that appears to match what you posted as the Correct Answer (it starts with 76677477 ... and ends with ...35294157).  It executes in about 20 milliseconds.
    Just for fun, I also coded up a computation of 10000! (after reading Altenbach's post).  I was not aware of the Factorial Challenge, and haven't looked at the post he cited, so am unsure how my algorithm compares with the 100-millisecond champ.  I'm definitely slower -- about 37 seconds, and while I didn't print out the result, I got 35,660 digits, one more than what is noted in Christian's post.  However, you can Google Factorial 10000 and find its value posted on the Web -- my answer agrees with the posted value for the first (most-significant) 20-or-so digits that I compared.
    For the time being, I'm going to skip over how to convert this monster decimal string representation of a number to a hex representation -- my suspicion is that it will be easier to write a Hex Package to do the same calculation (and to define an inherently Hexy format to store the arbitrary-precision number) than to try to write a direct Conversion routine.  I'll leave this task (as well as creating one's own Big Number Project) as an "Exercise for the Reader".  Consider this an Existence Proof.
    Bob Schor
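    As a cross-language aside (the original question is about LabVIEW), the same three steps are only a few lines with java.math.BigInteger; the modulus below is an example value only, since the post does not say which divisor the Quotient & Remainder step uses:

    import java.math.BigInteger;

    public class BigNumberDemo {
        public static void main(String[] args) {
            BigInteger base = BigInteger.valueOf(982451653L);

            // 1) 982451653^15 with full precision (a 135-digit result).
            BigInteger power = base.pow(15);
            System.out.println(power);

            // 2) The remainder after dividing by some modulus (example modulus only).
            BigInteger modulus = BigInteger.valueOf(1000000007L);
            System.out.println(power.mod(modulus));
            // If only the remainder is needed, modPow is much cheaper:
            System.out.println(base.modPow(BigInteger.valueOf(15), modulus));

            // 3) 982451653 in hex is 3A8F05C5; toByteArray() already yields the
            //    big-endian byte groups [3A][8F][05][C5] asked for in the post.
            System.out.println(base.toString(16).toUpperCase());  // 3A8F05C5
            StringBuilder sb = new StringBuilder();
            for (byte b : base.toByteArray()) {
                sb.append(String.format("[%02X]", b));
            }
            System.out.println(sb);  // [3A][8F][05][C5]
        }
    }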
     

  • Pivot table with very large number of columns

    Hello,
    here is the situation:
    One table contains raw data; from this table I feed another with extracted information (3 fields); I have to turn the content into a pivot table
    Ro --- Co --- Va
    A      A      1
    A      B      1
    A      C      2
    B      A      11
    Turned into:
           A      B      C ...
    A      1      1      2
    B      11     null   null
    To do this I run a query like:
    select r, sum(decode(c,'A',Va)) COLA, sum(decode(c,'B',Va)) COLB, sum(decode(c,'C',Va)) COLC, ... sum(decode(c,'XYZ',Va)) COLXYZ from table group by r
    The statement is generated by a script (CFMX) and it works until I reach a query that tries to have 672 values for c, which means 672 columns...
    Oracle doesn't like that: ORA-01467: sort key too long
    I like this approach as it gets the result fast.
    I have tried a different solution at the CFMX level for that specific query, but I got a timeout (querying the table with a loop on co within a loop on ro).
    Is there any workaround?
    I am using Oracle 9i.
    Thank you!

    insert into extracted_data select c, r, v, p from full_data where <specific_clause>
    The values for C come from a query: select distinct c from extracted_data
    and it is the same for R
    R and C are varchar2(3999)
    I suppose that I can split on the first letter of the C column as:
    SELECT r, low.cola, low.colb, . . ., low.colm,
    high.coln, high.colo, . . ., high.colz
    FROM (SELECT r, SUM(DECODE(c, 'A', va)) cola, . . .
    SUM(DECODE(c, 'M', va)) colm
    FROM table
    WHERE c like 'A%'
    GROUP BY r) Alpha_A,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'B%'
    GROUP BY r) Alpha_B,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'C%'
    GROUP BY r) Alpha_C
    (SELECT r, SUM(DECODE(c, 'zN', va)) coln, . . .
    SUM(DECODE(c, 'zZ', va)) colz
    FROM table
    WHERE c like 'Z%'
    GROUP BY r) Alpha_Z
    WHERE alpha_A.r = alpha_B.r and alpha_A.r = alpha_C.r ... and alpha_A.r = alpha_Z.r
    I will have 27 select statements joined... I have to check whether, even like that, I will not reach the limit within one of the select statements
    "in real life"
    select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
    (select r, sum(decode(C, 'Wall, unspecified',cases)) W0 from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
    (select r,
    sum(decode(C, 'Ceramic tiles, indoors',cases)) C0,
    sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)',cases)) C1
    from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
    where GRPW.r = GRPC.r
    order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1
