Large Cluster Size

I believe cluster size is limited to 436 members by default. We would like to increase this to about 550 temporarily. Can someone remind me of the setting to do this? Any critical drawbacks? (I know it's not ideal.) Thanks in advance... Andrew.

There were some changes made in a 3.7.1.x release to support large clusters. Previously there were some issues with the preferred MTU, which was 64K, causing OOM in the Publisher/Receiver.

Similar Messages

  • Maximum cluster size?

    Hi all,
    I'm using LV 7.1 and am trying to access a function in a DLL.  The function takes a pointer to a data structure that is large and complex.  I have calculated that the structure is just under 15kbytes.
    I have built the structure as a cluster and then attempted to pass the cluster into the call library function node with this parameter set as adapt to type.  I get memory error messages and when analysing the size of the cluster I am producing it appears to be much smaller than required.
    I have also tried creating an array and passing that, but I think that won't work as it needs to be a fixed-size array, which can only be achieved, according to what I've read, by changing it to a cluster, and this is limited to a size of 256.
    Does anybody have a suggestion of how to overcome this?
    If any more detail is required then I'm happy to supply. 
    Dave.

    John.P wrote:
    Hi Dave,
    You have already received some good advice from the community but I wanted to offer my opinion.
    I am unsure as to why the cluster size will not exceed 45, my only suggestion is to try using a type cast node as this is the suggested method for converting from array to cluster with greater than 256 elements.
    If this still does not work then in this case I would recommend that you do use a wrapper DLL, it is more work but due to the complexity of the cluster you are currently trying to create I would suggest this is a far better option.
    Have a look at this KB article about wrapper DLL's.
    Hope this helps,
    John P
    John, I am having a hard time converting an array of greater than 256 elements to a cluster.  I attempted to use the type cast node you suggested and didn't have any luck.  Please see the attached files... I’m sure I’m doing something wrong.  The .txt file has a list of 320 elements.  I want to run the VI so that in the end I have a cluster containing equal number of integer indicators/elements inside.  But more importantly, I don't want to have to build a cluster of 320 elements.  I'd like to just change the number of elements in the .txt file and have the cluster automatically be populated with the correct number of elements and their values.  No more, no less.   One of the good things about the convert array to cluster was that you could tell the converter how many elements to expect and it would automatically populate the cluster with that number of elements (up to 256 elements only).  Can the type cast node do the same thing?  Do you have any advice?  I posted this question with more detail to my application at the link below... no luck so far.  
    http://forums.ni.com/ni/board/message?board.id=170​&thread.id=409766&view=by_date_ascending&page=1

  • Array to Cluster, Automate cluster size?

    I often use the Array to Cluster VI to quickly change an array of data into a cluster of data that I can then connect to a Waveform Chart.  Sometimes the number of plots can be different which results in extra plots (full of zeros) on the graph, or missing plots if the array is larger than the current cluster size.  I know I can right-click on the node and set the cluster size (up to 256) manually.  I could also use a case structure with as many Array to Cluster nodes as I need, set them individually and wire an Array Size to the case structure selector but that's kind of a PITA. 
    My question is whether or not anyone knows a way to control the cluster size value programmatically. It seems that if I can right-click it and do it manually there must be some way to automate it, but I sure can't figure it out. It would be nice if you could simply wire your desired value right into an optional input on the node itself. Any ideas will be much appreciated.
    Using LabVIEW: 7.1.1, 8.5.1 & 2013

    I'm under the impression it's impossible.  See this idea for related discussion.
    Tim Elsey
    LabVIEW 2010, 2012
    Certified LabVIEW Architect

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: 7 cache instances per node running with a 2G heap using a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm results in no substantial GC pauses, and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best practices recommendations and I’m wondering what others’ thoughts are about the configuration outlined above.
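    For reference, the sizing above works out roughly as follows (numbers taken from this post; the 28-JVM figure corresponds to splitting the same memory across the ~512M heaps suggested by the best practices document):
        16 GB total RAM
        - 1 GB reserved for the system
        - 0.5 GB for the proxy instance (localstorage=false)
        = 14.5 GB left for cache instances
        14.5 GB / 2 GB heap  =  ~7 cache instances per node
        (a high-units value of 1.5 GB leaves ~0.5 GB of headroom per 2 GB heap)
        2 GB / 512 MB = 4, so the same memory at ~512M heaps would mean 7 x 4 = 28 separate JVMs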
    Thanks,
    - Allen Bettilyon


  • Array to cluster with adjustable cluster size

    Hi all
    Here I have a dynamic 1D array and I need to convert it into a cluster, so I use the Array to Cluster function. But I notice that the cluster size is a fixed value. How can I adjust the cluster size according to the 1D array size?
    Can anyone please advise?
    Thanks....

    I won't disagree with any of the previous posters, but would point out a conversion technique I just recently tried and found to work well for my own particular purposes.  I've given the method a pretty good workout and not found any obvious flaws yet, but can't 100% guarantee the behavior in all settings.
    Anyhow, I've got a fairly good sized project that includes quite a few similar but distinct clusters of booleans.  Each has been turned into a typedef, complete with logical names for each cluster element.  For some of the data processing I do, I need to iterate over each boolean element in a cluster, do some evaluations, and generate an output boolean cluster.  I first structured the code to use the "Cluster to Array" primitive, then auto-index over the resulting array of booleans, perform the evaluations and auto-index an output array-of-booleans, then finally convert back using the "Array to Cluster" primitive.  I, too, was kinda bothered by having to hardcode cluster sizes in there...
    I found I could instead use the "Typecast" primitive to convert the output array back to my cluster. I simply fed the input cluster into the middle terminal to define the datatype. Then the output cluster is automatically the right size and right datatype.
    This still is NOT an adjustable cluster size, but it had the following benefits:
    1. If the size of my typedef'ed cluster changes during development by adding or removing boolean elements, none of the code breaks!  I don't have to go searching through my code for all the "Array to Cluster" primitives, identifying the ones I need to inspect, and then manually changing the cluster size on them one at a time!
    2. Some of my processing functions were quite similar to one another.  This method allowed me to largely reuse code.  I merely had to replace the input and output clusters with the appropriate new typedef.  Again, no hardcoded cluster sizes hidden in "Array to Cluster" primitives, and no broken code.
    Dunno if your situation is similar, but it gave me something similar to auto-sizing at programming time.  (You should test the behavior when you feed arrays of the wrong size into the "Typecast" primitive.  It worked for my app's needs, but you should make sure it's right for yours.)
    -Kevin P.

  • Cluster Size issues

    Hi,
    I am running SQL Server 2008 R2 on Windows Server 2008 R2. My databases are residing on a RAID5 configuration.
    Recently I have had to replace one of the HDDs in the RAID with a different HDD. The result is that I now have two HDDs with a physical and logical cluster size of 512 bytes and one with 3072 bytes physical and 512 logical.
    Since the rebuild, the databases and SQL have been fine. I could read and write to and from the databases, and backups had no issues either. Today, however (2 months down the line from the RAID rebuild), I could no longer access the databases and backups did not work either. I kept getting this error when trying to detach the database or back it up:
    TITLE: Microsoft SQL Server Management Studio
    Alter failed for Database 'dbname'.  (Microsoft.SqlServer.Smo)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Cannot use file 'D:\dbname.MDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Cannot use file 'D:\dblogname_1.LDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Database 'dbname' cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server errorlog for details.
    ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5178)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500&EvtSrc=MSSQLServer&EvtID=5178&LinkId=20476
    BUTTONS:
    OK
    My Temporary solution to this was to move the DB to my C: drive and attach it from there. This is not ideal as I am losing the redundancy of the RAID. 
    Can anybody tell me if it is because of the hard drive with the larger sector size? (This is the only logical explanation I have.) And why would it only happen now?
    I am sorry if this is the wrong Forum for this question

    Apparently it was not until recently that the database spilled over to that new disk. No, I don't know too much about RAIDs.
    But it seems obvious that you need to make sure that all disks in the RAID have the same sector size.
    Erland Sommarskog, SQL Server MVP, [email protected]
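    (Side note: one way to check what sector sizes a given volume reports on Windows is fsutil from an elevated command prompt. Exact field names vary by Windows version and hotfix level, so treat the lines below as illustrative only.)
        C:\> fsutil fsinfo ntfsinfo D:
        ...
        Bytes Per Sector  :              512
        Bytes Per Physical Sector :      <value reported for the volume>
        ...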

  • How do I set up iPhoto '11 version 9.2.1 to send emails with medium/Large/actual size? I try sending photos to a publisher and iPhoto automatically shrinks them to about 40 kB.  Publisher needs between 0.5 and 7 MB.  My ISP has a limit of 12MB

    How can I get to email photos at higher resolution?
    My "iPhoto" or "Mail" automatically reduces photos to about 40 KB .
    My photos, taken with a  common Panasonic Lumix DMC-TZ20 include flora which I send to a publisher.
    I am fairly new to my MacBook (Mac OS X Version 10.6.8). I have been using it for a year.
    I found I could not successfully send Large or Full size photos.
    A few days ago I downloaded a new version of iPhoto (iPhoto '11 version 9.2.1).
    Again, although I get the option of choosing to send small/medium/large/full size photos, it actually only sends them as "Small" (i.e. about 40 KB), which is useless to publish. If I drag a picture onto the desktop (where my EXIF tells me it is full size) and then attach it to an email by dragging, again the sent email is at "small" size (~40 KB).
    My ISP (Westnet) tells me that their maximum accepted size is 12 MB.
    I suspect that I have failed to set up iPhoto for this purpose.  (I am well into retirement but, although geriatric, did study University Physics about half a century ago)

    Also, just a note. iMac PPC is about iMacs that were available up to early 2006. iMac Intel is for iMacs that were made available from early 2006 to the present. Forums for what you have in your profile include:
    https://discussions.apple.com/community/mac_os/mac_os_x_v10.6_snow_leopard
    https://discussions.apple.com/community/ilife/iphoto
    https://discussions.apple.com/community/notebooks/macbook
    Please consider posting and reviewing those locations in the future when you have a question that pertains to one of those products. You'll be much more likely to get a response there. This is a user-to-user forum, and you have to seek knowledge in the areas that are most likely to have the topic of your interest.

  • How do I retrieve binary cluster data from a file without the presence of the cluster size in the data?

    Hey guys,  I'm trying to read a binary data file created by a C++ program that didn't append sizes to the structures that were used when writing out the data.  I know the format of the structures and have created a cluster typedef in LabView.  However the unflatten from string function expects to see additional bytes of data identifying the size of the cluster in the file.   This just plain bites!  I need to retrieve this data and have it formatted correctly without doing it manually for each and every single element.  Please Help!

    Small update. I have fixed-size arrays in the clusters of data in the file, and I have been using arrays in my typedefs in LabView, just defining x number of indexes in the arrays and setting them as the default value under Data Operations. LabView may maintain the default values, but it still treats an array as an unknown-size data type. This is what causes LabView to expect the cluster size to be appended to the file contents during an unflatten. I can circumvent this in the simplest of cases by using clusters of the same type of data in LabView to represent a fixed-size array in the file. However, I can't go around using clusters of data to represent fixed-size arrays BECAUSE I have several multi-dimensional arrays of data in the file. To represent that as a cluster I would have to add a single value for every element to such a cluster and make sure they are lined up sequentially according to every dimension of the array. That gets mighty hairy, mighty fast.
    EDIT:  Didn't see that other reply before I went and slapped this in here.  I'll try that trick and let you know how it works.......
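    As an aside, for readers unfamiliar with why there is no size information in the file: a C/C++ writer typically just dumps a fixed-layout structure, along the lines of the hypothetical sketch below (field names invented for illustration), so the reader has to know the exact layout, padding and byte order in advance instead of reading a length prefix.
    #include <stdio.h>
    /* Hypothetical record: every field, including the arrays, has a fixed,
       compile-time size, so the file holds raw bytes and no length prefixes. */
    struct record {
        int    id;
        double gains[4];        /* fixed-size 1D array */
        short  samples[2][8];   /* fixed-size multi-dimensional array */
    };
    int main(void)
    {
        struct record r = {0};
        FILE *f = fopen("data.bin", "wb");
        if (f == NULL)
            return 1;
        /* Each entry occupies exactly sizeof(struct record) bytes on disk. */
        fwrite(&r, sizeof r, 1, f);
        fclose(f);
        return 0;
    }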

  • Large page sizes on Solaris 9

    I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
    # uname -a
    SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
    I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
    and
    "Taming Your Emu to Improve Application Performance (February 2004)"
    http://www.sun.com/blueprints/0204/817-5489.pdf
    The machine claims it supports 4M page sizes:
    # pagesize -a
    8192
    65536
    524288
    4194304
    I've written a very simple program:
    #include <stdlib.h>
    /* print_info() is the poster's helper that reports the physical page
       backing an address; its output is shown below. */
    extern void print_info(void **addrs, int count);
    int main(void)
    {
        int sz = 10 * 1024 * 1024;
        int *x = (int *)malloc(sz);
        print_info((void **)&x, 1);
        while (1) {
            size_t i = 0;
            while (i < sz / sizeof(int))
                x[i++]++;
        }
    }
    I run it specifying a 4M heap size:
    # ppgsz -o heap=4M ./malloc_and_sleep
    address 0x21260 is backed by physical page 0x300f5260 of size 8192
    pmap also shows it has an 8K page:
    pmap -sx `pgrep malloc` | more
    10394: ./malloc_and_sleep
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- malloc_and_sleep
    00020000 8 8 8 - 8K rwx-- malloc_and_sleep
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 6288 6288 6288 - 8K rwx-- [ heap ]
    (The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
    I'm running this as root.
    In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
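    For what it's worth, the memcntl approach mentioned above usually looks something like the minimal sketch below, assuming the documented Solaris MC_HAT_ADVISE / MHA_MAPSIZE_VA interface from the MPSS blueprints; the address range generally has to be aligned to the requested page size, and error handling is trimmed.
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/mman.h>
    int main(void)
    {
        size_t sz = 10 * 1024 * 1024;
        char *buf = valloc(sz);              /* page-aligned allocation */
        struct memcntl_mha mha;
        size_t i;
        mha.mha_cmd = MHA_MAPSIZE_VA;        /* advise a preferred page size for this range */
        mha.mha_flags = 0;
        mha.mha_pagesize = 4 * 1024 * 1024;  /* request 4M pages */
        if (memcntl((caddr_t)buf, sz, MC_HAT_ADVISE, (caddr_t)&mha, 0, 0) != 0)
            perror("memcntl");
        /* Touch the memory, then sleep so the mapping can be inspected with pmap -sx <pid>. */
        for (i = 0; i < sz; i++)
            buf[i]++;
        for (;;)
            sleep(60);
    }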
    Here's the output from sysinfo:
    General Information
    Host Name is machinename.lucent.com
    Host Aliases is loghost
    Host Address(es) is xxxxxxxx
    Host ID is xxxxxxxxx
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Manufacturer is Sun (Sun Microsystems)
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    System Model is Blade 1000
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    ROM Version is OBP 4.10.11 2003/09/25 11:53
    Number of CPUs is 2
    CPU Type is sparc
    App Architecture is sparc
    Kernel Architecture is sun4u
    OS Name is SunOS
    OS Version is 5.9
    Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Kernel Information
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    SysConf Information
    Max combined size of argv[] and envp[] is 1048320
    Max processes allowed to any UID is 29995
    Clock ticks per second is 100
    Max simultaneous groups per user is 16
    Max open files per process is 256
    System memory page size is 8192
    Job control supported is TRUE
    Savid ids (seteuid()) supported is TRUE
    Version of POSIX.1 standard supported is 199506
    Version of the X/Open standard supported is 3
    Max log name is 8
    Max password length is 8
    Number of processors (CPUs) configured is 2
    Number of processors (CPUs) online is 2
    Total number of pages of physical memory is 262144
    Number of pages of physical memory not currently in use is 4368
    Max number of I/O operations in single list I/O call is 4096
    Max amount a process can decrease its async I/O priority level is 0
    Max number of timer expiration overruns is 2147483647
    Max number of open message queue descriptors per process is 32
    Max number of message priorities supported is 32
    Max number of realtime signals is 8
    Max number of semaphores per process is 2147483647
    Max value a semaphore may have is 2147483647
    Max number of queued signals per process is 32
    Max number of timers per process is 32
    Supports asyncronous I/O is TRUE
    Supports File Synchronization is TRUE
    Supports memory mapped files is TRUE
    Supports process memory locking is TRUE
    Supports range memory locking is TRUE
    Supports memory protection is TRUE
    Supports message passing is TRUE
    Supports process scheduling is TRUE
    Supports realtime signals is TRUE
    Supports semaphores is TRUE
    Supports shared memory objects is TRUE
    Supports syncronized I/O is TRUE
    Supports timers is TRUE
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Device Information
    SUNW,Sun-Blade-1000
    cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    Does anyone have any idea as to what the problem might be?
    Thanks in advance.
    Mike

    I ran your program on Solaris 10 (yet to be released) and it works.
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- mm
    00020000 8 8 8 - 8K rwx-- mm
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 8192 8192 8192 - 4M rwx-- [ heap ]
    I think you need this patch for Solaris 9:
    i386: 114433-03
    sparc: 113471-04
    Let me know if you encounter problems even after installing this patch.
    Saurabh Mishra

  • Creating .pdf from a ppt or .doc in acrobat 9 resulting in strangely large file sizes

    I am printing to .pdf from Microsoft Word and PowerPoint (previously Office 07, now Office 10) and the file sizes are sometimes doubling in size. Also, for some larger documents, multiple .pdfs are generated, as if Acrobat is choosing where to cut off a document due to large file size and generating secondary, tertiary, etc., docs to continue the .pdf. The documents have placed images (Excel charts, etc.) but they are not enormous - that is, until they are generated into a .pdf. I had hoped that when I upgraded to Office 10 this issue would go away, but unfortunately it hasn't. Any ideas? Thoughts? Thanks so much!

    One thing that can happen with MS Office applications when an image contains transparency is that the "image" is actually composed of multiple one-pixel images when converted to PDF. This can be confirmed by examining the file. Try selecting the images using the TouchUp Object tool to see if what looks like a single image is entirely selected.

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium to large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes or more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Photo Gallery Module - Customize large image size

    I need to figure out how to customize the size of the full image on the Photo Gallery Module. I have a couple of clients using this module that do not understand how to crop and resize images before uploading them.
    I understand this can be done through CSS, but the issue I foresee is that the original image is still used which could have a large file size. Is there a workaround for resizing the full size image and also reducing the actual file size? Either through CSS, JS, or maybe even the new BC Liquid engine? Any help would be appreciated. Thanks.

    You can go to the file manager and use the resize option on images in there.
    BUT
    They really need to do this outside of such systems. BC will only edit based on numbers and math; if the client gets the wrong proportional values it will screw up the images. The system will not crop the images, only scale them, so things can turn out squashed, etc.
    If images are being used on the web as an indicative representation of something for a site, they need to be done properly and outside the system before they come in.

  • Large Canvas Sizes

    I recently purchased a product called Roots Magic for an upcoming family reunion (Sep 13th). This program produces a Family Tree Wall Chart with a canvas size of 10.6" x 396.9" (1' x 33'), and growing. I have pictures within the chart so you can put a face to the name. I am trying to get it into some format from which a local printer can produce a reasonably readable chart. The Roots Magic generated chart is great in format, quality and size of lettering.
    I can only export it as a .EMF file as the other options cause RootsMagic to crash.
    I can't use Adobe Illustrator, InDesign, or PDF (which are the preferred formats of the local printers) because their largest canvas size is 227". I tried importing partial EMF files; however, their print size was too small and I couldn't scale the objects without blowing the canvas limit again. Besides, generating the tree and printing it in pieces seems to alter the flow of the generations.
    Adobe Photoshop has a larger canvas size, I think; however, it does not import EMF files.
    I have most of the Adobe products, but I can't seem to find a way to produce something for local commercial printing. Multiple PDF pages, scaled to the same size, appears to be an alternative, but I can't seem to find a way to produce this. RootsMagic does not have a lot of alternatives.
    My options seem to be:
    a) find some way to break down the resulting object into multiple canvas sizes
    b) buy my own printer/plotter.
    To make matters worse, the chart is growing daily as I receive added updates.
    Does anyone have any suggestions?

    Ian,
    Realistically, the folks who come to mind who can output anything 33 feet long are the folks who make large banners or perhaps billboard and vehicle graphics, and resolution is probably not going to be satisfactory. Also note that a Dead Sea Scroll is awkward to handle unless you have a wall to unroll and tack it to. Also note that the graphic file size could exceed 1 GB.
    >buy my own printer/plotter.
    Forget about purchasing a printer that can handle this unless multiple pieces are acceptable. For example, an Epson 13" printer (figure spending several hundred dollars) can handle roll-cut printouts, but only up to 44" wide. But if you have to print out more than one or two sets, you're going to be spending a lot of time with this; and a lot of money on ink (and paper if you choose something better than the rather miserable, non-stable 28 lb or 32 lb inkjet bond). But then again, if you have a production house or printer output this for you, you may be spending $10-$12 or more per square foot.
    >I can only export it as a .EMF file as the other options cause RootsMagic to crash.
    Get tech support to figure it out, so that if multiple pages is a reasonable option, you can chop the art down and place each segment on a page of a page layout document, such as InDesign or QuarkXPress, if you can save to a compatible format, such as .pdf, .eps .tif or .jpg.
    Neil

  • Web Flash Gallery: Problem setting Large Images size

    When using the Lightroom Flash Gallery I am not able to change the size of the Large Images (Web | Appearance | Large Images | Size). When I set the size to Medium or Small, the large images always end up the same size as when set to Large. Only when setting the size to Extra Large is there a slight change in size.
    How can I change the Large Images size to Medium and Small?
    Jørgen Johanson

    jools2005 wrote:
    Adobe please address this issue.
    This is a User to User forum, not official Adobe support. Yes, Adobe staff may participate from time to time, but not often enough that you can depend on them reading your post. Therefore, I suggest that you submit a bug report using the facilities provided at https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform

  • New FAQ Entry on JVM Parameters for Large Cache Sizes

    I've posted a new [FAQ entry|http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60] on JVM parameters for large cache sizes. The text of it is as follows:
    What JVM parameters should I consider when tuning an application with a large cache size?
    If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your heap size with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and to avoid excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
    Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
    Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
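    As a purely illustrative example (the heap value, classpath and main class below are placeholders, not recommendations), a launch line combining the options above might look like:
    java -d64 -server \
         -Xms4g -Xmx4g \
         -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=55 \
         -XX:NewSize=1024m -XX:MaxNewSize=1024m \
         -XX:+DisableExplicitGC \
         -cp app.jar com.example.LargeCacheApp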
    You may also want to refer to the following articles:
    * Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
    * The most complete list of -XX options for Java 6 JVM
    * My Favorite Hotspot JVM Flags

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially in light of your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    This indicates that the driver or the DB abends the connection due to a timeout.
    Check the wait_timeout MySQL variable on the server and increase it.
