Swapping performance and large pages

I am trying to run glpsol (the standalone solver from the GNU Linear Programming Kit) on a very large model. I don't have enough physical memory to fit the entire model, so I configured a lot of swap. Unfortunately, glpsol uses more memory to parse and preprocess the model than it does to actually run the core solver, so my approximately 2-3GB model requires 11GB of memory to get started. (However, much of this access is sequential.)
What I am encountering is that my new machine, running Solaris 10 (11/06) on a dual-core Athlon (64-bit, naturally) with 2GB of memory, starts up much, much more slowly than my old desktop machine, running Linux (2.6.3) on a single-core Athlon 64 with 1GB of memory. Both machines use identical SATA drives for swap, though with different motherboard controllers. The Linux machine gets started in about three hours, while Solaris takes 9 hours or more.
So, here's what I've found out so far, and tried.
On Solaris, swapping takes place one page (4KB) at a time. You can see from this example iostat output that I'm getting about 6-7ms latency from the disk, but that each read is just 4KB (628.8KB/s / 157.2 reads/s = 4KB/read):
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk0      157.2   14.0  628.8  784.0  0.1  1.0    6.6   2  99
cmdk1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0

Linux has a feature called page clustering which swaps in multiple 4KB pages at once; on my machine it is currently set to 8 pages (32KB). (The vm.page-cluster sysctl holds the base-2 log of the cluster size, so the default value of 3 gives 2^3 = 8 pages.) Here is the equivalent iostat output from the Linux box:
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
hda            1270.06     2.99  184.23    6.39 11635.93    76.65    61.45     1.50    7.74   5.21  99.28
hdc               0.00     0.00    0.40    0.20     4.79     1.60    10.67     0.00    0.00   0.00   0.00
md0               0.00     0.00    1.00    0.00    11.18     0.00    11.20     0.00    0.00   0.00   0.00
hdg               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

(11636 sectors/sec = 5818KB/sec; divided by 184 reads/sec, that gives just under 32KB per read.)
I didn't find anything I could tune in the Solaris kernel that would increase the granularity at which pages are swapped to disk.
I did find that Solaris supports large pages (2MB on x64, verified with "pagesize -a"), so I modified glpsol's custom allocator to request larger chunks (16MB) and to use memalign to place them on 2MB boundaries; a sketch of this change appears after the pmap output below. Then I rebooted the system and ran glpsol with:
ppgsz -o heap=2MB glpsol ...

I verified with pmap -s that 2MB pages were being used, but only a very few of them:
8148:   glpsol --cpxlp 3cljf-5.cplex --output solution-5 --log log-5
         Address       Bytes Pgsz Mode   Mapped File
0000000000400000        116K    - r-x--  /usr/local/bin/glpsol
000000000041D000          4K   4K r-x--  /usr/local/bin/glpsol
000000000041E000        432K    - r-x--  /usr/local/bin/glpsol
0000000000499000          4K    - rw---  /usr/local/bin/glpsol
0000000000800000      25556K    - rw---    [ heap ]
00000000020F5000        944K   4K rw---    [ heap ]
00000000021E1000          4K    - rw---    [ heap ]
00000000021E2000         68K   4K rw---    [ heap ]
00000000021F3000          4K    - rw---    [ heap ]
00000000087C3000          4K   4K rw---    [ heap ]
00000000087C4000       2288K    - rw---    [ heap ]
0000000008A00000       2048K   2M rw---    [ heap ]
0000000008C00000       2876K    - rw---    [ heap ]
0000000008ECF000        480K   4K rw---    [ heap ]
0000000008F47000          4K    - rw---    [ heap ]
000000003F4E8000          4K   4K rw---    [ heap ]
000000003F4E9000       5152K    - rw---    [ heap ]
000000003F9F1000         60K   4K rw---    [ heap ]
000000003FA00000       2048K   2M rw---    [ heap ]
000000003FC00000       6360K    - rw---    [ heap ]
0000000040236000        368K   4K rw---    [ heap ]
etc.

There are only 19 large pages listed in the full output, for a total of 38MB of physical memory.
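For reference, here is roughly what the allocator change looks like (a minimal sketch; the function name is mine and glpsol's real allocator is more involved):

#include <stdlib.h>
#include <stdio.h>

#define CHUNK_SIZE (16 * 1024 * 1024)   /* 16MB chunks, as described above */
#define LPG_ALIGN  (2 * 1024 * 1024)    /* 2MB large-page boundary on x64 */

/* Allocate one 16MB arena chunk aligned to a 2MB boundary, giving the
   kernel the chance to back it with 2MB pages. */
static void *alloc_chunk(void)
{
    void *p = memalign(LPG_ALIGN, CHUNK_SIZE);
    if (p == NULL)
        fprintf(stderr, "out of memory\n");
    return p;
}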
I think my next step, if I don't receive any advice, is to try to preallocate the entire region of memory which stores (most of) the model as a single allocation. But I'd appreciate any insight as to how to get better performance, without a complete rewrite of the GLPK library.
1. When using large pages, is the entire 2MB page swapped out at once? Or is the 'large page' only used for the TLB mapping? The documentation I read on swap/paging and on large pages didn't really explain the interaction. (I wrote a dtrace script which logs which pages get paged in to glpsol, but I haven't yet used it to check whether any 2MB pages are swapped in.)
2. If so, how can I increase the amount of memory that is mapped using large pages? Is there a command I can run that will tell me how many large pages are available? (Could I boot the kernel in a mode which uses 2MB pages only, and no 4KB pages?)
3. Is there anything I should do to increase the performance of swap? Can I give the kernel a hint that access will be sequential? (Would madvise help in this case? See the sketch below. The disk appears to be 100% busy, so I don't think adding more requests for 4KB pages is the answer; I want to do more efficient disk access by loading bigger chunks of data.)
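For question 3, this is the kind of madvise call I have in mind (a sketch; whether it actually improves swap-in clustering on Solaris is exactly what I'm asking):

#include <stdio.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Hint that a region (e.g. the arena holding the model) will be read
   sequentially, and ask the kernel to start bringing it in. */
static void hint_sequential(void *base, size_t len)
{
    if (madvise((caddr_t)base, len, MADV_SEQUENTIAL) != 0)
        perror("madvise(MADV_SEQUENTIAL)");
    if (madvise((caddr_t)base, len, MADV_WILLNEED) != 0)
        perror("madvise(MADV_WILLNEED)");
}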

I suggest posting this to the opensolaris performance discussion group. Also, it would be useful to know whether the application is a 32-bit or a 64-bit binary.

Similar Messages

  • Investigate book with both landscape and portrait pages

    I have an old book where I successfully swapped landscape and portrait pages, but there are complications that keep me from using it as a model for my current project. Since I no longer remember how I accomplished this, I have to work out again how to build a book with both landscape and portrait pages.
    I know about paragraph tags set to use a new page, and about the table on the Reference page which lists Paragraph Tag Name, Right-Handed Master Page, Left-Handed Master Page, and Range Indicator.
    But this doesn't seem to be enough; something is not working.
    Could you please point me toward resources which discuss how to do this?
    Thank you kindly,
    Theresa

    I think there was a big discussion of this already, either here on the forum or over at the Frameusers.com list (I forget which). Try googling -

  • Optimize performance of large library and masters, and hardware?

    I followed some very useful advice here during the transition from Aperture 2 to 3, about a year ago.  It dealt with keeping things running efficiently, when managing 100,000+ photos in your Aperture library.  Defragmentation, eSata drives, referenced files, etc.
    The best information came from a "Kevin J. Doyle".  I'd be delighted to hear from him how he currently has things set up, including hardware.  (Like me, he is a registered user, but I don't see any way to contact him directly via e-mail, and don't see any web page for him elsewhere.  So this is a message in a bottle. . . .)
    I'm currently wrestling with inadequate hardware (iMac 7,1, with 6GB Ram, and multiple hard drives via fw800), and suffering long hours doing basic housekeeping just to keep things barely adequate.  For example, after I've done a lot of overnight copying and repairing of things, I still have to wait more than ten seconds for an individual image to load on screen before I can edit it.  And if I use brushes, the resulting "processing" can sometimes take up to a minute before I see the particular effect of my brushing.  I waste a lot of time.  (I spend a lot of time keeping customers updated on my slow progress with their pictures too, hoping they don't get too frustrated.)
    I know it's a common observation that computer hardware and software must be routinely maintained (usually via other software, e.g. Cocktail, Disk Warrior, etc.). I think I'm beyond the point where I can eke out better performance from this generation of iMac. I'm thinking about the Mac Pro next. I'm not too concerned for now about the rumored end of Apple development of the Mac Pro. I probably should be, but I'm sure that anything I get hold of in the Mac Pro line from the last two or three generations would be faster and easier to use (for organizing multiple hard drives, in particular).
    I'm also keen to know how to organize my library better.  I have 250,000 and counting photos, referenced.  2TB and more of masters.  I work with thumbnails, but without previews (because they made the library too large to copy in a reasonable time, i.e. overnight via fw800, and did not seem to speed up the editing in any way I could measure).  Currently everything is in one library.  I've tried making a small "work-in-progress" separate library, thinking that might speed things up.  It made no difference.  Tried the same with managed versus referenced.  No difference.  Anything else I could try?
    My masters are located on a very fast (raid-5) and large (6TB) disc, accessed by the library via fw800, because that's all I can get from the iMac.  FWIW, the location of the masters does not seem to have anything to do with the editing performance slowness.  I and others here ran those tests a while ago, when I switched from a managed to a referenced library.

    It is time to move to more modern Sandy Bridge hardware. New Mac Pros with the latest graphics support will almost assuredly be available soon. I suggest waiting to see the choices/prices and then moving to more adequate Thunderbolt-based hardware, very much preferably Mac Pro rather than iMac if the new MP pricing is at all civilized.
    Top iMacs obviously have CPU speed (for those who can tolerate the glossy display), but for heavy image work a true tower with a top graphics card has the appropriate beef to best perform the tasks you are describing.
    Sandy Bridge Xeon cpus are available Q1 but there is however some chance that Apple might delay the MP upgrade until the Ivy Bridge Xeon cpus in the April/May time frame.
    HTH
    -Allen

  • Can large page improve timesten performance on aix??

    Hello, Chris:
    Can large pages improve TimesTen performance on AIX? We have not tested it yet; I just want to know whether it is worthwhile. Thank you.

    Enabling large page support on AIX may help performance under some circumstances (typically with large datastores). TimesTen does support use of large pages on AIX and no special action is needed on the TimesTen side to utilise large pages. Here is some information on this which should eventually appear in the TimesTen documentation.
    The TimesTen shared memory segment on AIX has been created with the flags
    (SHM_PIN | SHM_LGPAGE) necessary for large page support for many releases;
    a sketch of the call appears after the steps below. It used to be the case
    that special kernel flags needed to be enabled, so it was never documented
    as being supported. Subsequent AIX releases have made enabling large pages
    a matter of system administration:
    1) Enable capabilities (chuser)
    2) Configure page pool (vmo -r)
    3) Enable memory pinning (vmo -p)
    4) [restart timesten daemons]
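    On the application side, the shmget() call with the flags mentioned above looks roughly like this (a minimal sketch with an illustrative size, not TimesTen's actual code):
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>
    int main(void)
    {
        /* Size should be a multiple of the 16 MB large page size. */
        size_t size = 256UL * 1024 * 1024;
        int id = shmget(IPC_PRIVATE, size,
                        IPC_CREAT | SHM_PIN | SHM_LGPAGE | 0600);
        if (id == -1) {
            perror("shmget");   /* fails if steps 1-3 above were not done */
            return 1;
        }
        void *addr = shmat(id, NULL, 0);
        printf("attached at %p\n", addr);
        return 0;
    }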
    The documentation is quoted below.
    AIX maintains separate 4 KB and 16 MB physical memory pools. You can
    specify the amount of physical memory in the 16 MB memory pool using the vmo
    command. Starting with AIX 5.3, the large page pool is dynamic, so the amount
    of physical memory that you specify takes effect immediately and does not
    require a system reboot. The remaining physical memory backs the 4 KB virtual
    pages.
    AIX treats large pages as pinned memory. AIX does not provide paging
    support for large pages. The data of an application that is backed by large
    pages remains in physical memory until the application completes. A security
    access control mechanism prevents unauthorized applications from using large
    pages or large page physical memory. The security access control mechanism
    also prevents unauthorized users from using large pages for their
    applications. For non-root user ids, you must enable the CAP_BYPASS_RAC_VMM
    capability with the chuser command in order to use large pages. The following
    example demonstrates how to grant the CAP_BYPASS_RAC_VMM capability as the
    superuser:
    # chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE <user id>
    The system default is to not have any memory allocated to the large page
    physical memory pool. You can use the vmo command to configure the size of
    the large page physical memory pool. The following example allocates 1 GB
    (64 regions x 16 MB) to the large page physical memory pool:
    # vmo -r -o lgpg_regions=64 -o lgpg_size=16777216
    To use large pages for shared memory, you must enable the SHM_PIN shmget()
    system call with the following command, which persists across system reboots:
    # vmo -p -o v_pinshm=1
    To see how many large pages are in use on your system, use the vmstat -l
    command as in the following example:
    # vmstat -l
    kthr    memory           page                    faults         cpu           large-page
    r  b    avm    fre       re pi po fr sr cy       in  sy  cs     us sy id wa   alp flp
    2  1    52238  124523    0  0  0  0  0  0        142 41  73     0  3  97  0   16  16
    From the above example, you can see that there are 16 active large pages, alp, and
    16 free large pages, flp.
    Documentation is at:
    http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibmaix.prftungd/doc/prftungd/large_page_ovw.htm
    Chris

  • You don't have Add and Customize Pages permissions required to perform this action - but i do!

    Hello.
    I am having a similar problem to the one described by this post here:
    http://kiruthik.com/2010/10/05/YouDontHaveAddAndCustomizePagesPermissionsRequiredToPerformThisAction.aspx
    However, his resolution of changing web.config under wwwroot did not fix my issue.
    Here is the situation on a recently upgraded SP2010.
    1) Create a document library. Set the document template to "basic page"
    2) give a user full control on the document library.
    3) Go, under that users credentials, to the document library and create a new document.
    4) Name the document whatever and hit create
    5) a page is displayed which says the following: "Web Part Error:
    A Web Part or Web Form Control on this Page cannot be displayed or imported. You don't have Add and Customize Pages permissions required to perform this action."
    6) Become confused, since giving someone "full control" on the library should obviously carry over to documents they create and then try to edit.
    If i give the user "full control" on the entire site (site actions -> site permissions), then the document pops open an editor which would allow you to add content to the basic page. Obviously, i do not want to give users full control to the
    entire site :)
    Giving the user "Add and Customize Pages" permission on the entire SITE lets them get the popup to edit the file, but this is not desirable either, since then they could "Add, change, or delete HTML pages or Web Part pages,
    and edit the Web site using a Windows SharePoint Services-compatible editor." I do not want them to edit any web pages; I just want them to edit a particular list's web page!
    Things i have tried:
    - Set the permissions on the "MSContentEditor" webpart to have the user have full control of the web part.
    - set the permissions on ALL the web parts to have the user have full control (permissions on the web parts list)
    - Edited web.config to add a line to mark "contentEditorWebPart" as safe, and then restarted IIS.
    - Recreated test lists and was able to duplicate the behaviour
    So I am wondering, is this a sharepoint bug or what? Should a web part not inherit the permissions of the list it is working in? if not, what are the exact items i need to set permissions on to get this webpart to display correctly?
    Setting global site permissions is not an option.
    Another workaround is to make a "wiki page library", which allows you to copy and paste text into a static page and also edit that text. I can move all the content over to a wiki page library, and that's probably what I will end up doing
    in the end, but it's annoying because there is a lot of content to move, and it all has to be done by hand, as you cannot upload documents to a wiki AFAIK. Also, the wiki uses a different editor; it's inline instead of a popup in IE, which is probably why it
    works.
    This may not be a bug and could be a simple permissions problem or something else, but I have been looking at this for hours and it doesn't make any sense to me.
    Any help appreciated thanks!

    Hi,
    Yes, you must have the "Add and Customize Pages" permission at the site level to perform this action; this permission does not exist at the list permission level.
    You can add a new permission level which includes only the "Add and Customize Pages" permission, and then create a new SharePoint group with this permission level.
    Add the users to that SharePoint group and they will get the "Add and Customize Pages" permission at the site level (site permissions).
    Additionally, to add/edit a page the users also need the "Contributor" permission level at the list level.
    (Note that when you grant full control to users at the list permission level, they do not get any permission at the site level.)
    If you need further help, please let me know.
    Hope this helps
    Thanks!
    Stanfford
    Everything will be fine.

  • Performance and memory problems

    Hello.
    I've installed Oracle AS 10g in Redhat Linux EL 3. The machine has 3 GB of memory, 2 GB of swap. Its a dual Xeon 2.6 Ghz.
    Imagine I reboot the machine. Everything works fine. The problem is that within 6 days it fills all the swap and almost all of the available memory. When it reaches this point, the machine gets very slow... To correct the situation temporarily, I need to perform a reboot. But this is Linux, not Windows...
    Does anyone have an idea why this is happening? I've never seen anything like this... after 6 days, out of 5 GB only 30 MB remain available... this should not be possible.
    Help please !

    We have the same problem. We read a lot about the VM in RHAS 3.0. The VM was improved, but with bugs. We found a lot of inactive pages that are not deallocated properly after they are cleaned. This makes the kernel thread "kswapd" work very hard, taking all the CPU during I/O.
    There is a known bug related to the "kswapd" thread in the RHAS 3.0 kernel. Red Hat promises to solve the problem in upcoming releases (I really doubt it, because there have already been 3 releases since the announcement).
    We have seen some strange behavior too... Machines with Linux RH AS 3.0 + Oracle products (Oracle DB and Oracle iAS) are freezing randomly. We think the shared memory is allocating kernel-space pages, but we haven't found any evidence yet.
    The workarounds (not tested yet):
    - Upgrade to the latest kernel.
    or
    - Use the hugemem kernel (even when using less than 16GB of RAM, some guys reported that this solve the problem)
    or
    - Compiling a clean kernel directly from kernel.org. In this case we don't have support from Oracle, but maybe it works around the problem until RH publishes fixed VM code in their kernel.

  • Large page sizes on Solaris 9

    I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
    # uname -a
    SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
    I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
    and
    "Taming Your Emu to Improve Application Performance (February 2004)"
    http://www.sun.com/blueprints/0204/817-5489.pdf
    The machine claims it supports 4M page sizes:
    # pagesize -a
    8192
    65536
    524288
    4194304
    I've written a very simple program:
    #include <stdio.h>
    #include <stdlib.h>
    /* print_info() is the page-size reporting helper from the Sun blueprint above */
    extern void print_info(void **addrs, int count);
    int main(void)
    {
        int sz = 10 * 1024 * 1024;
        int *x = malloc(sz);
        print_info((void **)&x, 1);
        while (1) {
            int i = 0;
            while (i < (sz / sizeof(int)))
                x[i++]++;
        }
    }
    I run it specifying a 4M heap size:
    # ppgsz -o heap=4M ./malloc_and_sleep
    address 0x21260 is backed by physical page 0x300f5260 of size 8192
    pmap also shows it has an 8K page:
    pmap -sx `pgrep malloc` | more
    10394: ./malloc_and_sleep
    Address   Kbytes   RSS  Anon  Locked  Pgsz  Mode   Mapped File
    00010000       8     8     -       -    8K  r-x--  malloc_and_sleep
    00020000       8     8     8       -    8K  rwx--  malloc_and_sleep
    00022000    3960  3960  3960       -    8K  rwx--  [ heap ]
    00400000    6288  6288  6288       -    8K  rwx--  [ heap ]
    (The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
    I'm running this as root.
    In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (among others). memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size; even so, pmap reports the memcntl'd memory as using an 8K page size.
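    For completeness, the memcntl attempt looks roughly like this (a sketch based on the blueprint above; note that addr must itself be aligned to the requested page size for the advice to take effect):
    #include <sys/types.h>
    #include <sys/mman.h>
    /* Ask the HAT to back [addr, addr+len) with 4M pages. */
    static int advise_4m(caddr_t addr, size_t len)
    {
        struct memcntl_mha mha;
        mha.mha_cmd = MHA_MAPSIZE_VA;
        mha.mha_flags = 0;
        mha.mha_pagesize = 4 * 1024 * 1024;
        return memcntl(addr, len, MC_HAT_ADVISE, (caddr_t)&mha, 0, 0);
    }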
    Here's the output from sysinfo:
    General Information
    Host Name is machinename.lucent.com
    Host Aliases is loghost
    Host Address(es) is xxxxxxxx
    Host ID is xxxxxxxxx
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Manufacturer is Sun (Sun Microsystems)
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    System Model is Blade 1000
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    ROM Version is OBP 4.10.11 2003/09/25 11:53
    Number of CPUs is 2
    CPU Type is sparc
    App Architecture is sparc
    Kernel Architecture is sun4u
    OS Name is SunOS
    OS Version is 5.9
    Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Kernel Information
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    SysConf Information
    Max combined size of argv[] and envp[] is 1048320
    Max processes allowed to any UID is 29995
    Clock ticks per second is 100
    Max simultaneous groups per user is 16
    Max open files per process is 256
    System memory page size is 8192
    Job control supported is TRUE
    Savid ids (seteuid()) supported is TRUE
    Version of POSIX.1 standard supported is 199506
    Version of the X/Open standard supported is 3
    Max log name is 8
    Max password length is 8
    Number of processors (CPUs) configured is 2
    Number of processors (CPUs) online is 2
    Total number of pages of physical memory is 262144
    Number of pages of physical memory not currently in use is 4368
    Max number of I/O operations in single list I/O call is 4096
    Max amount a process can decrease its async I/O priority level is 0
    Max number of timer expiration overruns is 2147483647
    Max number of open message queue descriptors per process is 32
    Max number of message priorities supported is 32
    Max number of realtime signals is 8
    Max number of semaphores per process is 2147483647
    Max value a semaphore may have is 2147483647
    Max number of queued signals per process is 32
    Max number of timers per process is 32
    Supports asyncronous I/O is TRUE
    Supports File Synchronization is TRUE
    Supports memory mapped files is TRUE
    Supports process memory locking is TRUE
    Supports range memory locking is TRUE
    Supports memory protection is TRUE
    Supports message passing is TRUE
    Supports process scheduling is TRUE
    Supports realtime signals is TRUE
    Supports semaphores is TRUE
    Supports shared memory objects is TRUE
    Supports syncronized I/O is TRUE
    Supports timers is TRUE
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Device Information
    SUNW,Sun-Blade-1000
    cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    Does anyone have any idea as to what the problem might be?
    Thanks in advance.
    Mike

    I ran your program on Solaris 10 (yet to be released) and it works.
    Address   Kbytes   RSS  Anon  Locked  Pgsz  Mode   Mapped File
    00010000       8     8     -       -    8K  r-x--  mm
    00020000       8     8     8       -    8K  rwx--  mm
    00022000    3960  3960  3960       -    8K  rwx--  [ heap ]
    00400000    8192  8192  8192       -    4M  rwx--  [ heap ]
    I think you are missing this patch for Solaris 9:
    i386: 114433-03
    sparc: 113471-04
    Let me know if you encounter problems even after installing this patch.
    Saurabh Mishra

  • Total Shared Global Region in Large Pages = 0 KB (0%)

    Hi,
    I am working on Oracle Database 11.2.0.3, application 12.1.3.
    I see this message in the alert log file:
    ****************** Large Pages Information *****************
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
    Total Shared Global Region size is 12 GB. For optimal performance,
    prior to the next instance restart increase the number
    of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
    system wide to get 100% of the Shared
    Global Region allocated with Large pages
    What should I do?
    Thanks

    You definitely are not using hugepages. That's what the message you mentioned above is telling you:
    Total Shared Global Region in Large Pages = 0 KB (0%)
    It very clearly tells you that 0 KB (0%) of the shared global region is in large pages.
    Note that the terms "large pages" and "hugepages" are synonymous. In Linux, they're called hugepages.
    Also, at the O/S level, you can do:
    cat /proc/meminfo
    to see how many hugepages are allocated/free/reserved.
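    If you prefer to check from a program, here is a minimal C sketch equivalent to grep'ing /proc/meminfo for the hugepages counters (illustrative only, not an Oracle tool):
    #include <stdio.h>
    #include <string.h>
    /* Print the HugePages_* and Hugepagesize lines from /proc/meminfo. */
    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (f == NULL) { perror("/proc/meminfo"); return 1; }
        while (fgets(line, sizeof line, f))
            if (strstr(line, "Huge") != NULL)
                fputs(line, stdout);
        fclose(f);
        return 0;
    }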
    Hope that helps,
    -Mark

  • Windows 7 - Large Pages

    While I was performing some benchmarks on my W520, I became aware that there is a feature in Windows 7 called Large Pages. Setting this policy for either a single user or a group greatly reduces the TLB overhead when translating memory addresses for applications. The normal page size is 4KB; Large Pages sets the page size to 2MB. The smaller size was useful when there was only a relatively small physical memory space available in the system (Windows 95, etc.). However, as the addressable physical page space becomes larger, the overhead of translating addresses across page boundaries starts to be significant. Linux has an equivalent function.
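    For anyone who wants to experiment, here is a minimal C sketch of how an application requests large pages on Windows (my own illustration; it assumes the "Lock pages in memory" right discussed above has already been granted to the user):
    #include <windows.h>
    #include <stdio.h>
    /* Enable SeLockMemoryPrivilege in our token; this only succeeds if the
       "Lock pages in memory" right has been assigned to the user. */
    static BOOL enable_lock_memory_privilege(void)
    {
        HANDLE tok;
        TOKEN_PRIVILEGES tp;
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
            return FALSE;
        if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                                  &tp.Privileges[0].Luid))
            return FALSE;
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
        CloseHandle(tok);
        return GetLastError() == ERROR_SUCCESS;
    }
    int main(void)
    {
        SIZE_T lp = GetLargePageMinimum();   /* typically 2MB on x64 */
        if (lp == 0 || !enable_lock_memory_privilege()) {
            fprintf(stderr, "large pages unavailable\n");
            return 1;
        }
        /* Size must be a multiple of the large-page minimum. */
        void *buf = VirtualAlloc(NULL, 16 * lp,
                                 MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                                 PAGE_READWRITE);
        printf("allocation %s\n", buf ? "succeeded" : "failed");
        return 0;
    }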
    Here's a screenshot of where the setting (Lock pages in memory) is located: [screenshot not preserved]
    The memory bandwidth benchmark using SiSoftware Sandra 2012 showed a performance increase of 2.04% for normal operations and a 2.9% increase for floating point operations. This was with only one user enabled. Enabling all users in the system brought an additional 0.5% performance increase. PCMARK7 also showed a corresponding increase in benchmark performance numbers.
    Thanks to Huberth for pointing me to the SiSoftware Sandra 2012 benchmarking software and the memory bandwidth warning.
    This is an extract from a memory bandwidth benchmark run:
    Integer Memory Bandwidth
    Assignment : 16.91GB/s
    Scaling : 17GB/s
    Addition : 16.75GB/s
    Triad : 16.72GB/s
    Data Item Size : 16bytes
    Buffering Used : Yes
    Offset Displacement : Yes
    Bandwidth Efficiency : 80.36%
    Float Memory Bandwidth
    Assignment : 16.91GB/s
    Scaling : 17GB/s
    Addition : 16.73GB/s
    Triad : 16.74GB/s
    Data Item Size : 16bytes
    Buffering Used : Yes
    Offset Displacement : Yes
    Bandwidth Efficiency : 80.34%
    Benchmark Status
    Result ID : Intel Core (Sandy Bridge) Mobile DRAM Controller (Integrated Graphics); 2x 16GB Crucial CT102464BF1339M16 DDR3 SO-DIMM (1.33GHz 128-bit) PC3-10700 (9-9-9-24 4-33-10-5)
    Computer : Lenovo 4270CTO ThinkPad W520
    Platform Compliance : x64
    Total Memory : 31.89GB
    Memory Used by Test : 16GB
    No. Threads : 4
    Processor Affinity : U0-C0T0 U2-C1T0 U4-C2T0 U6-C3T0
    System Timer : 2.24MHz
    Page Size : 2MB
    W520, i7-2820QM, BIOS 1.42, 1920x1080 FHD, 32 GB RAM, 2000M NVIDIA GPU, Samsung 850 Pro 1TB SSD, Crucial M550 mSata 512GB, WD 2TB USB 3.0, eSata Plextor PX-LB950UE BluRay
    W520, i7-2760QM, BIOS 1.42 1920x1080 FHD, 32 GB RAM, 1000M NVIDIA GPU, Crucial M500 480GB mSata SSD, Hitachi 500GB HDD, WD 2TB USB 3.0

    What kind of software do you use for the conversion to pdf? Adobe Reader can't create pdf files.

  • Oracle 10gR2 LARGE PAGE SIZE on Windows 2008 x64 SP2

    Hello Oracle Experts,
    What are the advantages of Large Page Size and how would I know when my DB will benefit from Large Page Sizes?
    My understanding is that on Windows x64 the default 8KB page size will now be 2MB. Will this speed up access to the buffer cache? If so, is there a latch wait I can monitor before vs. after to verify that the large page size has improved performance?
    My database server has 256GB RAM and the SGA is set to 180GB. I am quite sure the overhead involved in maintaining a large number of 8KB allocations (as opposed to 2MB) must be high; how can I monitor this?
    I am planning to follow the procedure here:
    http://download.oracle.com/docs/html/B13831_01/ap_64bit.htm#CHDGFJJD
    The DB is for SAP on a 8CPU/48 core IBM x3950. For some reason SAP documentation does not mention anything about this registry setting or even if Large Page Size is supported in the SAP world.
    Part 2 : I notice that more recent Oracle patch sets (example 25) turn NUMA OFF by default. Why is this and what is the impact of disabling NUMA on a system like x3950 (which is a NUMA based server)?
    My understanding is Oracle would no longer know that some memory is Local (and therefore fast) and some memory is Remote (therefore slow). Overall I am guessing this could cause a real performance issue on this DB.
    -points for detailed reply!
    thanks a lot -

    Hello
    Thanks for your reply. I am very interested to hear further about the limitations of Windows 2008 and the benefits of Oracle Linux.
    Generally we find that Windows 2008 has been pretty good, a big improvement over Windows 2003 (bluescreens don't occur ever etc)
    Can you advise further about Large Page Size for the buffer cache? I assume this applies on both Windows and Linux (I am guessing there is a similar parameter for 10gR2 on Linux).
    SAP have not yet fully supported Oracle 11g so this is why 11g has not made it into the SAP world yet.
    Can you also please advise about NUMA? regardless of whether we run Linux or Windows this setting needs to be considered.
    Thanks

  • RAC  windows 2003 64bit xeon  - Large Pages

    Hi all
    I have 2 nodes (Windows 2003 64-bit, dual-core Xeon, 8GB RAM).
    Oracle recommends using large pages on 64-bit instead of LOCK_SGA, but when I use large pages and set sga_target=5GB, after a few minutes I see an alert in EM about virtual memory paging ("stronicowanie"), maybe swapping.
    How can I avoid this?
    How can I check that Oracle is using large pages?
    Maybe some interesting links?
    Thanks for any advice.

    You don't run a scalability option on a basically unscalable O/S like Winblows, do you?
    Sybrand Bakker
    Senior Oracle DBA

  • Maximum Large Pages

    Hi
    I have RHEL 6.4 with 128GB RAM
    I have a big database.
    The database is the only service on this box.
    What is the optimal number of large pages I can have on the box (so as not to disturb the OS)?
    Thanks for your help

    > I have RHEL 6.4 with 128GB RAM
    > I have big database
    > The database is the only service on this box.
    > what is the optimal number of large pages I can have in the box ?
    Depends what you want.
    Based on the information you have supplied, I would recommend a HugePages value between 0 GB and 127 GB.
    How large is your application?
    What kind of application is it?
    What is the locality in its data access?  Sequential?  Random? 
    How long should any cached data be kept in memory just in case it is needed again?
    How much data does the application need to read and write?
    The DB install guide has some recommendations on calculating the size of the shared global area (SGA); use them.
    Do not enable Automatic Memory Management (AMM) because AMM is just a poor-man's HugePages.
    Does your DB use many table joins?  Few joins but lots and lots of data rows?
    Remember your SGA must fit entirely within the HugePages area else you will get massively-degraded performance.

  • Large Pages setting

    Hello all,
    We recently created a new 11.2.0.3 database on Red Hat Linux 5.7. It's running in ASMM
    Database settings
    sga_target = 10G
    sga_max = 12G
    memory_target = 0
    memory_max = 0
    pga_aggregate_target = 12G
    Host Specs
    Total RAM = 128GB
    Total CPUs = 4 @ 2.27GHz
    Cores/CPU = 8
    During instance startup, we get the following message.
    ****************** Large Pages Information *****************
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
    Total Shared Global Region size is 12 GB. For optimal performance,
    prior to the next instance restart increase the number
    of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
    system wide to get 100% of the Shared
    Global Region allocated with Large pages
    Has anyone seen this recommendation message during startup and acted upon it? if yes, what kind of modification was performed.
    Thanks for your time.

    Starting with 11.2.0.2 a new parameter was added, use_large_pages. Whenever the database instance starts, it checks the hugepages configuration and produces this warning if Oracle would allocate part of the SGA with hugepages and the rest with normal 4k pages.
    The USE_LARGE_PAGES parameter has three possible values: "true" (default), "only", and "false".
    The default value of "true" preserves the current behavior of trying to use hugepages if they are available on the OS. If there are not enough hugepages, small pages will be used for SGA memory.
    This may lead to ORA-4031 errors due to the remaining hugepages going unused and more memory being used by the kernel for page tables.
    Setting it to "false" means do not use hugepages.
    A setting of "only" means do not start the instance if hugepages cannot be used for the whole SGA (to avoid an out-of-memory situation).
    There is not much written about this yet, but I was able to find some docs on Metalink and in blogs. Hope this helps.
    Large Pages Information in the Alert Log [ID 1392543.1]
    USE_LARGE_PAGES To Enable HugePages In 11.2 [ID 1392497.1]
    NOTE:361323.1 - HugePages on Linux: What It Is... and What It Is Not...
    Bug 9195408 - DB STARTUP DOES NOT CHECK WHETHER HUGEPAGES ARE ALLOCATED- PROVIDE USE_HUGEPAGES
    http://agorbyk.wordpress.com/2012/02/19/oracle-11-2-0-3-and-hugepages-allocation/
    http://kevinclosson.wordpress.com/category/use_large_pages/
    http://kevinclosson.wordpress.com/category/oracle-automatic-memory-management/

  • I need a clarification : Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. JSPs also use EJBs to retrieve results.
    I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through EJBs. Session scope is 'Page' only.
    Now I am planning to use EJBs (for business logic) instead of helper classes. But before doing that, I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) for better performance,
    2) for less network traffic, and
    3) for better container resource utilization.
    I thought that if I use EJBs, the network traffic would increase, because every call to an EJB is a remote call.
    Please give detailed explanation.
    thank you,
    sudheer

    <i>Please suggest which method (helper classes or EJBs) is preferable:
    1) for better performance</i>
    EJB's have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB. Often considerably. If you plan on making your 70 helper classes EJB's you should expect to see a dramatic decrease in maximum throughput.
    <i>2) for less network traffic</i>
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJB's and JSP's are co-located there won't be any other additional overhead there either. (You are co-locating your JSP's and EJB's, aren't you?)
    <i>3) for better container resource utilization</i>
    Again, the EJB version will consume a lot more container resources.

  • Beginners guide screwed | It is impossible to edit large pages on the wiki

    When you're trying to edit a large page on the wiki you get this message:
    WARNING: This page is 40 kilobytes long; some browsers may have problems editing pages approaching or longer than 32kb. Please consider breaking the page into smaller sections.
    Firefox can handle the 40kb, but not the 107kb of the Beginners Guide, for example.
    And when you make a change, if you don't preview it you get a shiny blank page instead of the document.
    I was trying to add one line to the Official installation guide and ended up breaking it into two parts (I moved the appendix to another page) to be able to recover its contents.
    Some other person from #archlinux who was helping me with this issue also accidentally overwrote the text of the Beginners guide, and we can't roll it back. So, can any wiki admin roll back the changes to the Beginners guide? And tell us (the normal users) how to edit long pages?
    Thanks
    Last edited by __void__ (2009-01-21 16:34:47)

    __void__ wrote:
    [quotes the post above in full]
    Some other person here, sorry about that accidental Beginner's guide trash up
    Mr.Elendig wrote: For future reference, don't edit the whole page at once; just edit a section of it. When you are logged in, every section has an 'edit' button/link.
    Got it!
    Last edited by zaggynl (2009-01-22 09:35:03)
