Using large pages on Solaris 10

Hello,
I have some problems using large pages (16k, 512k, 4M) on two PrimePower 650 systems. I have installed the latest kernel patch, 127111-05.
The pagesize -a command reports 4 page sizes (8k, 16k, 512k, 4M). Even when I try the old manual method, using LD_PRELOAD=mpss.so.1 and an mpss.conf file to force large pages, pmap -sx <pid> shows only 8k pages for stack, heap and anon segments. Only for shared memory are 4M DISM segments used. I don't receive any error message. Two other PrimePower systems with the same kernel release work as expected.
What can I do for further troubleshooting? I have tried different kernel settings, all without effect.
Best regards
JCJ

This problem is now (partially) solved by the Fujitsu-Siemens edition of kernel patch 127111-08. The behaviour is now like Solaris 9: large pages must be forced with LD_PRELOAD=mpss.so.1 and still do not work out of the box for this kind of CPU (Sparc GP64 V only). All available page sizes (8k, 64k, 512k and 4M) can now be used by configuring /etc/mpss.conf. Unfortunately, out-of-the-box large pages still do not work with this CPU and the current kernel patch. This is not so nice, because on large systems with a lot of memory and a lot of large processes there may still be a lot of TLB misses. So I will keep waiting and will test further as soon as new kernel patches become available.
JCJ
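For reference, a minimal sketch of the LD_PRELOAD/mpss.conf method described above (the executable name and page sizes are only examples; see mpss.so.1(1) for the exact configuration-file syntax):
# /etc/mpss.conf -- one line per executable: exec-name:heap-size:stack-size
oracle:4M:64K
# force large pages for one invocation, using the config file
LD_PRELOAD=mpss.so.1 MPSSCFGFILE=/etc/mpss.conf /path/to/app
# or without a config file, via environment variables only
LD_PRELOAD=mpss.so.1 MPSSHEAP=4M MPSSSTACK=64K /path/to/app
# then verify which page sizes the process actually received
pmap -sx <pid>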

Similar Messages

  • How to use large pages in AIX with oracle

    Hi,
    I'm trying to convince Oracle to use large pages on AIX 5.3 but haven't
    succeeded so far.
    I set v_pinshm=1, maxpin%=80, lgpg_regions=448 and lgpg_size=16777216
    using 'vmo', and LOCK_SGA=true in the spfile. After rebooting and starting
    the instance, 'svmon' shows no large pages in use:
    # svmon
                    size      inuse       free        pin    virtual
    memory       4046848    3711708     335140    2911845    1503108
    pg space      524288       5551
                    work       pers       clnt      lpage
    pin          1076604          0        233    1835008
    in use       1503010          0     373690          0
    pgsize                  size   free
    lpage pool 16 MB         448    448
    SGA size is 3G. Why doesn't Oracle use large pages? I have already
    opened a TAR, but maybe an Oracle-on-AIX expert can help me faster than
    Oracle support :)
    regards,
    -ap

    Nice day Andreas,
    do you have a solution for your issue? I think I have a similar problem.
    On AIX I set:
    vmo -r -o lgpg_regions=192 -o lgpg_size=16777216
    vmo -o v_pinshm=1
    In svmon -G I see this output:
    PageSize   PoolSize      inuse      pgsp        pin    virtual
    s    4 KB         -    1104431      2456     685141     874874
    L   16 MB       192          0         0        192          0
    These 16 MB large pages are free, but when I run the BEA WebLogic server with the -Xlp parameter, the application still runs with standard 4K memory pages.
    Thank you very much for any other hint.
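    A quick way to check whether the WebLogic JVM actually received 16 MB pages (the pid is whatever ps reports for the JVM; svmon -P and vmstat -l are standard AIX tools):
    ps -ef | grep java                 # find the WebLogic JVM pid
    svmon -P <pid>                     # segments backed by 16 MB pages are reported with page size L
    vmstat -l                          # alp/flp columns show active and free large pages
    Note also, per the AIX documentation quoted further down in this thread, that a non-root user needs the CAP_BYPASS_RAC_VMM capability, which can be checked with lsuser -a capabilities <user>.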

  • Error: Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page

    Hi,
    I recently came across this error message (coming up in the webadmin Log view):
    ProcessMonitor: Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory
    when running Oracle NoSQL on a Windows 7 64-bit Home Premium system with 8GB of physical RAM.
    I created the store configuration without explicitly passing a value for the parameter memory_mb (it was set to -memory_mb 0), so that the replication group would take as much
    memory as possible, which is reflected in the following line from the store log:
    Creating rg1-rn1 on sn1 haPort=tangel-lapp:5.011 helpers=tangel-lapp:5011 mountpoint=null heapMB=7.007 cacheSize=5.143.454.023 -XX:ParallelGCThreads=4
    I was a bit surprised because I had definitely succeeded in running kvstore in this configuration, with the store using 7007MB of memory.
    Yet it was now only possible to run kvstore when creating the store configuration with less than 2GB of memory.
    After searching Google for the error message mentioned above, I came across some hints regarding activation of huge pages on Windows 7, which mentioned that it can be
    done on systems with at least Windows 7 Professional Edition.
    But finally I found a more helpful hint, referring to the size of the Windows pagefile. As my machine uses an SSD as the system disk, and there are some notes about deactivating the pagefile when
    using an SSD, I had done so some time ago.
    So I activated the pagefile on Windows again, and after doing so the store came up without any problems.
    Maybe it's nothing new to some of you, but as there was nothing about this in either the admin guide or the getting started guide, I just wanted to share this
    piece of information with you.
    Regards
    Thomas

    Charles,
    thanks for the clarification; obviously I had overlooked that in the "Installation prerequisites" chapter of the
    Admin Guide.
    By the way, are there any specific OS-related settings, let's say "best practices", similar to those
    referred to in the documentation of "classic" Oracle databases?
    Regards
    Thomas

  • Using hugepages (Large Pages)

    I am trying to leverage this feature in TimesTen 7.0.5 running on SUSE Linux (kernel 2.6.5-7.244-smp, gcc 3.3.3), but with no success.
    I enabled hugepages in Linux:
    vm.nr_hugepages=1024
    vm.disable_cap_mlock=1
    cat /proc/meminfo shows:
    HugePages_Total: 1024
    HugePages_Free: 1024
    Hugepagesize: 2048 kB
    I followed the instructions and set -linuxLargePageAlignment in the daemon options file (ttendaemon.options):
    -linuxLargePageAlignment 2
    However, when I start TimesTen (first the daemon, and then manually loading the database into RAM), I do not see it use any hugepages; cat /proc/meminfo still shows HugePages_Free: 1024.
    I also set up the /etc/security/limits.conf file properly. One difference from the documentation is that I did not set /proc/sys/vm/hugetlb_shm_group, since I have an older Linux kernel that does not support this parameter. Instead, I set vm.disable_cap_mlock = 1.
    I appreciate any help you can offer.
    Thanks,
    Wing

    Hi Brian,
    Use of large pages does not impact other parameters within TimesTen; you don't need to change anything with regard to things like hash index pages etc., since these are TimesTen 'internal' pages, not O/S memory pages. I also do not think use of large pages impacts other kernel parameters such as shmall (I don't recall ever having to change those when using large pages).
    In order for TimesTen to use large pages, several things are required:
    1. The kernel must be configured to enable a suitable number of large pages (vm/nr_hugepages). The way you are doing this is fine, but obviously for 'production' use you would configure this in /etc/sysctl.conf (or equivalent).
    2. The TimesTen daemon must be told to use large pages via the -linuxLargePageAlignment option (you are doing that too).
    3. The number of large pages configured must be large enough to contain the entire TimesTen datastore segment (PermSize + TempSize + LogBuffSize + ~20 MB). You can see the exact size of the segment via 'ipcs -m'.
    4. The kernel parameter vm/hugetlb_shm_group must be set to the O/S group id of a group that the user running the TimesTen main daemon belongs to. For example, I have a group called timesten (gid = 1000) and the TT instance administrator user is a member of that group, so I have vm.hugetlb_shm_group = 1000.
    With all this in place my datastores use large pages with no problem (as confirmed by cat /proc/meminfo after the datastore is loaded). This is all you need to do, nothing more.
    Chris
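    Pulling the checklist above together, a consolidated sketch might look like this (all values are examples and the pool must be sized to PermSize + TempSize + LogBuffSize + ~20 MB):
    # /etc/sysctl.conf
    vm.nr_hugepages = 1024             # enough 2 MB pages to hold the whole segment
    vm.hugetlb_shm_group = 1000        # gid of a group the TimesTen daemon user belongs to
    # ttendaemon.options
    -linuxLargePageAlignment 2
    # apply, restart the TimesTen daemons, reload the datastore, then verify
    sysctl -p
    ipcs -m                            # size of the TimesTen shared memory segment
    grep Huge /proc/meminfo            # HugePages_Free should drop once the store is loaded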

  • Large pages are not used when databases are started from srvctl on ibm aix

    Hi,
    RAC 11.2.0.3.2 on IBM AIX 6.1 TL07. Mixed databases 10.2.0.5.4 and 11.2.0.3.2
    When started with SQLPLUS, the database instance use LARGE PAGES
    When started with SRVCTL, the database instance doesn't use LARGE PAGES, even though root has the right capabilities:
    lsuser -a capabilities root
    root capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
    The same behaviour is observed for 10.2.0.5.4 and 11.2.0.3.2 databases.
    ORACLE_SGA_PGSZ=16m is exported in /etc/profile.
    I found that :
    1) When the node is restarted (shutdown -r now), the clusterware and all databases (10gR2, 11gR2, ASM, ...) start automatically WITHOUT using LARGE PAGES. After this, databases restarted with SQLPLUS USE LARGE PAGES, and databases restarted with SRVCTL NEVER use LARGE PAGES.
    2) When the clusterware is restarted manually (crsctl stop|start crs), all databases (10gR2, 11gR2, ASM, ...) start automatically and all instances (10gR2 and 11gR2) USE LARGE PAGES. After this, databases ALWAYS use LARGE PAGES (whether started with SQLPLUS or SRVCTL).
    So this problem occurs only after a restart of the host. All is fine after a manual restart of the clusterware.
    My references:
    SGA Not Pinned In The AIX Large Pages When Instance Is Started With Srvctl [ID T369424.1]
    How to enable Large Page Feature on AIX-Based Systems [ID 372157.1]
    Thanks for your help

    The problem here is on IBM AIX (I can't find an AIX equivalent of the 'ulimit -l' max locked memory limit).
    Here is a portion of the original /etc/ohasd script (ulimit -c and -n are already set for AIX):
    start_stack()
    # see init.ohasd.sbs for a full rationale
    case $PLATFORM in
    Linux)
        # MEMLOCK limit is for Bug 9136459
        ulimit -l unlimited
        ulimit -c unlimited
        ulimit -n 65536
        ulimit -c unlimited
        ulimit -n 65536
    esac
    So I put a trace in:
    start_stack()
    # see init.ohasd.sbs for a full rationale
    case $PLATFORM in
    Linux)
        # MEMLOCK limit is for Bug 9136459
        ulimit -l unlimited
        ulimit -c unlimited
        ulimit -n 65536
        ulimit -c unlimited
        ulimit -n 65536
        echo "$(ulimit -a)" > /tmp/coh.log
        echo "$(lsuser -a capabilities root)" >> /tmp/coh.log
    esac
    Here is the result:
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) 4194304
    memory(kbytes) unlimited
    coredump(blocks) unlimited
    nofiles(descriptors) 65536
    threads(per process) unlimited
    processes(per user) unlimited
    root capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
    I am continuing to investigate in this direction...
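    One more thing that may be worth checking (just a guess, not something established above): ORACLE_SGA_PGSZ is exported in /etc/profile, but daemons started by the clusterware at boot do not read /etc/profile, so instances started via srvctl may simply never see that variable. A rough way to compare the two start-up paths (pids are placeholders; the 'e' flag needs a ps that accepts Berkeley-style options):
    ps -ef | grep ora_pmon             # pick the pmon pid of the instance
    ps ewww <pmon_pid>                 # shows the process environment; look for ORACLE_SGA_PGSZ=16m
    svmon -P <pmon_pid>                # check whether the SGA segments are backed by 16 MB pages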

  • Use of MPSS on Solaris 9 and Java 141_03 - not getting 4M pagesizes

    Hi all,
    Does anyone know how to get MPSS to actually use large page sizes with Java 1.4 on SunOS 5.9?
    I have a 1.4.1_03-b02 JVM that is using the -XX:+UseMPSS option along with LD_PRELOAD=/usr/lib/mpss.so.1 and MPSSHEAP=4M, but when I use pmap -Fxs <PID> I always see 8k pages. My system is 5.9 Generic_122300-03 sun4u sparc SUNW,Sun-Fire-480R, and pagesize -a gives me:
    8192
    65536
    524288
    4194304
    so 4M should be OK to use...
    The full JVM options are:
    -XX:+TraceClassUnloading -XX:+UseParallelGC -XX:+UseMPSS -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=2 -XX:MaxTenuringThreshold=3 -XX:+DisableExplicitGC -Dsun.rmi.server.exceptionTrace=true -Xloggc:gc.log -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -server -ms2560m -mx2560m -Xmn1024m -Dsun.rmi.dgc.client.gcInterval=14400000 -Dsun.rmi.dgc.server.gcInterval=14400000
    I have also tried using LD_PRELOAD_32 and LD_PRELOAD_64, but I still only see 8k pages in pmap for the heap...
    Thanks for any ideas. From what I read in the docs I should not need to do anything special to use the MPSS option on SunOS 5.9... so maybe one of my other JVM options is preventing MPSS from being used?
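    For what it's worth, the bare-bones MPSS setup I would expect on Solaris 9 looks like the following (MPSSHEAP/MPSSSTACK are the environment variables documented for mpss.so.1; "MainClass" and the elided option lists are placeholders):
    # 32-bit JVM
    LD_PRELOAD=/usr/lib/mpss.so.1 MPSSHEAP=4M java -server -XX:+UseMPSS ... MainClass
    # a 64-bit JVM needs the 64-bit library instead
    LD_PRELOAD_64=/usr/lib/64/mpss.so.1 MPSSHEAP=4M java -d64 -server -XX:+UseMPSS ... MainClass
    # then check the heap mappings
    pmap -sx `pgrep java` | grep heap
    As the bug report quoted below shows, though, with 1.4.1_0x and the parallel collector the heap stays on 8k pages regardless of this setup.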

    OK, bug 4845026 is giving me a clue:
    Bug ID:      4845026
    Votes      1
    Synopsis      MPSS broken on JDK 1.4.1_02
    Category      hotspot:jvm_interface
    Reported Against      1.4.1_02
    Release Fixed      
    State      Closed, will not be fixed
    Related Bugs      
    Submit Date      08-APR-2003
    Description      
    I am running SPECjAppServer2002 with WebLogic Server 8.1 and JDK 1.4.1_02.
    Here is the version of JDK 1.4.1_02 that I am using:
    <gar07.4> /export/VMs/j2sdk1.4.1_02/bin/java -version
    java version "1.4.1_02"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1_02-b06)
    Java HotSpot(TM) Client VM (build 1.4.1_02-b06, mixed mode)
    The system is a V240 with solaris S9U3:
    <gar07.5> more /etc/release
    Solaris 9 4/03 s9s_u3wos_04 SPARC
    Copyright 2003 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 16 December 2002
    After rebooting the system I use the following command line to start the appserver:
    + /export/VMs/j2sdk1.4.1_02/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps
    -XX:+UseMPSS -XX:+AggressiveHeap -Xms3500m -Xmx3500m -Xmn600m -Dweblogic.oci.sel
    ectBlobChunkSize=1600 -classpath ...
    The process should have some annon segments mapped to 4M, but it doesn't:
    <gar07.7> ps -ef | grep java
    ecuser 541 533 12 10:30:33 ? 0:50 /export/VMs/j2sdk1.4.1_02/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps -
    ecuser 566 343 0 10:31:24 pts/1 0:00 grep java
    <gar07.8> pmap -s 541 | grep 4M
    <gar07.9>
    If I do exactly the same using JDK 1.4.2 instead of JDK1.4.1_02 I am able to get
    4M pages. Here is the command line for 1.4.2:
    + /export/VMs/j2sdk1.4.2/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps -XX
    :+PrintGCDetails -XX:+AggressiveHeap -Xms3500m -Xmx3500m -Dweblogic.oci.selectBl
    obChunkSize=1600 -classpath ...
    And here are my 4M pages:
    <gar07.20> pmap -s `pgrep java` | grep 4M
    1AC00000 282624K 4M rwx-- [ anon ]
    F5800000 16384K 4M rwx-- [ anon ]
    F6800000 4096K 4M rwx-- [ anon ]
    F6C00000 4096K 4M rwx-- [ anon ]
    F7000000 4096K 4M rwx-- [ anon ]
    F9C00000 4096K 4M rwx-- [ anon ]
    Without large pages the time spent in TLB misses for this benchmark is 25% (!)
    Using 4M pages that time is reduce to 3%. WLS8.1 was certified with 1.4.1_02 so
    we cannot use 1.4.2 for the benchmark.
    thanks for your help,
    Fernando Castano
    Posted Date : 2006-04-27 23:04:32.0
    Work Around      
    N/A
    Evaluation      
    Mukesh,
    Can you get someone to look into back porting this fix. Please see below
    attachment for additional info. 4845026 : (P1/S1) New Hotbug Created
    Is a new bug that only exists in JDk 1.4.1_x release. Its fixed in 1.4.2
    release from code related to bug 4737603.
    Thanks Jane & James for the heads up.
    Thanks
    Gary Collins
    Gary,
    I think the bug James referred to is
    4737603 Using MPSS with Parallel Garbage Collection doesn't yield 4mb
    pages
    which was fixed in mantis (according to the bug report).
    Looks like a simple fix to back-port.
    Jane
    xxxxx@xxxxx 2003-04-10
    This problem is partially because of bug 4737603, mainly because there is code cache mapping to large page in 1.4.1(4772288: New MPSS in mantis). This part of code will be ported into 1.4.1 from mantis.
    xxxxx@xxxxx 2003-04-18
    There's 2 things. MPSS wasn't used in the parallel GC collector AND not
    used for the code cache. Both need to be addressed.
    xxxxx@xxxxx 2003-04-21

  • Swapping performance, and large pages

    I am trying to run glpsol (the standalone solver from the GNU Linear Programming Kit) on a very large model. I don't have enough physical memory to fit the entire model, so I configured a lot of swap. glpsol, unfortunately, uses more memory to parse and preprocess the model than it does to actually run the core solver, so my approximately 2-3GB model requires 11GB of memory to get started. (However, much of this access is sequential.)
    What I am encountering is that my new machine, running Solaris 10 (11/06) on a dual-core Athlon (64-bit, naturally) with 2GB of memory, starts up much, much more slowly than my old desktop machine, running Linux (2.6.3) on a single-core Athlon 64 with 1GB of memory. Both machines are using identical SATA drives for swap, though with different motherboard controllers. The Linux machine gets started in about three hours, while Solaris takes 9 hours or more.
    So, here's what I've found out so far, and tried.
    On Solaris, swapping takes place 1 page (4KB) at a time. You can see from this example iostat output that I'm getting about 6-7ms latency from the disk but that each of the reads is just 4KB. (629KB/s / 157 read/s = 4KB/read )
    device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
    cmdk0      157.2   14.0  628.8  784.0  0.1  1.0    6.6   2  99
    cmdk1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
    sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
    Linux has a feature called page clustering, which swaps in multiple 4KB pages at once, currently set to 8 pages (32KB).
    Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
    hda            1270.06     2.99  184.23    6.39 11635.93    76.65    61.45     1.50    7.74   5.21  99.28
    hdc               0.00     0.00    0.40    0.20     4.79     1.60    10.67     0.00    0.00   0.00   0.00
    md0               0.00     0.00    1.00    0.00    11.18     0.00    11.20     0.00    0.00   0.00   0.00
    hdg               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    (11636 sectors/sec = 5818KB/sec. Divided by 184 reads/sec gives just under 32KB.)
    I didn't find anything I could tune in the Solaris kernel that would increase the granularity at which pages are swapped to disk.
    I did find that Solaris supports large pages (2MB on x64, verified with "pagesize -a"), so I modified glpsol to use larger chunks (16MB) for its custom allocator and used memalign to allocate these chunks at 2MB boundaries. Then I rebooted the system and ran glpsol with
    ppgsz -o heap=2MB glpsol ...
    I verified with pmap -s that 2MB pages were being used, but only a very few of them.
    8148:   glpsol --cpxlp 3cljf-5.cplex --output solution-5 --log log-5
             Address       Bytes Pgsz Mode   Mapped File
    0000000000400000        116K    - r-x--  /usr/local/bin/glpsol
    000000000041D000          4K   4K r-x--  /usr/local/bin/glpsol
    000000000041E000        432K    - r-x--  /usr/local/bin/glpsol
    0000000000499000          4K    - rw---  /usr/local/bin/glpsol
    0000000000800000      25556K    - rw---    [ heap ]
    00000000020F5000        944K   4K rw---    [ heap ]
    00000000021E1000          4K    - rw---    [ heap ]
    00000000021E2000         68K   4K rw---    [ heap ]
    00000000021F3000          4K    - rw---    [ heap ]
    00000000087C3000          4K   4K rw---    [ heap ]
    00000000087C4000       2288K    - rw---    [ heap ]
    0000000008A00000       2048K   2M rw---    [ heap ]
    0000000008C00000       2876K    - rw---    [ heap ]
    0000000008ECF000        480K   4K rw---    [ heap ]
    0000000008F47000          4K    - rw---    [ heap ]
    000000003F4E8000          4K   4K rw---    [ heap ]
    000000003F4E9000       5152K    - rw---    [ heap ]
    000000003F9F1000         60K   4K rw---    [ heap ]
    000000003FA00000       2048K   2M rw---    [ heap ]
    000000003FC00000       6360K    - rw---    [ heap ]
    0000000040236000        368K   4K rw---    [ heap ]
    etc.
    There are only 19 large pages listed (a total of 38MB of physical memory).
    I think my next step, if I don't receive any advice, is to try to preallocate the entire region of memory which stores (most of) the model as a single allocation. But I'd appreciate any insight as to how to get better performance, without a complete rewrite of the GLPK library.
    1. When using large pages, is the entire 2MB page swapped out at once? Or is the 'large page' only used for mapping in the TLB? The documentation I read on swap/paging and on large pages didn't really explain the interaction. (I wrote a dtrace script which logs which pages get swapped into glpsol but I haven't tried using it to see if any 2MB pages are swapped in yet.)
    2. If so, how can I increase the amount of memory that is mapped using large pages? Is there a command I can run that will tell me how many large pages are available? (Could I boot the kernel in a mode which uses 2MB pages only, and no 4KB pages?)
    3. Is there anything I should do to increase the performance of swap? Can I give a hint to the kernel that it should assume sequential access? (Would "madvise" help in this case? The disk appears to be 100% active so I don't think adding more requests for 4KB pages is the answer--- I want to do more efficient disk access by loading bigger chunks of data.)
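    As a small helper, this tallies how much memory is currently backed by 2 MB pages (a sketch; the column positions match the pmap -s output shown above):
    pmap -s `pgrep glpsol` | awk '$3 == "2M" { sub(/K$/, "", $2); kb += $2 } END { print kb " KB mapped with 2M pages" }'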

    I suggest posting this to the opensolaris performance discussion group. Also, it would be useful to know whether the application is a 32-bit or 64-bit binary.

  • Jvm startup fails with error when using large -Xmx value

    I'm running JDK 1.6.0_02-b05 on a RHEL5 server. I'm getting an error when starting the JVM with a large -Xmx value. The host has ample memory to succeed, yet it fails. I see this error when starting Tomcat with a bunch of options, but found that it can easily be reproduced by starting the JVM with just -Xmx2048M and -version. So it's this boiled-down test case that I've been examining more closely.
    host% free -mt
                 total       used       free     shared    buffers     cached
    Mem:          6084       3084       3000          0        184       1531
    -/+ buffers/cache:        1368       4716
    Swap:         6143          0       6143
    Total:       12228       3084       9144
    Free reveals the host has 6 GB of RAM, approximately half of which is available. Swap is completely free, meaning I should have access to about 9 GB of memory at this point.
    host% java -version
    java version "1.6.0_02"
    Java(TM) SE Runtime Environment (build 1.6.0_02-b05)
    Java HotSpot(TM) Server VM (build 1.6.0_02-b05, mixed mode)
    java -version succeeds
    host% java -Xmx2048M -version
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
    java -Xmx2048M -version fails. A trace of this reveals that the mmap call fails:
    mmap2(NULL, 2214592512, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
    Any ideas?

    These are the relevant java options we are using:
    -server -XX:-OmitStackTraceInFastThrow -XX:+PrintClassHistogram -XX:+UseLargePages -Xms6g -Xmx6g -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=128m -XX:MaxPermSize=192m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+ExplicitGCInvokesConcurrent -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.awt.headless=true
    This is a web application that is very dynamic and uses lots of database calls to build pages. We use a large clustered cache to reduce trips to the database, so being able to access lots of memory is important to our application.
    I'll explain some of the more uncommon options:
    We use the concurrent garbage collector to reduce stop-the-world GCs. Here are the CMS options:
    -XX:+UseConcMarkSweepGC
    -XX:+CMSClassUnloadingEnabled
    -XX:+CMSPermGenSweepingEnabled
    An explicitly coded GC invokes the concurrent GC instead of a stop-the-world GC:
    -XX:+ExplicitGCInvokesConcurrent
    The default PermSizes were not large enough for our application, so we increased them:
    -XX:PermSize=128m
    -XX:MaxPermSize=192m
    We had some exceptions that were omitting their stack traces. This option fixes that problem:
    -XX:-OmitStackTraceInFastThrow
    We see approximately 10% to 20% performance improvement with large page support. This is an advanced feature:
    -XX:+UseLargePages
    UseLargePages requires OS-level configuration as well. On SUSE 10 we configured the OS's hugepages by executing
    echo "vm.nr_hugepages = 3172" >> /etc/sysctl.conf
    and then rebooting. kernel.shmmax may also need to be modified. If you use large pages, be sure to google for complete instructions.
    When we transitioned to 64-bit we moved from much slower systems with 4GB of RAM to much faster machines with 8GB of RAM, so I can't answer the question of degraded performance. However, with our application, the bigger our cache the better our performance, so if 64-bit is slower we more than make up for it by being able to access more memory. I bet the performance difference depends on the application; you should do your own profiling.
    You can run both the 32-bit version and the 64-bit version on most 64-bit OSes. So if there is a significant difference, run the version you need for the application: if you need the memory, use the 64-bit version; if you don't, use the 32-bit version.
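    For completeness, the pieces needed for -XX:+UseLargePages on Linux usually look like the following; the user name is hypothetical and the numbers are simply the ones from this thread (3172 pages x 2 MB is about 6.2 GB, enough for a 6 GB heap):
    # /etc/sysctl.conf
    vm.nr_hugepages = 3172
    # kernel.shmmax may also need to be >= the heap size in bytes
    # /etc/security/limits.conf -- the JVM user must be allowed to lock that much memory (values in KB)
    appuser  soft  memlock  6496256
    appuser  hard  memlock  6496256
    sysctl -p                          # or reboot
    grep Huge /proc/meminfo            # verify HugePages_Total / HugePages_Free
    java -XX:+UseLargePages -Xms6g -Xmx6g ...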

  • RAC  windows 2003 64bit xeon  - Large Pages

    Hi all,
    I have 2 nodes (Windows 2003 64-bit, dual-core Xeon, 8 GB RAM).
    Oracle recommends using large pages on 64-bit instead of LOCK_SGA, but when I use large pages and set sga_target=5GB, after a few minutes I see an alert in EM about virtual memory paging ("stronicowanie" is Polish for paging), maybe swapping.
    How can I avoid this?
    How can I check that Oracle uses large pages?
    Maybe some interesting links?
    Thanks for any advice

    You don't run a scalability option on a basically unscalable O/S like Winblows, do you?
    Sybrand Bakker
    Senior Oracle DBA

  • Can large page improve timesten performance on aix??

    hello, Chris:
    Can large pages improve TimesTen performance on AIX? We have not tested it yet; I just want to know whether it is worthwhile. Thank you.

    Enabling large page support on AIX may help performance under some circumstances (typically with large datastores). TimesTen does support use of large pages on AIX and no special action is needed on the TimesTen side to utilise large pages. Here is some information on this which should eventually appear in the TimesTen documentation.
    The TimesTen shared memory segment for AIX has been created with the flags
    ( SHM_PIN | SHM_LGPAGE) necessary for large page support for many releases.
    It used to be the case that special kernel flags needed to be enabled so it
    was never documented as being supported. Subsequent AIX releases have made
    enabling large pages a matter of system administration.
    1) Enable capabilities (chuser)
    2) Configure page pool (vmo -r)
    3) Enable memory pinning (vmo -p)
    4) [restart timesten daemons]
    The documentation is quoted below.
    AIX maintains separate 4 KB and 16 MB physical memory pools. You can
    specify the amount of physical memory in the 16 MB memory pool using the vmo
    command. Starting with AIX 5.3, the large page pool is dynamic, so the amount
    of physical memory that you specify takes effect immediately and does not
    require a system reboot. The remaining physical memory backs the 4 KB virtual
    pages.
    AIX treats large pages as pinned memory. AIX does not provide paging
    support for large pages. The data of an application that is backed by large
    pages remains in physical memory until the application completes. A security
    access control mechanism prevents unauthorized applications from using large
    pages or large page physical memory. The security access control mechanism
    also prevents unauthorized users from using large pages for their
    applications. For non-root user ids, you must enable the CAP_BYPASS_RAC_VMM
    capability with the chuser command in order to use large pages. The following
    example demonstrates how to grant the CAP_BYPASS_RAC_VMM capability as the
    superuser:
    # chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE <user id>
    The system default is to not have any memory allocated to the large page
    physical memory pool. You can use the vmo command to configure the size of
    the large page physical memory pool. The following example allocates 4 GB to
    the large page physical memory pool:
    # vmo -r -o lgpg_regions=64 -o lgpg_size=16777216
    To use large pages for shared memory, you must enable the SHM_PIN shmget()
    system call with the following command, which persists across system reboots:
    # vmo -p -o v_pinshm=1
    To see how many large pages are in use on your system, use the vmstat -l
    command as in the following example:
    # vmstat -l
    kthr memory page faults cpu large-page
    r b avm fre re pi po fr sr cy in sy cs us sy id wa alp flp
    2 1 52238 124523 0 0 0 0 0 0 142 41 73 0 3 97 0 16 16
    From the above example, you can see that there are 16 active large pages, alp, and
    16 free large pages, flp.
    Documentation is at:
    http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibmaix.prftungd/doc/prftungd/large_page_ovw.htm
    Chris

  • Large page sizes on Solaris 9

    I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
    # uname -a
    SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
    I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
    and
    "Taming Your Emu to Improve Application Performance (February 2004)"
    http://www.sun.com/blueprints/0204/817-5489.pdf
    The machine claims it supports 4M page sizes:
    # pagesize -a
    8192
    65536
    524288
    4194304
    I've written a very simple program:
    #include <stdlib.h>
    /* print_info() is the page-size reporting helper from the blueprints referenced above
       (not shown here); it prints the physical page backing each address passed to it,
       as seen in the output below. */
    main()
    {
        int sz = 10*1024*1024;
        int *x = (int *)malloc(sz);
        print_info((void **)&x, 1);
        while (1) {
            int i = 0;
            while (i < (sz/sizeof(int))) {
                x[i++]++;
            }
        }
    }
    I run it specifying a 4M heap size:
    # ppgsz -o heap=4M ./malloc_and_sleep
    address 0x21260 is backed by physical page 0x300f5260 of size 8192
    pmap also shows it has an 8K page:
    pmap -sx `pgrep malloc` | more
    10394: ./malloc_and_sleep
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- malloc_and_sleep
    00020000 8 8 8 - 8K rwx-- malloc_and_sleep
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 6288 6288 6288 - 8K rwx-- [ heap ]
    (The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
    I'm running this as root.
    In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
    Here's the output from sysinfo:
    General Information
    Host Name is machinename.lucent.com
    Host Aliases is loghost
    Host Address(es) is xxxxxxxx
    Host ID is xxxxxxxxx
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Manufacturer is Sun (Sun Microsystems)
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    System Model is Blade 1000
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    ROM Version is OBP 4.10.11 2003/09/25 11:53
    Number of CPUs is 2
    CPU Type is sparc
    App Architecture is sparc
    Kernel Architecture is sun4u
    OS Name is SunOS
    OS Version is 5.9
    Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Kernel Information
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    SysConf Information
    Max combined size of argv[] and envp[] is 1048320
    Max processes allowed to any UID is 29995
    Clock ticks per second is 100
    Max simultaneous groups per user is 16
    Max open files per process is 256
    System memory page size is 8192
    Job control supported is TRUE
    Savid ids (seteuid()) supported is TRUE
    Version of POSIX.1 standard supported is 199506
    Version of the X/Open standard supported is 3
    Max log name is 8
    Max password length is 8
    Number of processors (CPUs) configured is 2
    Number of processors (CPUs) online is 2
    Total number of pages of physical memory is 262144
    Number of pages of physical memory not currently in use is 4368
    Max number of I/O operations in single list I/O call is 4096
    Max amount a process can decrease its async I/O priority level is 0
    Max number of timer expiration overruns is 2147483647
    Max number of open message queue descriptors per process is 32
    Max number of message priorities supported is 32
    Max number of realtime signals is 8
    Max number of semaphores per process is 2147483647
    Max value a semaphore may have is 2147483647
    Max number of queued signals per process is 32
    Max number of timers per process is 32
    Supports asyncronous I/O is TRUE
    Supports File Synchronization is TRUE
    Supports memory mapped files is TRUE
    Supports process memory locking is TRUE
    Supports range memory locking is TRUE
    Supports memory protection is TRUE
    Supports message passing is TRUE
    Supports process scheduling is TRUE
    Supports realtime signals is TRUE
    Supports semaphores is TRUE
    Supports shared memory objects is TRUE
    Supports syncronized I/O is TRUE
    Supports timers is TRUE
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Device Information
    SUNW,Sun-Blade-1000
    cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    Does anyone have any idea as to what the problem might be?
    Thanks in advance.
    Mike

    I ran your program on Solaris 10 (yet to be released) and it works.
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- mm
    00020000 8 8 8 - 8K rwx-- mm
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 8192 8192 8192 - 4M rwx-- [ heap ]
    I think you don't have this patch for Solaris 9:
    i386   114433-03
    sparc  113471-04
    Let me know if you encounter problems even after installing this patch.
    Saurabh Mishra
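    A quick way to check whether that patch is already installed, and whether the test program then gets 4M pages (showrev -p lists installed patches on Solaris 9):
    showrev -p | grep 113471           # sparc; grep for 114433 on i386
    pmap -sx `pgrep malloc_and_sleep` | grep heap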

  • How to find memory used by page tables

    Is there a way to find out how much memory is currently being used by page tables? I am new to Solaris ;-)
    I want to quantify the advantages of Intimate Shared Memory (in the context of a large Oracle database with lots of concurrent users). I want to contrast this against Linux which does not have a method of allowing different processes to share page tables that map onto shared memory. Thus, with a large number of concurrent connections where each connection creates a new process that maps onto the Oracle shared memory, a significant amount of memory can be consumed just by the page table entries.

    Yes, a very recent acquisition :-) ... I'm busy working my way through it. I taught myself about mdb today, but I still can't figure out how to get the amount of memory being used by page tables only. The dcmd ::memstat can give me the total amount of memory being used by the kernel, but I would like some more detail than that.
    On Linux you can simply look at /proc/meminfo and it contains a wealth of information. I was hoping Solaris would be similar... but isn't that always the case when doing something new? We hope it's like what we already know :-) Below is an example from Linux showing that 1860 kB have been used to store page table entries.
    [root@makalu ~]# cat /proc/meminfo | grep PageTables
    PageTables: 1860 kB
    [root@makalu ~]#
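    For reference, the dcmd mentioned above can be run non-interactively as root:
    echo "::memstat" | mdb -k
    As far as I know it only breaks memory down into buckets such as Kernel, Anon, Exec and libs, Page cache and Free, so page-table (HAT) structures end up inside the Kernel bucket rather than being reported separately, which matches what was observed above.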

  • Total Shared Global Region in Large Pages = 0 KB (0%)

    Hi,
    I am working on Oracle Database 11.2.0.3,
    Application 12.1.3.
    I see this message in the alert log file:
    ****************** Large Pages Information *****************
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
    Total Shared Global Region size is 12 GB. For optimal performance,
    prior to the next instance restart increase the number
    of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
    system wide to get 100% of the Shared
    Global Region allocated with Large pages
    What should I do?
    Thanks

    You definitely are not using hugepages. That's what the message you mentioned above is telling you:
    Total Shared Global Region in Large Pages = 0 KB (0%)
    It very clearly tells you that you have 0 KB, or 0%, in large pages.
    Note that the terms "large pages" and "hugepages" are synonymous; in Linux, they're called hugepages.
    Also, at the O/S level, you can do:
    cat /proc/meminfo
    to see how many hugepages are allocated/free/reserved.
    Hope that helps,
    -Mark
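    Following the arithmetic in the alert log above: a 12 GB SGA at a 2048 KB page size needs 12 GB / 2 MB = 6144 hugepages, and Oracle asks for at least 6145. A sketch of the usual Linux-side setup (values are examples; the instance owner is assumed to be the oracle user):
    # /etc/sysctl.conf
    vm.nr_hugepages = 6145
    # /etc/security/limits.conf -- let the oracle user lock the SGA (values in KB, or unlimited)
    oracle  soft  memlock  unlimited
    oracle  hard  memlock  unlimited
    sysctl -p                          # or reboot, then restart the instance
    grep Huge /proc/meminfo            # and re-check the Large Pages section of the alert log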

  • How to generate localized chars using code point in Solaris 10?

    Hi All,
    Does anybody know how to generate localized characters (for example Japanese) using code points in Solaris 10?
    Like in the following web page:
    http://www.isthisthingon.org/unicode/index.phtml
    Unicode, Shift-JIS
    U+4e2f 87a3 丯
    U+4e3b 8ee5 主
    U+4e3c 98a5 丼
    U+4f5c 8dec 作
    Thanks,
    Daniel

    I have found a "Code Point Input Method" tool on the following page:
    http://java.sun.com/products/jfc/tsc/articles/InputMethod/inputmethod.html
    Using this tool the user can enter a hexadecimal code point and the output is the corresponding character.
    But this tool doesn't work for me. I run it in the following way:
    # java -jar CodePointIM.jar
    After this, an error message appears:
    Failed to load Main-Class manifest attribute from
    codepointim.jar
    If anybody could help, I would appreciate it.
    Regards,
    Daniel
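    One way to see why 'java -jar' fails here (just a diagnostic suggestion) is to dump the jar's manifest and check whether it declares a Main-Class at all:
    unzip -p CodePointIM.jar META-INF/MANIFEST.MF
    If there is no Main-Class entry, the jar cannot be launched with -jar; it has to be run with java -cp CodePointIM.jar <fully.qualified.MainClass> instead (the class name would be whatever the article documents; it is not given in this thread).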

  • Large page format PDF Cropping

    Hi, I have recently upgraded to CS5. I am using a Mac quad-core Intel Xeon with the Snow Leopard OS.
    My problem is that I'm trying to make a PDF of a large page size from InDesign. I go through the usual channel of print setup and click on Manage Custom Sizes.
    I enter the new size, return to the setup window, ensure everything is at 100% scale, crop marks etc., and save as PostScript. I then take the document into Distiller. The output is an A3 cropped version of the bottom-left corner.
    I have read that Snow Leopard is not dependent on the PDF Writer print driver any more, so should I use my laser printer driver with the bespoke page size? There seems to be little or contradictory information on this subject.
    If anyone could offer some insight I would be most grateful. Thanks.

    This is really an InDesign issue and not PDF, but I'll answer.
    Use File->Export to create the PDF.
    Creation of PDF from Creative Suite applications should ALWAYS be done using Export/Save As and NEVER using printing.
