Large Pages setting

Hello all,
We recently created a new 11.2.0.3 database on Red Hat Linux 5.7. It's running with ASMM (Automatic Shared Memory Management).
Database settings
sga_target = 10G
sga_max_size = 12G
memory_target = 0
memory_max_target = 0
pga_aggregate_target = 12G
Host Specs
Total RAM = 128GB
Total CPUs = 4 @ 2.27GHz
Cores/CPU = 8
During instance startup, we get the following message.
****************** Large Pages Information *****************
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total Shared Global Region size is 12 GB. For optimal performance,
prior to the next instance restart increase the number
of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
system wide to get 100% of the Shared
Global Region allocated with Large pages
Has anyone seen this recommendation message during startup and acted upon it? If so, what kind of modification did you perform?
Thanks for your time.

Starting with 11.2.0.2, a new parameter, USE_LARGE_PAGES, was introduced. Whenever the database instance starts, it checks the hugepages configuration and produces this warning if Oracle can only allocate part of the SGA with hugepages and has to back the rest with normal 4 KB pages.
The USE_LARGE_PAGES parameter has three possible values: "true" (default), "only", and "false".
The default value of "true" preserves the current behavior of trying to use hugepages if they are available on the OS. If there are not enough hugepages, small pages will be used for SGA memory.
This may lead to ORA-4031 errors due to the remaining hugepages going unused and more memory being used by the kernel for page tables.
Setting it to "false" means do not use hugepages
A setting of "only" means do not start up the instance if hugepages cannot be used for the whole memory (to avoid an out-of-memory situation).
There is not much written about this yet, but I was able to find some documents on My Oracle Support (MetaLink) and a few blogs. Hope this helps.
Large Pages Information in the Alert Log [ID 1392543.1]
USE_LARGE_PAGES To Enable HugePages In 11.2 [ID 1392497.1]
HugePages on Linux: What It Is... and What It Is Not... [ID 361323.1]
Bug 9195408 - DB STARTUP DOES NOT CHECK WHETHER HUGEPAGES ARE ALLOCATED - PROVIDE USE_HUGEPAGES
http://agorbyk.wordpress.com/2012/02/19/oracle-11-2-0-3-and-hugepages-allocation/
http://kevinclosson.wordpress.com/category/use_large_pages/
http://kevinclosson.wordpress.com/category/oracle-automatic-memory-management/
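To make the recommendation concrete: hugepages require memory_target/memory_max_target to stay at 0 (AMM and hugepages are incompatible), which you already have. Below is a minimal sketch of the change for the 12 GB region in your alert log, assuming the instance runs as the OS user oracle; note 361323.1 above walks through the full procedure.
# Reserve enough 2 MB hugepages to cover the whole 12 GB shared region
# (6145 pages, exactly as the alert log recommends):
echo "vm.nr_hugepages = 6145" >> /etc/sysctl.conf
sysctl -p
# Let the instance owner lock that much memory (values in KB; 6145 * 2048 = 12584960):
echo "oracle soft memlock 12584960" >> /etc/security/limits.conf
echo "oracle hard memlock 12584960" >> /etc/security/limits.conf
# Optional on 11.2.0.2+: refuse to start unless the whole SGA fits in hugepages.
# SQL> ALTER SYSTEM SET use_large_pages=only SCOPE=spfile;
# Restart the instance, then re-check the Large Pages section of the alert log and:
grep Huge /proc/meminfo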

Similar Messages

  • Windows 7 - Large Pages

    While I was performing some benchmarks on my W520, I became aware that there is a function in Windows 7 called Large Pages. Essentially, setting this policy for either a single user or a group greatly reduces the TLB overhead when translating memory addresses for applications in memory. The normal page size is 4KB; Large Pages sets the page size to 2MB. The smaller page size was useful when only a relatively small physical memory space was available in the system (Windows 95, etc.). However, as the addressable physical page space becomes larger, the overhead of translating addresses across page boundaries starts to be significant. Linux has an equivalent function.
    The setting is "Lock pages in memory", located in the Local Security Policy editor under Local Policies > User Rights Assignment.
    The memory bandwidth benchmark using SiSoftware Sandra 2012 showed a performance increase of 2.04% for normal operations and 2.9% for floating-point operations. This was with only one user enabled. Enabling all users in the system brought an additional 0.5% performance increase. PCMARK7 also showed a corresponding increase in its benchmark numbers.
    Thanks to Huberth for pointing me to the SiSoftware Sandra 2012 benchmarking software and the memory bandwidth warning.
    This is an extract from a memory bandwidth benchmark run:
    Integer Memory Bandwidth
    Assignment : 16.91GB/s
    Scaling : 17GB/s
    Addition : 16.75GB/s
    Triad : 16.72GB/s
    Data Item Size : 16bytes
    Buffering Used : Yes
    Offset Displacement : Yes
    Bandwidth Efficiency : 80.36%
    Float Memory Bandwidth
    Assignment : 16.91GB/s
    Scaling : 17GB/s
    Addition : 16.73GB/s
    Triad : 16.74GB/s
    Data Item Size : 16bytes
    Buffering Used : Yes
    Offset Displacement : Yes
    Bandwidth Efficiency : 80.34%
    Benchmark Status
    Result ID : Intel Core (Sandy Bridge) Mobile DRAM Controller (Integrated Graphics); 2x 16GB Crucial CT102464BF1339M16 DDR3 SO-DIMM (1.33GHz 128-bit) PC3-10700 (9-9-9-24 4-33-10-5)
    Computer : Lenovo 4270CTO ThinkPad W520
    Platform Compliance : x64
    Total Memory : 31.89GB
    Memory Used by Test : 16GB
    No. Threads : 4
    Processor Affinity : U0-C0T0 U2-C1T0 U4-C2T0 U6-C3T0
    System Timer : 2.24MHz
    Page Size : 2MB
    W520, i7-2820QM, BIOS 1.42, 1920x1080 FHD, 32 GB RAM, 2000M NVIDIA GPU, Samsung 850 Pro 1TB SSD, Crucial M550 mSata 512GB, WD 2TB USB 3.0, eSata Plextor PX-LB950UE BluRay
    W520, i7-2760QM, BIOS 1.42 1920x1080 FHD, 32 GB RAM, 1000M NVIDIA GPU, Crucial M500 480GB mSata SSD, Hitachi 500GB HDD, WD 2TB USB 3.0

    What kind of software do you use for the conversion to pdf? Adobe Reader can't create pdf files.

  • How do I set margins in Numbers, where did page set up go?

    Since I have upgraded to Mavrick, everything has changed in Pages and Numbers, the main reason I left Microsoft was because of the constantly changing software with each update.  The Inspector is gone and in Numbers the left hand column listing all my tables has disappeared/changed to a row which I have to continuously scroll through.  Arrrrgh!  Is there any way to get that back?  Also . . .   Can anyone tell me where I can find the Page set up or set margins in Numbers? 

    Hi 'cat,
    It appears the main push behind the latest 'upgrade' to the iWork applications was to provide compatibility/similarity between the iOS versions and the Mac versions. A lot of features were lost in the change, some of which have made their way back into Numbers 3, etc., others that are scheduled to be brought back, and still others for which there's been no indication.
    Inspector functions are now handled via the formatting button (paint brush), as well as the button bar that appears when a cell is selected.
    With the greater focus on iOS, features pertaining to printing to paper have been largely forgotten. There's no way I'm aware of to set document margins in Numbers 3.
    The features you mention are still available in Numbers 2.3, which will, if you had it installed earlier, still be on your Mac. You'll find it in a folder named iWork '09, located in the Applications folder.
    Any document you've opened in and saved from Numbers 3 will need to be re-opened in N3 and Exported as a Numbers '09 document before it can be opened in Numbers '09, and may be missing some features not supported in Numbers 3.
    While it's probably not a long-term solution, returning to Numbers '09 may provide an interim fix while waiting for N3 to get the updates you need.
    Meantime, add your voice to the rest requesting the reinstatement of any features you consider essential. Use the Provide Numbers Feedback item in the Numbers menu to make your request.
    Regards,
    Barry

  • I have 10.6.8 and have installed two new printers, an HP 8610 and an Epson 7880, and can not find the Page Set-up menu anywhere in the applications I am trying to print from. There are no page sizes or paper types; it appears to be locked on a 13x19 size.

    Hi to the Mac Folks,
    I have 10.6.8 and have installed two new printers, an HP 8610 and an Epson 7880, and can not find the Page Set-up menu anywhere in the applications I am trying to print from. This is regardless of which printer is selected.
    I primarily print photos from Photoshop CS5. The term Page Set-up has gone missing from the File pull-down menu, so I can't make any choices. There are no page sizes or paper types; it appears to be locked on a 13x19 paper format, either too large or too small.
    Saw a 2008 locked issue about this however none of the help fit my situation, options discussed are not available to me.
    Preview has no "Page Setup" - or does it?
    Does the constant struggle with computer compatibility weirdness issues ever end, or is it an enslavement scheme?
    Woody

    Hey,
    if you know the name(s) of the root folder(s) you want to access (eg. by making a note on the Windows side) then you can make them appear by just doing Shift-Command-G in Finder ("Go to Folder…"), and entering the full path to the required folders.  You can then navigate all the contents as normal.
    HTH,
    S.

  • Multiple Signatures on Large Plan Sets? - Open to Other Security Measures

    Good afternoon,
    I'm looking for a way to circulate large plan sets to three reviewers for digital signature. The PDFs are, and must remain, formatted to clearly print at 24''x36''. As some plan sets are 12+ pages, this is too large to pass along through our email.
    We do our plan review through an online portal, where each department marks up the PDF, exports comments as FDF, and then the case manager imports all comments into one PDF. I'd prefer to find a solution for signing plans using this portal too, but I imagine it will have to be done offline and then uploaded.
    I'm also open to any other solutions for locking and securing an approved document. If not a signature, maybe a timestamp or watermark on each page? Anything more secure than a simple stamp that can so easily be altered.

  • Displaying large result sets in Table View - request for patterns

    When providing a table of results from a large data set from SAP, care needs to be taken in order to not tax the R/3 database or the R/3 and WAS application servers.  Additionally, in terms of performance, results need to be displayed quickly in order to provide sub-second response times to users.
    This post is my thoughts on how to do this based on my findings that the Table UI element cannot send an event to retrieve more data when paging down through data in the table (hopefully a future feature of the Table UI Element).
    Approach:
    For data retrieval, we need an RFC with search parameters that retrieves a maximum number of records (say 200) and a flag indicating whether 200 results were returned. 
    In terms of display, we use a table UI Element, and bind the result set to the table.
    For sorting, when they sort by a column, if we have less than the maximum search results, we sort the result set we already have (no need to go to SAP), but otherwise the RFC also needs to have sort information as parameters so that sorting can take place during the database retrieval.  We sort it during the SQL select so that we stop as soon as we hit 200 records.
    For filtering, again, if less than 200 results, we just filter the results internally, otherwise, we need to go to SAP, and the RFC needs to have this parameterized also.
    If the requirement is that the user must look at more than 200 results, we need to have a button on the screen to fetch the next 200 results.  This implies that the RFC will also need to have a start point to return results from.  Similarly, a previous 200 results button would need to be enabled once they move beyond the initial result set.
    Limitations of this are:
    1.     We need to use custom RFC function as BAPI’s don’t generally provide this type of sorting and limiting of data.
    2.     Functions need to directly access tables in order to do sorting at the database level (to reduce memory consumption).
    3.     It’s not a great interface to add buttons to “Get next/previous set of 200”.
    4.     Obviously, based on where you are getting the data from, it may be better to load the data completely into an internal table in SAP, and do sorting and filtering on this, rather than use the database to do it.
    Does anyone have a proven pattern for doing this or any improvements to the above design?  I’m sure SAP-CRM must have to do this, or did they just go with a BSP view when searching for customers?
    Note – I noticed there is a pattern for search results in some documentation, but it does not exist in the sneak preview edition of Developer Studio. Has anyone had any exposure to this?
    Update - I'm currently investigating whether we can create a new value node and use a supply function to fill the data. It may be that when we bind this to the table UI element, it will call the supply function incrementally as it requires more data, and hence this could be a better solution.

    Hi Matt,
    I'm afraid the supplyFunction will not help you get out of this, because it is only called if the node is invalid or gets invalidated again. The number of elements a node contains defines the number of elements the table uses to determine the overall number of table rows. Something quite similar to what you want already exists in the WD runtime for internal use. As you've surely noticed, only "visibleRowCount" elements are initially transferred to the client. If you scroll down one or multiple lines, the following rows are transferred internally on demand. But this doesn't really help you, since:
    1. You don't get this event at all and
    2. Even if you did get the event, since the number of node elements determines the table's overall row count, the event would never request elements with an index greater than (number of node elements - 1).
    You can mimic the desired behaviour by hiding the table footer and creating your own buttons for pagination and scrolling.
    Assume you have 10 displayed rows and 200 overall rows. What you need to implement the desired behaviour is:
    1. A context attribute "maxNumberOfExpectedRows" type int, which you would set to 200.
    2. A context attribute "visibleRowCount" type int, which you would set to 10 and bind to table's visibleRowCount property.
    3. A context attribute "firstVisibleRow" type int, which you would set to 0 and bind to table's firstVisibleRow property.
    4. The actions PageUp, PageDown, RowUp, RowDown, FirstRow and LastRow, which are used for scrolling and the corresponding buttons.
    The action handlers do the following:
    PageUp: firstVisibleRow -= visibleRowCount (must be >=0 of course)
    PageDown: firstVisibleRow += visibleRowCount (first + visible must be < maxNumberOfExpectedRows)
    RowDown/Up: firstVisibleRow++/-- with the same restrictions as in page "mode"
    FirstRow/LastRow is easy, isn't it?
    Since you know which sections of elements have already been "loaded" into the dataSource node, you can fill the necessary sections on demand when the corresponding action is triggered.
    For example, if you initially display elements 0..9 and go to the last row, you load from maxNumberOfExpectedRows (200) - visibleRowCount (10), so you would request entries 190 to 199 from the backend.
    A drawback is that the BAPIs/RFCs still have to be capable of processing such "section selecting".
    Best regards,
    Stefan
    PS: And this is meant as a workaround and does not really replace your pattern request.

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to users using JSPs. The problem is that the result set is too big, so I can't display all the records in a single push. I want to show the results page by page, say 25 per page. Now for every page I have to fetch data from the database, meaning there are going to be many database calls, which is not advisable. Or I can cache data in a CachedRowSet to reduce database calls, but in this case I have to store all the data in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable result set (JDBC 2.0+).
    The logic would go like this, assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3, counting from zero)
    - issue the full SQL
    - scroll through only the rows in the current page (e.g. rows 91-120)
    - copy the page's rows to value objects
    - close the ResultSet, Statement, and Connection
    In the above example, you would scroll to row 90 using rs.absolute(90) and then read the next 30 rows.
    The efficiency comes from the fact that you're using a scrollable result set. By using this, only the rows that you scroll through are extracted from the database. In some simple testing with my data, the scrollable result set was about 10x faster.
    Good luck!

  • Oracle 10gR2 LARGE PAGE SIZE on Windows 2008 x64 SP2

    Hello Oracle Experts,
    What are the advantages of Large Page Size and how would I know when my DB will benefit from Large Page Sizes?
    My understanding is that on Windows x64 the 8 KB default page size will now be 2 MB. Will this speed up access to the buffer cache? If so, is there a latch wait that I can monitor before vs. after to verify that the large page size has improved performance?
    My database server has 256 GB RAM and the SGA is set to 180 GB. I am quite sure the overhead involved in maintaining a large number of 8 KB allocations (as opposed to 2 MB) must be high - how can I monitor this?
    I am planning to follow the procedure here:
    http://download.oracle.com/docs/html/B13831_01/ap_64bit.htm#CHDGFJJD
    The DB is for SAP on an 8-CPU/48-core IBM x3950. For some reason the SAP documentation does not mention anything about this registry setting, or even whether Large Page Size is supported in the SAP world.
    Part 2 : I notice that more recent Oracle patch sets (example 25) turn NUMA OFF by default. Why is this and what is the impact of disabling NUMA on a system like x3950 (which is a NUMA based server)?
    My understanding is Oracle would no longer know that some memory is Local (and therefore fast) and some memory is Remote (therefore slow). Overall I am guessing this could cause a real performance issue on this DB.
    -points for detailed reply!
    thanks a lot -

    Hello
    Thanks for your reply. I am very interested to hear further about the limitations of Windows 2008 and the benefits of Oracle Linux.
    Generally we find that Windows 2008 has been pretty good, a big improvement over Windows 2003 (bluescreens don't occur ever etc)
    Can you advise further about Large Page Size for the buffer cache? I assume this applies on both Windows and Linux (I am guessing there is a similar parameter for 10gR2 on Linux).
    SAP have not yet fully supported Oracle 11g so this is why 11g has not made it into the SAP world yet.
    Can you also please advise about NUMA? regardless of whether we run Linux or Windows this setting needs to be considered.
    Thanks

  • Web Services with Large Result Sets

    Hi,
    We have an application where a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, i.e., by criteria narrowing or some other means.
    Have any of you handled paging when using Web Services? If so, can you please share your experiences, considering Web Services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again, and I removed the ResultSet.FETCH_FORWARD and it
    still gave me the same OutOfMemory error.
    The problem to me is similar to what Slava has described. I am parsing
    the result set in memory, storing the results in a hash map, and then
    emptying the post-processed results into a table.
    The hash map turns out to be very big, and the JVM throws an OutOfMemory
    exception.
    I am not sure how I can turn this around.
    I can partition my query so that it returns smaller chunks or "blocks"
    of data each time (say a page of data or two pages of data). Then I can
    store a page of data in the table. The problem with this approach is
    that it is not exactly transactional. Recovery would be very difficult
    in this approach.
    I could do this in a try/catch block page by page, and then the catch
    could go ahead and delete the rows that got committed. The question then
    becomes: what if that transaction fails?
    It sounds like you're committing the 'cardinal performance sin of DBMS processing',
    of shovelling lots of raw data out of the DBMS, processing it in some small way,
    and sending it (or some of it) back. You should instead do this processing in
    a stored procedure or procedures, so the data is manipulated where it is. The
    DBMS was written from the ground up to be a fast, efficient set-based processor.
    Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post-
    processing depends on Unicode and specific character sets. Java seemed ideally suited
    to this since it handles these Unicode characters very well and has all these libraries
    we can use. Moving this to the DBMS would make it proprietary (not that we
    won't do it if it became absolutely essential), but that's one of the reasons why the post-
    processing happens in Java. Now that you mention it, stored procedures seem the best
    option.

  • Full text index searching in large document sets

    I have been placed in charge of a digital PDF document library for a small biotech company. The library consists of about 1000 100-300 page .pdf documents which have been scanned and OCRed. In order to facilitate the full text searching of the documents a PDX catalog has been created. In theory, the PDX catalog would seem to be an excellent means of quickly accessing the data, but due the sheer volume of text that is contained in the documents this does not seem to be the case.
    Any given search may take hours to complete and many computers in the department have been known to lock up due to the load of running a search. Obviously, this has made using the PDX search more of a hassle than it is worth.
    I do not know exactly how the index searches work, but from what I gather they somehow search within each document in turn and return all the instances in all the documents that contain a certain term. If this is the case, then it would make sense that the searches take a long time, because the search has to go through each of the 1000 documents in sequence.
    The thing is: we really do not need to know the context and placement of every instance that a word appears in a document. All we need to know is IF it appears, and perhaps how many times. Is there a way to make an index that will simply give us this information without having to search the actual document?
    Here's an example of what I am trying to achieve:
    Note: I know almost nothing about full text indexes so please forgive me if any of this sounds insane
    Let's say we have a document called "word count.pdf" which contains the following text:
    "blah blah yadda yadda text Recombinant human insulin more text still texting and so on"
    And another called "word count 2.pdf" with the following text
    "Recombinant human insulin and la la la dee do"
    The indexes for these files could be condensed and stored like this:
    "Word count.pdf"
    Blah 2
    yadda 2
    recombinant 1
    human 1
    insulin 1
    text 2
    texting 1
    and 1
    so 1
    on 1
    "Word count 2.pdf"
    recombinant 1
    human 1
    insulin 1
    and 1
    la 3
    dee 1
    do 1
    In this example, if we were to run a search on "text", the index would return "word count.pdf, 3 instances (2 of text and 1 of texting)", whereas if we were to search for "recombinant" it would return both "word count.pdf, 1 instance" and "word count 2.pdf, 1 instance".
    This way, I could quickly weed out all documents that do not have the word that I am looking for and get an idea about which documents should be searched more in depth without scanning every single instance of the term in every document.
    Is there any way to accomplish something similar to this using acrobat? (Or anything else, for that matter)
    My specifications: (similar to specs of all computers searching the pdx):
    Windows XP,
    Intel Celeron CPU 2.6GHz, 1 GB of RAM
    Adobe Acrobat 8 Professional

    Look at dtSearch. We used the Publisher version for a CD with large file sets (with hundreds of pages per file/thousands of PDF pages of multi-column index data - text heavy), and it does a great job. The Desktop version would provide the type of searching you are looking for. Indexing is also very fast. Our customer complained, like yourself, about the speed of searches in Acrobat 6 and higher - most of the delay is due to the population of the results window.
    http://www.dtsearch.com/

  • RAC  windows 2003 64bit xeon  - Large Pages

    Hi all
    i have 2 node (windows 2003 64bit dual core xeon, 8GB RAM)
    Oracle recommends using large pages on 64-bit instead of LOCK_SGA, but when I use large pages and set sga_target=5GB, after a few minutes I see an alert in EM about virtual memory paging (I don't know how to put this in English - maybe swapping).
    How can I avoid this?
    How can I check that Oracle is using large pages?
    Maybe some interesting links?
    Thanks for any advice.

    You don't run a scalability option on a basically unscalable O/S like Winblows, do you?
    Sybrand Bakker
    Senior Oracle DBA

  • How to use large pages in AIX with oracle

    Hi,
    I'm trying to convince Oracle to use large pages on AIX 5.3 but haven't
    succeeded so far.
    I set v_pinshm=1, maxpin%=80, lgpg_regions=448 and lgpg_size=16777216
    using 'vmo', and LOCK_SGA=true in the spfile. After rebooting and starting
    the instance, 'svmon' shows no large pages in use:
    # svmon
                     size      inuse       free        pin    virtual
    memory        4046848    3711708     335140    2911845    1503108
    pg space       524288       5551
                     work       pers       clnt      lpage
    pin           1076604          0        233    1835008
    in use        1503010          0     373690          0
    pgsize                   size    free
    lpage pool 16 MB          448     448
    SGA size is 3G. Why doesn't Oracle use large pages? I have already
    created a TAR, but maybe an Oracle-on-AIX expert can help me faster than
    Oracle Support :)
    regards,
    -ap

    Hi Andreas,
    do you have a solution for your issue? I think I have a similar problem.
    I set on AIX:
    vmo -r -o lgpg_regions=192 -o lgpg_size=16777216
    vmo -o v_pinshm=1
    In svmon -G I see this output:
    PageSize   PoolSize      inuse       pgsp        pin    virtual
    s   4 KB          -    1104431       2456     685141     874874
    L  16 MB        192          0          0        192          0
    These 16 MB large pages are free, but when I run BEA WebLogic Server with the -Xlp parameter, the application still runs with standard 4 KB memory pages.
    Thank you very much for any further hints.
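    One thing that seems to be missing from both configurations above, and which AIX requires before a non-root user may back shared memory with pinned 16 MB large pages, is granting that user extra capabilities. A minimal sketch as root, assuming the instance (or the WebLogic JVM) runs as user oracle; the pool sizes are taken from the posts above:
    # Reserve the 16 MB large page pool and allow pinned shared memory:
    vmo -r -o lgpg_regions=448 -o lgpg_size=16777216
    vmo -o v_pinshm=1
    # Grant the capabilities needed for pinned large pages, then verify:
    chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
    lsuser -a capabilities oracle
    # For Oracle, keep LOCK_SGA=TRUE in the spfile; after a restart, the 'inuse'
    # column of the 16 MB pool in 'svmon -G' should no longer be zero.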

  • Error: Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page

    Hi,
    I recently came across this error message (coming up in the webadmin Log view):
    ProcessMonitor: Java HotSpot(TM) 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory
    when running Oracle NoSQL DB on a Windows 7 64-bit Home Premium system with 8 GB of physical RAM.
    I created the store configuration without explicitly passing a value for the parameter memory_mb (it was set to -memory_mb 0), so that the replication group would take as much
    memory as possible, which is reflected in the following line from the store log:
    Creating rg1-rn1 on sn1 haPort=tangel-lapp:5011 helpers=tangel-lapp:5011 mountpoint=null heapMB=7007 cacheSize=5143454023 -XX:ParallelGCThreads=4
    I was a bit surprised, because I had definitely succeeded in running kvstore in this configuration before, with the store using 7007 MB of memory.
    Now it was only possible to run kvstore when creating the store configuration with memory configured to be less than 2 GB.
    After searching Google for the error message above, I came across some hints regarding activation of Huge Pages on Windows 7, which mentioned that it can only be
    done on systems having at least Windows 7 Professional Edition.
    But finally I found a more helpful hint, referring to the size of the Windows pagefile. As my machine uses an SSD as its system disk, and there are notes around on deactivating the pagefile when
    using an SSD, I had done so some time ago.
    So I activated the pagefile on Windows again, and after doing so the store came up without any problems.
    Maybe it's nothing new to some of you, but as there was nothing about this in either the admin guide or the getting started guide, I just wanted to share this
    piece of information with you.
    Regards
    Thomas
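    In case it helps anyone hitting the same message: the privilege the JVM complains about is the Windows "Lock pages in memory" right, and you can reproduce the check outside of NoSQL DB with the stock HotSpot flag shown below (the heap sizes are just placeholders):
    # Ask HotSpot for large pages explicitly; on Windows the user running the
    # JVM needs the "Lock pages in memory" privilege for this to succeed.
    java -XX:+UseLargePages -Xms1g -Xmx1g -version
    # Without the privilege, HotSpot prints the same warning and falls back
    # to regular 4 KB pages instead of failing.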

    Charles,
    thanks for the clarification - obviously I overlooked that in the chapter "Installation prerequisites" of the
    Admin Guide.
    By the way, are there any specific OS-related settings, let's say "best practices", similar to those
    referred to in the documentation of "classic" Oracle databases?
    Regards
    Thomas

  • Using hugepages (Large Pages)

    I am trying to leverage this feature in TimesTen v7.0.5 running SUSE Linux v 2.6.5-7.244-smp (gcc version 3.3.3) but with no success.
    I enabled hugepages in Linux:
    vm.nr_hugepages=1024
    vm.disable_cap_mlock=1
    cat /proc/meminfo shows:
    HugePages_Total: 1024
    HugePages_Free: 1024
    Hugepagesize: 2048 kB
    I followed the instructions and set -linuxLargePageAlignment in the daemon options file (ttendaemon.options):
    -linuxLargePageAlignment 2
    However, when I start TimesTen - first the daemon, and then manually loading the database into RAM - I do not see it use any hugepages. cat /proc/meminfo still shows HugePages_Free: 1024.
    I also set the /etc/security/limits.conf file properly. One difference from the documentation is that I did not set /proc/sys/vm/hugetlb_shm_group, since I have an older Linux kernel that does not support this parameter. Instead, I set vm.disable_cap_mlock = 1.
    I appreciate any help you can offer.
    Thanks,
    Wing

    Hi Brian,
    Use of large pages does not impact other parameters within TimesTen; you don't need to change anything with regard to things like hash index pages, since these are TimesTen 'internal' pages, not O/S memory pages. I also do not think use of large pages impacts other kernel parameters such as shmall (I don't recall ever having to change those when using large pages).
    In order for TimesTen to use large pages, several things are required:
    1. The kernel must be configured to enable a suitable number of large pages (vm/nr_hugepages). The way you are doing this is fine, but obviously for 'production' use you would configure this in /etc/sysctl.conf (or equivalent).
    2. The TT daemon must be told to use large pages via the -linuxLargePageAlignment option (you are doing that too).
    3. The number of large pages configured must be large enough to contain the entire TT datastore segment (PermSize + TempSize + LogBuffSize + ~20 MB). You can see the exact size of the segment via 'ipcs -m'.
    4. The kernel parameter vm/hugetlb_shm_group must be set to the O/S group id of a group that the user running the TT main daemon is a member of. For example, I have a group called timesten (gid = 1000), the TT instance administrator user is a member of that group, and I have vm.hugetlb_shm_group = 1000.
    With all this in place my datastores use large pages with no problem (as confirmed by cat /proc/meminfo after the datastore is loaded). This is all you need to do, nothing more.
    Chris
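    Putting Chris's four points together as a minimal sketch (the gid, page count, and alignment value are examples; size the pool to your own PermSize + TempSize + LogBuffSize + ~20 MB):
    # /etc/sysctl.conf
    vm.nr_hugepages = 1024           # enough 2 MB pages for the whole segment
    vm.hugetlb_shm_group = 1000      # gid of the instance administrator's group
    # ttendaemon.options
    -linuxLargePageAlignment 2
    # After 'sysctl -p', a daemon restart, and loading the datastore:
    ipcs -m                          # shows the exact datastore segment size
    grep Huge /proc/meminfo          # HugePages_Free should drop accordingly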

  • Some large pages are displayed without scroll bars

    I've had a number of cases recently when large pages are displayed without scroll bars. Safari handles these pages properly.
    == URL of affected sites ==
    http://www.sears.com/shc/s/c_10153_12605_Tools_Tool Sets?adCell=WF

    Hello Charles.
    You may be having a problem with some Firefox add-on that is hindering your Firefox's normal behavior. Have you tried disabling all add-ons (just to check), to see if Firefox goes back to normal?
    Whenever you have a problem with Firefox, whatever it is, you should make sure it's not coming from one of your installed add-ons, be it an extension, a theme or a plugin. To do that easily and cleanly, run Firefox in [http://support.mozilla.com/en-US/kb/Safe+Mode safe mode] and select ''Disable all add-ons''. If the problem disappears, you know it's from an add-on. Disable them all in normal mode, and enable them one at a time until you find the source of the problem. See [http://support.mozilla.com/en-US/kb/Troubleshooting+extensions+and+themes this article] for information about troubleshooting extensions and theme. You can troubleshoot plugins the same way.
    If you want support for one of your add-ons, you'll need to contact its author.
