Total Shared Global Region in Large Pages = 0 KB (0%)
Hi,
I am working on Oracle Database 11.2.0.3, application 12.1.3.
I see this message in the alert log file:
****************** Large Pages Information *****************
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total Shared Global Region size is 12 GB. For optimal performance,
prior to the next instance restart increase the number
of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
system wide to get 100% of the Shared
Global Region allocated with Large pages
What should I do?
Thanks
You definitely are not using hugepages. That's what the message you mentioned above is telling you:
Total Shared Global Region in Large Pages = 0 KB (0%)
It very clearly tells you that 0 KB, or 0%, of the SGA is in large pages.
Note that the terms "large pages" and "hugepages" are synonymous. In Linux, they're called hugepages.
Also, at the O/S level, you can do:
cat /proc/meminfo
to see how many hugepages are allocated/free/reserved.
Hope that helps,
-Mark
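To expand on Mark's point, the hugepage counters all live in /proc/meminfo on Linux, so the O/S-side check is one line:

```shell
# All-zero HugePages_* counters mean no hugepages are configured system-wide
grep ^Huge /proc/meminfo
```

HugePages_Total/Free/Rsvd are the allocated/free/reserved counts mentioned above, and Hugepagesize should match the 2048 KB "Large Page size" reported in the alert log.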
Similar Messages
-
Hello all,
We recently created a new 11.2.0.3 database on Red Hat Linux 5.7. It's running in ASMM
Database settings
sga_target=10G
sga_max_size=12G
memory_target=0
memory_max_target=0
pga_aggregate_target=12G
Host Specs
Total RAM = 128GB
Total CPUs = 4 @ 2.27GHz
Cores/CPU = 8
During instance startup, we get the following message.
****************** Large Pages Information *****************
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB) (alloc incr 32 MB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total Shared Global Region size is 12 GB. For optimal performance,
prior to the next instance restart increase the number
of unused Large Pages by atleast 6145 2048 KB Large Pages (12 GB)
system wide to get 100% of the Shared
Global Region allocated with Large pages
Has anyone seen this recommendation message during startup and acted upon it? If yes, what kind of modification was performed?
Thanks for your time.
In 11.2.0.2 a new parameter, USE_LARGE_PAGES, was added. Now, whenever the database instance is started, it checks the hugepages configuration and produces this warning if Oracle would allocate part of the SGA with hugepages and the rest with normal 4 KB pages.
USE_LARGE_PAGES parameter has three possible values "true" (default), "only", and "false".
The default value of "true" preserves the current behavior of trying to use hugepages if they are available on the OS. If there are not enough hugepages, small pages will be used for SGA memory.
This may lead to ORA-4031 errors due to the remaining hugepages going unused and more memory being used by the kernel for page tables.
Setting it to "false" means do not use hugepages at all.
A setting of "only" means do not start up the instance if hugepages cannot be used for the whole memory (to avoid an out-of-memory situation).
There is not much written about this yet, but I was able to find some docs on Metalink and from blogs. Hope this helps.
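For what it's worth, the alert log's arithmetic is easy to reproduce. A minimal sketch for the case above (12 GB SGA, 2048 KB pages, plus one extra page to cover the allocation increment):

```shell
# Number of 2 MB hugepages needed so the whole 12 GB SGA fits in large pages
sga_kb=$(( 12 * 1024 * 1024 ))
page_kb=2048
pages=$(( sga_kb / page_kb + 1 ))
echo "$pages"   # 6145, matching the alert-log recommendation
# To apply (as root, before the next instance restart):
#   sysctl -w vm.nr_hugepages=$pages    (and persist it in /etc/sysctl.conf)
```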
Large Pages Information in the Alert Log [ID 1392543.1]
USE_LARGE_PAGES To Enable HugePages In 11.2 [ID 1392497.1]
NOTE:361323.1 - HugePages on Linux: What It Is... and What It Is Not...
Bug 9195408 - DB STARTUP DOES NOT CHECK WHETHER HUGEPAGES ARE ALLOCATED- PROVIDE USE_HUGEPAGES
http://agorbyk.wordpress.com/2012/02/19/oracle-11-2-0-3-and-hugepages-allocation/
http://kevinclosson.wordpress.com/category/use_large_pages/
http://kevinclosson.wordpress.com/category/oracle-automatic-memory-management/ -
Large number of Hide/Show regions on a page, can't hide on page load
I have a large number of Hide/Show regions on a page, 14 right now to be exact. I want all of these regions to start hidden when the page loads. In order to do this I have the following code in the onload page html body attribute:
onLoad="document.getElementById('region3').style.display = 'none';
document.getElementById('shIMG3').src = '/i/htmldb/builder/rollup_plus_dgray.gif';
document.getElementById('region19').style.display = 'none';
document.getElementById('shIMG19').src = '/i/htmldb/builder/rollup_plus_dgray.gif';"
This works fine when I have 13 or fewer hide/show regions on the page. When I add the 14th region to the page, all the regions on the page start off as not hidden.
Anyone have any idea why this could be happening? (I'm using Apex version 2.0)
Thanks
- Brian
No ideas?
-
How can I decouple the pagination for a global region?
I was having a delightful time using a common region defined on a global page when I discovered that the pagination setting was being carried from one page to the next. Arghhhh! So for example, if I have paged to the second set (page) of rows (11-20) on Page 1 and then I go to Page 2, the second set (page) of rows (11-20) is displayed there. If I go to a page which only has a first set of rows (1-10), I get the pagination error "Invalid set of rows requested, the source data of the report has been modified. Reset Pagination". And when I click to reset, it just repeats the error. [I suppose it tries to display the second set of rows (11-20) again -- which doesn't exist. What's that saying about insanity?]
How can I decouple the pagination for a global region? I want it to operate just as it would if it were not sharing a common region. So if I'm looking at rows 11-20 on Page 1, I can go to any other page, beginning with rows 1-10 there, then return to Page 1 where I left off, with rows 11-20 displayed. One solution is NOT to paginate, but that's not my preferred solution.
Howard
Howard(...inTraining) wrote:
I was having a delightful time using a common region defined on a global page when I discovered that the pagination setting was being carried from one page to the next. Arghhhh! So for example, if I have paged to the second set (page) of rows (11-20) on Page 1 and then I go to Page 2, the second set (page) of rows (11-20) is displayed there. If I go to a page which only has a first set of rows (1-10), I get the pagination error "Invalid set of rows requested, the source data of the report has been modified. Reset Pagination".
The fact that there are different numbers of rows returned on different pages implies that the reports have some local page dependencies, so why try to use a global component? What's the actual requirement? How many pages does the report have to appear on? (Please say it is a report and not a tabular form...)
How can I decouple the pagination for a global region? I want it to operate just as it would if it were not sharing a common region.
The point is that a global region is just that: a single region that happens to be displayed on multiple pages. It does not create multiple instances of a region on different pages. (Specifically, a region has a single region ID, and this is used to reference it whether it appears on one page or all of them. The region ID is used by the report for the purposes of AJAX refresh, pagination etc.)
A similar situation was discussed a long time ago. I'm rather surprised that Scott regarded it as a bug: the fact that it doesn't seem to have been "fixed" or have a bug number attached may indicate that the others on the APEX team disagreed with him? I haven't tried the workaround he suggested; however, I don't think it's likely to prove a useful line of attack for your issue, as (1) it resets pagination rather than preserving it; and (2) it doesn't appear to be compatible with the AJAX PPR pagination used in more recent versions of APEX.
I can't see any straightforward "solution" (largely because I don't think there's really a problem: the exhibited behaviour is exactly how I expect/want global regions to behave). Pagination processing is undocumented. The current 4.2 apex.widget.report.paginate JS method is specifically annotated as "for internal use only". Search the forum for custom pagination techniques. Messy looking hacks for IRs have previously been suggested.
So if I'm looking at rows (11-20) on page 1, I can go to any other page beginning with rows 1-10 there. Then return to page 1 where I left off with rows 11-20 displayed. One solution is NOT to paginate but that's not my preferred solution.
Assuming that there aren't too many pages involved, the other obvious option is to create unique regions on the required pages. You can achieve some level of reusability by creating SQL Query (PL/SQL function body returning SQL query) reports based on an external function so that there's only a single SQL source to be maintained.
Explain the requirement in more detail. Pagination is not the only option for reducing the quantity of displayed information. Often it's better to display some of all of the data, rather than all of some of it... -
I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
# uname -a
SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
and
"Taming Your Emu to Improve Application Performance (February 2004)"
http://www.sun.com/blueprints/0204/817-5489.pdf
The machine claims it supports 4M page sizes:
# pagesize -a
8192
65536
524288
4194304
I've written a very simple program:
main()
{
    int sz = 10*1024*1024;
    int *x = (int *)malloc(sz);
    print_info((void **)&x, 1);
    while (1) {
        int i = 0;
        while (i < (sz/sizeof(int))) {
            x[i++]++;
        }
    }
}
I run it specifying a 4M heap size:
# ppgsz -o heap=4M ./malloc_and_sleep
address 0x21260 is backed by physical page 0x300f5260 of size 8192
pmap also shows it has an 8K page:
pmap -sx `pgrep malloc` | more
10394: ./malloc_and_sleep
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
00010000 8 8 - - 8K r-x-- malloc_and_sleep
00020000 8 8 8 - 8K rwx-- malloc_and_sleep
00022000 3960 3960 3960 - 8K rwx-- [ heap ]
00400000 6288 6288 6288 - 8K rwx-- [ heap ]
(The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
I'm running this as root.
In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
Here's the output from sysinfo:
General Information
Host Name is machinename.lucent.com
Host Aliases is loghost
Host Address(es) is xxxxxxxx
Host ID is xxxxxxxxx
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
Manufacturer is Sun (Sun Microsystems)
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
System Model is Blade 1000
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
ROM Version is OBP 4.10.11 2003/09/25 11:53
Number of CPUs is 2
CPU Type is sparc
App Architecture is sparc
Kernel Architecture is sun4u
OS Name is SunOS
OS Version is 5.9
Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
Kernel Information
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
SysConf Information
Max combined size of argv[] and envp[] is 1048320
Max processes allowed to any UID is 29995
Clock ticks per second is 100
Max simultaneous groups per user is 16
Max open files per process is 256
System memory page size is 8192
Job control supported is TRUE
Savid ids (seteuid()) supported is TRUE
Version of POSIX.1 standard supported is 199506
Version of the X/Open standard supported is 3
Max log name is 8
Max password length is 8
Number of processors (CPUs) configured is 2
Number of processors (CPUs) online is 2
Total number of pages of physical memory is 262144
Number of pages of physical memory not currently in use is 4368
Max number of I/O operations in single list I/O call is 4096
Max amount a process can decrease its async I/O priority level is 0
Max number of timer expiration overruns is 2147483647
Max number of open message queue descriptors per process is 32
Max number of message priorities supported is 32
Max number of realtime signals is 8
Max number of semaphores per process is 2147483647
Max value a semaphore may have is 2147483647
Max number of queued signals per process is 32
Max number of timers per process is 32
Supports asyncronous I/O is TRUE
Supports File Synchronization is TRUE
Supports memory mapped files is TRUE
Supports process memory locking is TRUE
Supports range memory locking is TRUE
Supports memory protection is TRUE
Supports message passing is TRUE
Supports process scheduling is TRUE
Supports realtime signals is TRUE
Supports semaphores is TRUE
Supports shared memory objects is TRUE
Supports syncronized I/O is TRUE
Supports timers is TRUE
/opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
Device Information
SUNW,Sun-Blade-1000
cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
Does anyone have any idea as to what the problem might be?
Thanks in advance.
Mike
I ran your program on Solaris 10 (yet to be released) and it works.
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
00010000 8 8 - - 8K r-x-- mm
00020000 8 8 8 - 8K rwx-- mm
00022000 3960 3960 3960 - 8K rwx-- [ heap ]
00400000 8192 8192 8192 - 4M rwx-- [ heap ]
I think you don't have this patch for Solaris 9:
i386 114433-03
sparc 113471-04
Let me know if you encounter problem even after installing this patch.
Saurabh Mishra -
Hi
I have RHEL 6.4 with 128GB RAM
I have big database
The database is the only service on this box.
what is the optimal number of large pages I can have in the box ?
(to not disturb the OS )
Thanks for your help.
> I have RHEL 6.4 with 128GB RAM
> I have big database
> The database is the only service on this box.
> what is the optimal number of large pages I can have in the box ?
Depends what you want.
Based on the information you have supplied, I would recommend a HugePages value between 0 GB and 127 GB.
How large is your application?
What kind of application is it?
What is the locality in its data access? Sequential? Random?
How long should any cached data be kept in memory just in case it is needed again?
How much data does the application need to read and write?
The DB install guide has some recommendations on calculating the size of the shared global area (SGA); use them.
Do not enable Automatic Memory Management (AMM): AMM and HugePages are incompatible, so an AMM-managed SGA cannot use them.
Does your DB use many table joins? Few joins but lots and lots of data rows?
Remember your SGA must fit entirely within the HugePages area else you will get massively-degraded performance.
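As a rough sketch of the sizing arithmetic (the 100 GB SGA below is an assumed figure for illustration, not a recommendation; size the SGA per the install guide first):

```shell
sga_gb=100                                    # assumed SGA size on this 128 GB box
page_kb=2048                                  # hugepage size on x86-64 RHEL
pages=$(( sga_gb * 1024 * 1024 / page_kb ))
echo "vm.nr_hugepages = $pages"               # vm.nr_hugepages = 51200
# The memlock limit (in KB) in /etc/security/limits.conf must also cover the SGA:
echo "oracle soft memlock $(( sga_gb * 1024 * 1024 ))"
```

Whatever remains after hugepages (here about 28 GB) is left for the OS, PGA, and process memory, which answers the "not disturb the OS" concern.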
Swapping performance, and large pages
I am trying to run glpsol (the standalone solver from the GNU Linear Programming Kit) on a very large model. I don't have enough physical memory to fit the entire model, so I configured a lot of swap. glpsol, unfortunately, uses more memory to parse and preprocess the model than it does to actually run the core solver, so my approximately 2-3GB model requires 11GB of memory to get started. (However, much of this access is sequential.)
What I am encountering is that my new machine, running Solaris 10 (11/06) on a dual-core Athlon (64-bit, naturally) with 2GB of memory, is starting up much, much more slowly than my old desktop machine, running Linux (2.6.3) on a single-core Athlon 64 with 1GB of memory. Both machines are using identical SATA drives for swap, though with different motherboard controllers. The Linux machine gets started in about three hours, while Solaris takes 9 hours or more.
So, here's what I've found out so far, and tried.
On Solaris, swapping takes place 1 page (4KB) at a time. You can see from this example iostat output that I'm getting about 6-7ms latency from the disk but that each of the reads is just 4KB. (629KB/s / 157 read/s = 4KB/read )
device r/s w/s kr/s kw/s wait actv svc_t %w %b
cmdk0 157.2 14.0 628.8 784.0 0.1 1.0 6.6 2 99
cmdk1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
Linux has a feature called page clustering which swaps in multiple 4KB pages at once--- currently set to 8 pages (32KB).
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
hda 1270.06 2.99 184.23 6.39 11635.93 76.65 61.45 1.50 7.74 5.21 99.28
hdc 0.00 0.00 0.40 0.20 4.79 1.60 10.67 0.00 0.00 0.00 0.00
md0 0.00 0.00 1.00 0.00 11.18 0.00 11.20 0.00 0.00 0.00 0.00
hdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
(11636 sectors/sec = 5818KB/sec. Divided by 184 reads/sec gives just under 32KB.)
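The per-read sizes quoted fall straight out of the iostat samples (Linux sectors are 512 bytes, so sectors/s divided by 2 gives KB/s):

```shell
# Solaris: kr/s divided by r/s
awk 'BEGIN { printf "Solaris: %.1f KB/read\n", 628.8 / 157.2 }'
# Linux: rsec/s converted to KB/s, divided by r/s
awk 'BEGIN { printf "Linux:   %.1f KB/read\n", (11635.93 / 2) / 184.23 }'
```

This prints 4.0 KB/read for Solaris and 31.6 KB/read for Linux, i.e. the one-page vs. eight-page clustering difference described above.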
I didn't find anything I could tune in the Solaris kernel that would increase the granularity at which pages are swapped to disk.
I did find that Solaris supports large pages (2MB on x64, verified with "pagesize -a"), so I modified glpsol to use larger chunks (16MB) for its custom allocator and used memalign to allocate these chunks at 2MB boundaries. Then I rebooted the system and ran glpsol with
ppgsz -o heap=2MB glpsol ...
I verified with pmap -s that 2MB pages were being used, but only a very few of them.
8148: glpsol --cpxlp 3cljf-5.cplex --output solution-5 --log log-5
Address Bytes Pgsz Mode Mapped File
0000000000400000 116K - r-x-- /usr/local/bin/glpsol
000000000041D000 4K 4K r-x-- /usr/local/bin/glpsol
000000000041E000 432K - r-x-- /usr/local/bin/glpsol
0000000000499000 4K - rw--- /usr/local/bin/glpsol
0000000000800000 25556K - rw--- [ heap ]
00000000020F5000 944K 4K rw--- [ heap ]
00000000021E1000 4K - rw--- [ heap ]
00000000021E2000 68K 4K rw--- [ heap ]
00000000021F3000 4K - rw--- [ heap ]
00000000087C3000 4K 4K rw--- [ heap ]
00000000087C4000 2288K - rw--- [ heap ]
0000000008A00000 2048K 2M rw--- [ heap ]
0000000008C00000 2876K - rw--- [ heap ]
0000000008ECF000 480K 4K rw--- [ heap ]
0000000008F47000 4K - rw--- [ heap ]
000000003F4E8000 4K 4K rw--- [ heap ]
000000003F4E9000 5152K - rw--- [ heap ]
000000003F9F1000 60K 4K rw--- [ heap ]
000000003FA00000 2048K 2M rw--- [ heap ]
000000003FC00000 6360K - rw--- [ heap ]
0000000040236000 368K 4K rw--- [ heap ]
etc.
There are only 19 large pages listed (a total of 38MB of physical memory.)
I think my next step, if I don't receive any advice, is to try to preallocate the entire region of memory which stores (most of) the model as a single allocation. But I'd appreciate any insight as to how to get better performance, without a complete rewrite of the GLPK library.
1. When using large pages, is the entire 2MB page swapped out at once? Or is the 'large page' only used for mapping in the TLB? The documentation I read on swap/paging and on large pages didn't really explain the interaction. (I wrote a dtrace script which logs which pages get swapped into glpsol but I haven't tried using it to see if any 2MB pages are swapped in yet.)
2. If so, how can I increase the amount of memory that is mapped using large pages? Is there a command I can run that will tell me how many large pages are available? (Could I boot the kernel in a mode which uses 2MB pages only, and no 4KB pages?)
3. Is there anything I should do to increase the performance of swap? Can I give a hint to the kernel that it should assume sequential access? (Would "madvise" help in this case? The disk appears to be 100% active, so I don't think adding more requests for 4KB pages is the answer--- I want to do more efficient disk access by loading bigger chunks of data.)
I suggest posting this to the opensolaris performance discussion group. Also it would be useful to know if the application is a 32-bit or 64-bit binary.
-
How can I have a Pages document that is shared with me under my Pages section and not open it with the link?
Try this.
Open the document and select and copy a few pages, say ten pages.
Open a new blank document and paste the pages you copied into that.
Save it with a new name.
Work on those new pages to see if the problem has disappeared.
If this helps, continue breaking up the large file into smaller chunks and working on them.
You can of course later reverse the process and merge the files into one new one.
This suggestion is based on my experiences with large Word files. Breaking it up into smaller chunks does two things. First, if there is any corruption in the old file, the new copies might escape it. Second, the Mac is faster handling the smaller chunks.
BTF in a region of a page of UI Shell not refreshing
Hi All,
Jdev Version : 11.1.2.0.0
We are using a Bounded Task Flow (BTF) in a ADF Region of a page, which is made using the dynamic template UI Shell with replace-in-place method(Only one tab refreshes all time with new content).
We are refreshing/invoking the new content using:
TabContext tabContext = TabContext.getCurrentInstance();
try {
    // tabContext.setMainContent("/WEB-INF/flows/task-calendar.xml#task-calendar");
    tabContext.setMainContent("/WEB-INF/flows/task-list.xml#task-list", paramMap);
    AdfFacesContext.getCurrentInstance().addPartialTarget(tabContext.getContentArea());
} catch (TabContext.TabContentAreaDirtyException toe) {
    // TODO: warn user; TabContext API needed for this use case.
}
In the BTF, on the first screen, if we click one command link we navigate to another region. If I click the side navigation bar in the page template, I am able to navigate there. But when I follow the same process and navigate to that region a second time, clicking the command link on the page template no longer lets me navigate to the first page (the default activity) of the task flow. In other words, the region is not refreshing.
We are using AdfFacesContext.getCurrentInstance().addPartialTarget(tabContext.getContentArea()); to refresh.
Please let me know how I can resolve the issue.
Duplicate: Global link in BTF with region
-
Can I dynamically apply a page template or embed a region in multiple pages
I have some pages that I would like to be accessible in two ways:
1. as a popup (using the Popup template) in Edit mode
2. as a regular page (using the Application's default template) in New mode
I don't want to make a copy of each page as that is a maintenance headache that shouldn't be necessary. Is there a way to either:
1. Dynamically select the template when the page loads OR
2. Create a region that can be shared by two or more pages (like an asp "Include"). I looked for this option in my Shared Components and could not find it.
As always, creative solutions much appreciated!
Sydney
Hello,
I believe page 0 is what you are looking for. Please read more about it in here -
http://download-uk.oracle.com/docs/cd/B31036_01/doc/appdev.22/b28550/ui.htm#sthref1159
Regards,
Arie. -
Using large pages on Solaris 10
Hello,
I've some problems using large pages (16k, 512k, 4M) on two Primepower 650 systems. I've installed the most current kernel patch, 127111-05.
The pagesize -a command reports 4 page sizes (8k, 16k, 512k, 4M). Even if I try the old manual method, using LD_PRELOAD=mpss.so.1 and an mpss.conf file to force large pages, pmap -sx <pid> shows only 8k for stack, heap and anon memory. Only for shared memory are 4M DISM segments used. I didn't receive any error message. Two other Primepower systems with the same kernel release work as expected.
What can I do for further troubleshooting? I've tried different kernel settings, all without effect.
Best regards
JCJ
This problem is now (partially) solved by the Fujitsu-Siemens edition of kernel patch 127111-08. The behaviour is now like Solaris 9: large pages must be forced with LD_PRELOAD=mpss.so.1 and still don't work out of the box for this kind of CPU (SPARC64 V only). All available page sizes (8k, 64k, 512k and 4M) can now be used by configuring /etc/mpss.conf. Unfortunately, out-of-the-box large pages are not working with this kind of CPU and the current kernel patch. This is not so nice, because on large systems with a lot of memory and a lot of large processes there may still be a lot of TLB misses. So I will wait and test further as soon as new kernel patches are available.
JCJ -
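For reference, the preload method JCJ describes looks roughly like the fragment below. The binary name and path are examples, and the exact /etc/mpss.conf syntax should be checked against the mpss.so.1(1) man page:

```shell
# /etc/mpss.conf fragment (exec-path : heap-size : stack-size):
#   /usr/local/bin/myapp:4M:64K
# Per-process alternative using the interposer's environment variables:
#   LD_PRELOAD=mpss.so.1 MPSSHEAP=4M MPSSSTACK=64K ./myapp
# Afterwards, confirm the page sizes actually granted with:
#   pmap -sx `pgrep myapp`
```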
Shared global data in multi-plugin file
Ok, I see that in the SelectoramaShape example that it is possible to put multiple plug-ins in a single file...cool. My question: is global data visible/shareable between the plug-ins?
My situation is this: I have a visible filter plug-in ("Trigger"), a hidden persistent automation plug-in ("Master"), and a hidden filter plug-in ("Slave").
The idea is for Trigger to do the 'public' stuff (read/write scripting parameters, put up and manage the UI etc). Master listens for it to finish, does some things filters can't do (e.g. create an adjustment layer), and then commands Slave to do the heavy lifting (e.g. read from a specified pixel layer and write out to the mask of the newly-created adjustment layer). So far I have the three plug-ins doing pretty much nothing, all in the same .8bf file. What seems to work is that Master indeed sees Trigger run and successfully executes Slave. And all three seem to be able to read/write shared global data structures without causing BSODs or global warming.
My concern is that a plug-in is a DLL, the key word in that acronym being "dynamic": I'm worried that things will move around or whatever, and/or that I'm doing something blatantly illegal by not allocating/deallocating/registering/whatever and using multiple levels of abstraction to read/write a fistful of integer values. Is the fact that it seems to be working an artifact of having 16GB of RAM, so nothing ever moves? Does the "persistent" setting for Master keep it locked down? Or am I just totally overthinking this whole thing?
(This can't possibly be an original question, but the search function on this forum doesn't play well with me. Feel free to just post a link to a FAQ or something.)
The C library is shareable. But you don't want it to be shared. That's your question summarized, isn't it?
You probably can't prevent it from being shared, so to prevent multiple use of it you would have to queue up the requests to be done one at a time. WynEaston's suggestion of having the servlet implement SingleThreadModel would help, but I believe the servlet spec allows servers to run multiple copies of a servlet that does that (as opposed to running a single copy in multiple threads).
Your other alternative is to rewrite the math in Java, or at least in some object-oriented language where you don't need global variables (which are the source of your problem). All right, I can already hear you saying "But that wouldn't be as fast!" Maybe not, but that isn't everything. Now you have a problem in queueing theory: do you want a single server that's fast, but jobs have to wait for it, or do you want multiple servers that aren't as fast, but jobs don't have to wait? That's a question you would have to evaluate based on the usage of your site, and it isn't an easy one. -
While I was performing some benchmarks on my W520, I became aware that there is a feature in Windows 7 called Large Pages. Setting this policy for either a single user or a group greatly reduces the TLB overhead when translating memory addresses for applications. The normal page size is 4KB; Large Pages sets the page size to 2MB. The smaller size was useful when there was only a relatively small physical memory space available in the system (Windows 95, etc.). However, as the addressable physical memory becomes larger, the overhead for translating addresses across page boundaries starts to be significant. Linux has an equivalent feature.
Here's a screenshot of where the setting (Lock pages in memory) is located:
<----------------
The memory bandwidth benchmark using SiSoftware Sandra 2012 showed a performance increase of 2.04% for normal operations and 2.9% for floating point operations. This was with only one user enabled. Enabling all users in the system brought an additional 0.5% performance increase. PCMARK7 also showed a corresponding increase in benchmark numbers.
Thanks to Huberth for pointing me into the SiSoftware Sandra 2012 benchmarking software and the memory bandwidth warning.
This is an extract from a memory bandwidth benchmark run:
Integer Memory Bandwidth
Assignment : 16.91GB/s
Scaling : 17GB/s
Addition : 16.75GB/s
Triad : 16.72GB/s
Data Item Size : 16bytes
Buffering Used : Yes
Offset Displacement : Yes
Bandwidth Efficiency : 80.36%
Float Memory Bandwidth
Assignment : 16.91GB/s
Scaling : 17GB/s
Addition : 16.73GB/s
Triad : 16.74GB/s
Data Item Size : 16bytes
Buffering Used : Yes
Offset Displacement : Yes
Bandwidth Efficiency : 80.34%
Benchmark Status
Result ID : Intel Core (Sandy Bridge) Mobile DRAM Controller (Integrated Graphics); 2x 16GB Crucial CT102464BF1339M16 DDR3 SO-DIMM (1.33GHz 128-bit) PC3-10700 (9-9-9-24 4-33-10-5)
Computer : Lenovo 4270CTO ThinkPad W520
Platform Compliance : x64
Total Memory : 31.89GB
Memory Used by Test : 16GB
No. Threads : 4
Processor Affinity : U0-C0T0 U2-C1T0 U4-C2T0 U6-C3T0
System Timer : 2.24MHz
Page Size : 2MB
W520, i7-2820QM, BIOS 1.42, 1920x1080 FHD, 32 GB RAM, 2000M NVIDIA GPU, Samsung 850 Pro 1TB SSD, Crucial M550 mSata 512GB, WD 2TB USB 3.0, eSata Plextor PX-LB950UE BluRay
W520, i7-2760QM, BIOS 1.42, 1920x1080 FHD, 32 GB RAM, 1000M NVIDIA GPU, Crucial M500 480GB mSata SSD, Hitachi 500GB HDD, WD 2TB USB 3.0
What kind of software do you use for the conversion to PDF? Adobe Reader can't create PDF files.
-
2 Workflow Notifications regions in Home page?
Hello,
I'm working in a 11.5.10.2 environment.
Has anyone tried to "put" 2 Workflow Worklists in the Home Page?
My customer wants to see both and work with them independently. In the same page !!!!!!!!!!!!
Is this possible? Are there implications in the AM's ?? Doesn't OAF engine get confused with 2 AM's refered at "the same page" at the same time?
Thanks for your help.
Juanje
If you are planning to add the same worklist region to your page twice, that is not supported. OAF does not support adding the same shared region twice on a page.
-
Just wondering... What is a good way to manage large pages?
Let's say a page with a pannelTabbed with 7 or 8 tabs. Each tab shows a table that is bound to a different VO. Most of the tabs also have a add,delete,edit button and i'm using popups for that.
So as you can see, there will be lots of bindings and lots of components on such a page.
If I create all this on a single page, I think it will be a few thousand lines, which is not really nice...
Should I create page fragments, or create task flows per tab, or something like that?
Currently I have created the page with just the panelTabbed, and then for each tab I created a task flow and dropped that inside the showDetailItem. For each popup I also created a task flow, so I could reuse it later when I need the same popup on other pages.
I'm wondering... what is a correct approach for such large pages? Are there any guidelines for this?
Hi,
we decided to use dynamic regions (11g) for our application.
This means we only have 1 jspx for the whole application and exchange the content at runtime.
For each "block" (e.g. a table, a tab or a popup) we have a single page fragment and task flow.
One page fragment consists normaly only of one view object.
With this concept we can reuse e.g. the same (similar) table on different pages too.
Hope this helps.
regards
Peter