Swap utilization problem?
Hi
I use Oracle 10g Database Control, and every day I take a screenshot of the performance page (24 hours).
But in yesterday's screenshot, the memory utilization is 100%.
How can utilization be 100%, and what is the solution?
Hi,
'How can utilization be 100%, and what is the solution?'
Well, this is a question to which you should give the answer.
You have to know what was going on in your database.
Using DB Control, I would use the AWR feature and work with the snapshots saved in that period.
Acr
Similar Messages
-
Oracle Linux - Swap Utilization
I am running Oracle 11.2.0.2 on Oracle Linux x86 64-bit.
In looking at OEM, I see my swap utilization reported at 17.01% and I see virtual memory paging listed in the ADDM Performance and Analysis section:
Host operating system was experiencing significant paging but no particular root cause could be detected. Investigate processes that do not belong to this instance running on the host that are consuming significant amount of virtual memory. Also consider adding more physical memory to the host.
There are no additional or non-standard processes running on this server.
For those of you running similar configurations, does this seem normal or typical to you? Would you guess that I have a potential swapping problem, or is this normal or an OEM "false alarm"?
Additional info:
My server is virtual. I am using Huge Pages and I am certain my SGA is fully contained in there.
Memory Statistics
Host Mem (MB): 14,031
SGA use (MB): 8,192
PGA use (MB): 509
% Host Mem used for SGA+PGA: 62.01
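The reported percentage can be reproduced directly from the numbers above; a quick check (figures taken from this post):

```python
# Verify the "% Host Mem used for SGA+PGA" figure reported by OEM
# from the raw memory statistics above (all values in MB).
host_mem = 14031
sga = 8192
pga = 509

pct = (sga + pga) / host_mem * 100
print(f"SGA+PGA uses {pct:.2f}% of host memory")  # 62.01%, matching the report
```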
Thanks for your time.
The vmstat run below takes 1 minute to complete. As long as (so + si) is less than (bo + bi), RAM is NOT a bottleneck.
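That rule of thumb can be written as a minimal sketch; the column positions assume the Linux procps vmstat layout shown in the output below, and the function should be fed just the numeric sample rows:

```python
# Rule of thumb: if swap activity (si + so) stays below block I/O
# (bi + bo), RAM is not the bottleneck. Fields si, so, bi, bo sit at
# 0-indexed positions 6-9 in each vmstat data row.
sample = """
0 0 9776 105876 290164 1935116 0 0 33 38 228 300 9 4 86 1
2 0 9776 103364 290172 1937504 0 0 19 423 700 1585 23 3 70 4
"""

def ram_is_bottleneck(vmstat_rows):
    si = so = bi = bo = 0
    for row in vmstat_rows.split("\n"):
        fields = row.split()
        if len(fields) < 10:
            continue  # skip blank/short lines
        si += int(fields[6]); so += int(fields[7])
        bi += int(fields[8]); bo += int(fields[9])
    return (si + so) >= (bi + bo)

print(ram_is_bottleneck(sample))  # False: no swap activity at all in this sample
```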
bcm@bcm-laptop:~$ vmstat 6 10
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 9776 105876 290164 1935116 0 0 33 38 228 300 9 4 86 1
2 0 9776 103364 290172 1937504 0 0 19 423 700 1585 23 3 70 4
0 0 9776 118616 290176 1940692 0 0 0 28 962 2483 10 5 85 0
0 0 9776 113796 290180 1940728 0 0 3 126 941 2355 10 4 85 1
1 0 9776 113796 290220 1940696 0 0 0 86 786 1779 4 3 91 3
0 0 9776 113704 290236 1940696 0 0 0 20 812 1839 4 3 92 1
1 0 9776 113704 290300 1940708 0 0 1 81 755 1706 4 2 92 2
0 0 9776 108604 290324 1940716 0 0 1 56 770 1810 5 2 92 1
0 0 9776 108108 290340 1940752 0 0 1 1681 790 1797 3 3 89 5
0 0 9776 108108 290356 1940752 0 0 0 120 851 1736 4 2 92 1
bcm@bcm-laptop:~$
-
Hi,
I am running 2 database instances of Oracle 11g on Solaris 10 (SPARC T-5120).
At the OS level it seems that swap is not being used at all, but in OEM the "Swap Utilization %" shows about 80%.
bash-3.00$ swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s3 32,3 16 36877808 36877808
Could someone please help me understand these two indicators and which one I need to pay attention to?
Also, I noticed that if I increase the "memory_max_target" parameter, the "Swap Utilization%" increases.
Regards,
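One likely explanation for the discrepancy: on Solaris, OEM's "Swap Utilization %" generally tracks virtual swap accounting in the style of `swap -s` (allocated + reserved versus available), while `swap -l` shows only physical usage of the swap device. Since MEMORY_MAX_TARGET is backed by shared memory that must be reserved against swap, raising it raises the metric even when nothing is physically swapped out. This is an assumption worth verifying against your agent's metric definition. A sketch of the calculation, using hypothetical `swap -s` figures:

```python
# Hypothetical figures in KB, as from a 'swap -s' line such as:
#   total: 6000000k bytes allocated + 2000000k reserved = 8000000k used, 2000000k available
allocated_k = 6_000_000
reserved_k  = 2_000_000
available_k = 2_000_000

used_k = allocated_k + reserved_k        # what 'swap -s' reports as "used"
total_virtual_k = used_k + available_k   # total virtual swap backing
pct = used_k / total_virtual_k * 100
print(f"virtual swap utilization: {pct:.0f}%")  # 80%, while the device can still be 100% free
```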
AShumashum wrote:
I don't know how to format it to make it more readable on the post.
Any advice?
Thanks,
Alberto
SQL and PL/SQL FAQ
scroll down to #9 & do as it says
Find my second post in this thread which contains the line below
What columns are produced by your version of vmstat?
What I posted was from
man vmstat
that documents the output from the vmstat command.
you need to do the same
man vmstat
then COPY the actual description for each column of the output from vmstat & PASTE all here
be sure to wrap the text in tags -
Oracle Developer Suite 10g - Windows 7 - swap space problem
Hi brothers and sisters,
Hope you are all fine. I am facing a swap space problem while installing Oracle Developer Suite 10g on Windows 7. I have downloaded and installed Oracle Database 11g Express Edition successfully and it is working properly. Now, I have downloaded two files
a. ds_windows_x86_101202_disk1,
b. ds_windows_x86_101202_disk2
to install Oracle Developer Suite 10g on Windows 7 Ultimate. The Oracle Universal Installer has passed the following checks:
a. Operating system version,
b. Monitor
But while checking swap space, it gives the message "0 MB available, 1535 MB required" and fails. I have 2 GB RAM installed and a system-managed paging file which is currently allocated 2038 MB on my system.
Please help me solve this problem.
Noor
Increase the system virtual memory for the OS drive.
check this link
http://hamid-oracle.blogspot.com/2011/06/how-to-install-oracle-developer-suite.html
also check this link.
Re: error installing oracle forms 11g release 2
Hope this helps
If someone's response is correct or helpful, mark it accordingly, and if your question is answered, mark it as answered.
Edited by: HamidHelal on Jun 27, 2012 2:28 PM -
Swap utilization is 100 %
Hi All,
In my production system swap utilization is 100%, which is causing performance issues. When we run top and the Glance command, we can't find any work process using excessive memory; we see only 2 processes using 2 GB of memory, while we actually have 32 GB. We have also checked SM04, and no work process there is using much memory either. The detailed analysis in ST02 likewise shows only 3 GB of memory in use. In ST06 I can see the occupied physical memory; only 1 GB is free.
Could someone please help me find out where this physical memory is being used?
What's your current physical memory size?
Just set all memory related parameters according to your physical memory and re-check.
Increase your physical memory if required (contact SAP for fine tuning)
Regards,
Nick Loy -
EM alert Message : Swap Utilization is 100%
Hi,
on 10G R2, EM sends us the following alert :
Message=Swap Utilization is 100%
In the details it says: examine the applications that do not belong to this instance.
I wonder if the SGA and PGA are well tuned?
I have the following:
SQL> show sga
Total System Global Area 1476395008 bytes
Fixed Size 1251172 bytes
Variable Size 293603484 bytes
Database Buffers 1174405120 bytes
Redo Buffers 7135232 bytes
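As a quick sanity check, the four components reported by `show sga` should sum exactly to the reported total (values in bytes, from the output above):

```python
# The 'show sga' components always add up to "Total System Global Area".
fixed    = 1_251_172
variable = 293_603_484
buffers  = 1_174_405_120
redo     = 7_135_232

total = fixed + variable + buffers + redo
print(total)  # 1476395008, matching the reported total
```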
SQL> show parameter target
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 2880
fast_start_io_target integer 0
fast_start_mttr_target integer 0
pga_aggregate_target big integer 1100M
sga_target big integer 5504M
Many thanks.
Hi,
today RMAN backup failed with :
ORA-04030: out of process memory when trying to allocate 2457618 bytes (pga heap, zbits_kgcstate)
Might this error be related to the EM alert:
Swap Utilization is 100%?
According to Metalink note 373602.1 we should be using automatic SGA to avoid ORA-04030, and we are. Then why the ORA-04030?
SQL> SHOW parameter target
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 2880
fast_start_io_target integer 0
fast_start_mttr_target integer 0
pga_aggregate_target big integer 1100M
sga_target big integer 5504M
Thanks before. -
Swap Utilization is 100% - cant clear alert
We have had the "Swap Utilization is 100%, crossed warning (35) or critical ( ) threshold" alert hanging around for a month now and I can't clear it.
The server has been rebooted and there is no swapping.
I found this http://www.ora-solutions.net/web/2010/11/12/grid-control-11g-agent-metric-swap-utilization-on-hp-ux-with-pseudo-swap/
but it's not for our platform or our Grid Agent version!
Any ideas please?
Linux RHEL 5
Oracle Enterprise Manager 10g Release 5 Grid Control 10.2.0.5.0
If you find that metric via the host it is configured on, just clear the fields that are there for that metric.
Then when you can see those alerts cleared, re-enter the values and carry on regardless.
DA -
Oracle Coherence increasing Swap Utilization
We are using Oracle Coherence on Linux servers. However, we noticed that because of the Coherence processes running, our swap utilization % often increases too much, sometimes exceeding 98% and even touching 100% a few times.
Once we kill all the Coherence-related processes, it returns to normal.
Is there any way we can make Coherence processes use only a particular amount of swap space?
Currently increasing swap space is not in our scope.
Please suggest.
Edited by: user7761515 on May 3, 2012 11:29 AM
Hi,
We are using Oracle Coherence on Linux servers. However, we noticed that because of the Coherence processes running, our swap utilization % often increases too much, sometimes exceeding 98% and even touching 100% a few times.
Swapping in this range (1%-100%) is not a good sign and should be avoided by ensuring you have sufficient memory, so that you are not making active use of swap space on your machines. Active use of swap space causes significant performance degradation.
Is there any way we can make Coherence processes use only a particular amount of swap space?
Manage your memory by allocating heap using -Xmx for the Coherence JVMs. You need to ensure that sufficient RAM is available on the server for the Coherence JVMs and the other operating system processes, so that they do not consume all the RAM.
To temporarily set the swappiness, as the root user echo a value into /proc/sys/vm/swappiness. The following command sets swappiness to 0:
echo 0 > /proc/sys/vm/swappiness
To set the value permanently, modify the /etc/sysctl.conf file.
Hope this helps!
Cheers,
NJ -
Available swap space problem in solaris 10
Dear All,
Currently I am facing a most interesting problem regarding swap space.
We have assigned a mirrored slice as swap space with a size of 20G. But when we check the swap space, it only shows around 2GB.
1. Here is the swap area:
root@palash # swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d1 85,1 16 41945456 35002784
root@palash # swap -s
total: 53562072k bytes allocated + 5689008k reserved = 59251080k used, 615704k available
2. Here is the metastat output:
root@palash # metastat d1
d1: Mirror
Submirror 0: d11
State: Okay
Submirror 1: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 41945472 blocks (20 GB)
d11: Submirror of d1
State: Okay
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t0d0s1 0 No Okay Yes
d21: Submirror of d1
State: Okay
Size: 41945472 blocks (20 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t1d0s1 0 No Okay Yes
3. Here is /etc/vfstab entry for swap:
/dev/md/dsk/d1 - - swap - no -
4. But when we checked swap space using the df command:
root@palash # df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 18G 7.7G 9.9G 44% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 1.7G 1.7M 1.7G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
18G 7.7G 9.9G 44% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
18G 7.7G 9.9G 44% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 1.7G 184K 1.7G 1% /tmp
swap 1.7G 64K 1.7G 1% /var/run
/dev/dsk/c2t40d0s4 20G 20M 19G 1% /redo2
/dev/dsk/c2t40d0s1 276G 223G 50G 82% /oradata2
/dev/dsk/c3t40d0s3 195G 198M 193G 1% /restore
/dev/dsk/c2t40d0s0 276G 216G 57G 80% /oradata1
/dev/dsk/c2t40d0s5 59G 60M 58G 1% /system1
/dev/dsk/c3t40d0s0 276G 220G 53G 81% /index1
/dev/md/dsk/d3 30G 24G 5.7G 81% /oracle
/dev/dsk/c3t40d0s1 197G 53G 142G 28% /archive1
/dev/dsk/c2t40d0s3 20G 20M 19G 1% /redo1
/vol/dev/dsk/c0t6d0/disk1
0K 0K 0K 0% /cdrom/disk1
Please help me to identify the reason.
918597 wrote:
Hello,
Thanks for your reply.
But I am not clear about your findings, as we have around 64GB of physical RAM in my machine.
My question is: if we mount a 20GB swap partition, then why do we see less than 2 GB in the df -h output?
And even the swap -s command shows the same problem.
What might be the reason behind this?
//Palash
Well, no-one else has answered ... so it's back to me ...
Hmmm ... I would use the word observations rather than findings ... I am not on an exploration expedition all over your server ... merely observing the morsels you show me ...
Please be aware I may not be totally technically correct or using the right terminology in what follows, so I welcome corrections ...
Your machine has 64GB memory and a 20GB swap file, and therefore can support up to 84GB total (virtual) memory for its processes/buffers.
The reason little swap is free (let us say 2GB ... though it may be 600MB) is that processes/buffers have a virtual memory requirement of 84GB - 2GB = 82GB.
... So rather than wondering how come 2GB is left, start thinking about how 82GB is being used.
...... Particularly with databases (Oracle RDBMS, MySQL, etc.) a big influencing factor can be how much memory is allocated to their memory structures.
.......... And a DBA may set these extremely high (memory_max / innodb_buffer_pool_size etc.).
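Following the reply's own rough arithmetic, the accounting can be written out explicitly (figures in GB, from this thread):

```python
# 64GB RAM + 20GB swap gives 84GB of virtual memory available to
# processes/buffers; if only ~2GB of swap remains free, roughly
# 82GB has been committed somewhere.
ram = 64
swap = 20
swap_free = 2

total_virtual = ram + swap
committed = total_virtual - swap_free
print(committed)  # 82
```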
ps -ef will show the (virtual) 'size' for individual processes;
prstat -a may help show what is going on (but may double-count some things);
ipcs -a would show the allocation for the Oracle RDBMS memory area;
echo ::memstat | mdb -k ### may help ... but I have seen accounts of it taking ages to run.
I'd also check kstat zfs ... but you're not using ZFS, so no need to bother.
.... You may need to show some evidence of how your applications are consuming virtual memory for someone to help you further ... but if you do this, you may answer your own question. -
I've encountered a problem with log file utilization during a somewhat long transaction in which some data is inserted into a StoredMap.
I've set the minUtilization property to 75%. During insertion, things seem to go smoothly, but at one point log files are created WAY more rapidly than the amount of data would call for. The test involves inserting 750K entries for a total of 9 MB; the total size of the log files is 359 MB. Using DbSpace shows that the first few log files use approx 65% of their space, but most use only 2%.
I understand that during a transaction the Cleaner may not clean the log files involved. What I don't understand is why most of the log files are only 2% used:
File Size (KB) % Used
00000000 9763 56
00000001 9764 68
00000002 9765 68
00000003 9765 69
00000004 9765 69
00000005 9765 69
00000006 9765 68
00000007 9765 70
00000008 9764 68
00000009 9765 61
0000000a 9763 61
0000000b 9764 25
0000000c 9763 2
0000000d 9763 1
0000000e 9763 2
0000000f 9763 1
00000010 9764 2
00000011 9764 1
00000012 9764 2
00000013 9764 1
00000014 9764 2
00000015 9763 1
00000016 9763 2
00000017 9763 1
00000018 9763 2
00000019 9763 1
0000001a 9765 2
0000001b 9765 1
0000001c 9765 2
0000001d 9763 1
0000001e 9765 2
0000001f 9765 1
00000020 9764 2
00000021 9765 1
00000022 9765 2
00000023 9765 1
00000024 9763 2
00000025 7028 2
TOTALS 368319 21
I've created a test class that reproduces the problem. It might be possible to simplify it further, but I haven't had time to work on it too much.
Executing this test with 500K values does not reproduce the problem. Can someone please help me shed some light on this issue?
I'm using 3.2.13 and the following properties file:
je.env.isTransactional=true
je.env.isLocking=true
je.env.isReadOnly=false
je.env.recovery=true
je.log.fileMax=10000000
je.cleaner.minUtilization=75
je.cleaner.lookAheadCacheSize=262144
je.cleaner.readSize=1048576
je.maxMemory=104857600
Test Class
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import com.sleepycat.bind.EntityBinding;
import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.tuple.StringBinding;
import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.collections.CurrentTransaction;
import com.sleepycat.collections.StoredMap;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
public class LogFileTest3 {
    private long totalSize = 0;
    private Environment env;
    private Database myDb;
    private StoredMap storedMap_ = null;

    public LogFileTest3() throws DatabaseException, FileNotFoundException, IOException {
        Properties props = new Properties();
        props.load(new FileInputStream("test3.properties"));
        EnvironmentConfig envConfig = new EnvironmentConfig(props);
        envConfig.setAllowCreate(true);
        File envDir = new File("test3");
        if (envDir.exists() == false) {
            envDir.mkdir();
        }
        env = new Environment(envDir, envConfig);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        dbConfig.setSortedDuplicates(false);
        myDb = env.openDatabase(null, "testing", dbConfig);
        EntryBinding keyBinding = TupleBinding.getPrimitiveBinding(String.class);
        EntityBinding valueBinding = new TestValueBinding();
        storedMap_ = new StoredMap(myDb, keyBinding, valueBinding, true);
    }

    public void cleanup() throws Exception {
        myDb.close();
        env.close();
    }

    private void insertValues(int count) throws DatabaseException {
        CurrentTransaction ct = CurrentTransaction.getInstance(this.env);
        try {
            ct.beginTransaction(null);
            int i = 0;
            while (i < count) {
                TestValue tv = createTestValue(i++);
                storedMap_.put(tv.key, tv);
                System.out.println("Written " + i + " values for a total of " + totalSize + " bytes");
            }
            ct.commitTransaction();
        } catch (Throwable t) {
            System.out.println("Exception " + t);
            t.printStackTrace();
            ct.abortTransaction();
        }
    }

    private TestValue createTestValue(int i) {
        TestValue t = new TestValue();
        t.key = "key_" + i;
        t.value = "value_" + i;
        return t;
    }

    public static void main(String[] args) throws Exception {
        LogFileTest3 test = new LogFileTest3();
        if (args[0].equalsIgnoreCase("clean")) {
            while (test.env.cleanLog() != 0);
        } else {
            test.insertValues(Integer.parseInt(args[0]));
        }
        test.cleanup();
    }

    static private class TestValue {
        String key = null;
        String value = null;
    }

    private class TestValueBinding implements EntityBinding {
        public Object entryToObject(DatabaseEntry key, DatabaseEntry entry) {
            TestValue t = new TestValue();
            t.key = StringBinding.entryToString(key);
            t.value = StringBinding.entryToString(entry);
            return t;
        }

        public void objectToData(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.value, entry);
            totalSize += entry.getSize();
        }

        public void objectToKey(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.key, entry);
        }
    }
}
Yup, that solves the issue. By doubling the
je.maxMemory property, I've made the problem
disappear.
Good!
How large is a lock on a 64-bit architecture?
Here's the complete picture for read and write locks. Read locks are taken on get() calls without LockMode.RMW, and write locks are taken on get() calls with RMW and on all put() and delete() calls.
Arch  Read Lock  Write Lock
32b   96B        128B
64b   176B       216B
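Those per-lock figures suggest why the original test overran its cache: a single transaction inserting 750K records holds 750K write locks until commit, and at the 64-bit write-lock size quoted above that alone exceeds the configured je.maxMemory of 104857600 bytes. A rough estimate:

```python
# Estimate the lock-table memory held by one big transaction, using
# the 64-bit write-lock size quoted above (216 bytes per lock).
records = 750_000
write_lock_bytes = 216
je_max_memory = 104_857_600  # the je.maxMemory from the posted properties

lock_memory = records * write_lock_bytes
print(lock_memory)                  # 162000000 bytes, about 154 MB
print(lock_memory > je_max_memory)  # True: locks alone exceed the cache
```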
I'm setting the je.maxMemory property because I'm dealing with many small JE environments in a single VM. I don't want each opened environment to use 90% of the JVM RAM...
OK, I understand.
I've noticed that the je.maxMemory property is mutable at runtime. Would setting a large value before long transactions (and resetting it after) be a feasible solution to my problem? Do you see any potential issues with doing this?
We made the cache size mutable for just this sort of use case, so this is probably worth trying. Of course, to avoid OutOfMemoryError you'll have to reduce the cache size of other environments if you don't have enough unused space in the heap.
Is there a way for me to have JE lock multiple records at the same time? I mean, have it create one lock for an insert batch instead of one for every item in the batch...
Not currently. But speaking of possible future changes, there are two things that may be of interest to you:
1) For large transaction support we have discussed the idea of providing a new API that locks an entire Database. While a Database is locked by a single transaction, no individual record locks would be needed. However, all other transactions would be blocked from using the Database. More specifically, a Database read lock would block other transactions from writing and a Database write lock would block all access by other transactions. This is the equivalent of "table locking" in relational DBs. This is not currently high on our priority list, but we are gathering input on this issue. We are interested in whether or not a whole Database lock would work for you -- would it?
2) We see more and more users like yourself that open multiple environments in a single JVM. Although the cache size is mutable, this puts the burden of efficient memory management onto the application. To solve this problem, we intend to add the option of a shared JE cache for all environments in a JVM process. The entire cache would be managed by an LRU algorithm, so if one environment needs more memory than another, the cache dynamically adjusts. This is high on our priority list, although per Oracle policy I can't say anything about when it will be available.
Besides increasing je.maxMemory, do you see any other solution to my problem?
Use smaller transactions. ;-) Seriously, if you have not already ruled this out, you may want to consider whether you really need an atomic transaction. We also support non-transactional access and even a non-locking mode for off-line bulk loads.
Thanks a bunch for your help!
You're welcome!
Mark -
CPU utilization problem with JSF
Hi,
I am using MyFaces 1.1.4 and tomahawk 1.1.3.
I am using EJB 2.0 and weblogic server 9.2
I have a problem with the performance of my application. The response time for 500 concurrent users is around 10 seconds, and the CPU utilization normally hits 95-97%.
I have disabled all the logging to reduce the file IO. Also, all the code has been optimised. Still the problem exists. Now, I cannot think of any way to overcome this. My project is getting delayed because of this.
Request you all to pls help.
Regards,
Milan.
What type of server or servers are you using? I am assuming (which I hate to do) that your EJBs are for CRUD and that you use a connection pool. Have you tried optimizing the DB connection pool?
-
My N8 had to be repaired, but they gave me a new one (1 year ago) under guarantee. Now that Belle has rolled out, I discovered that the product code is 059B6D7, which I figured out through these boards is a swap phone.
Now I contacted my carrier Telenor; they said I needed to contact Nokia Support, which I did. Calling them left me with the reply "Be patient!" in a somewhat rude tone, sounding like he had been asked questions about Belle many times that day. I don't blame him.
I called again to see if I could get hold of another person; he was a bit more helpful, but still tried to get me to update through Nokia Suite even when I explained gently that it was a swap phone that shouldn't even be out in public. He left me with this: "Contact a Nokia Care point." Well, okay, I said.
Problem is, I'm in Norway. There are no Nokia Care Points.
Dudebroman, I understand your pain. I too have a "swap" code phone, after I had to have its motherboard replaced because the xenon flash just "burned out" and the camera got really dark and couldn't shoot decent photos, not even in broad daylight. I found out now that they gave me a motherboard with the 059B288 SWAP code. I'm actually lucky that there are quite a few Nokia "care" centers in my country (Romania), but I still had to walk a few hours through 1m of snow to get to one. I asked them whether they could install Belle on my phone and even change this damn code so I don't have to return to them EVERY time I have to update or reinstall the OS after I format the phone. They told me that they can't change the code, because it's the motherboard code which can't be changed no matter what, and that they'll try to update the phone.
I really have no idea what the use of all this is. I thought it may be to combat counterfeiting, but then I thought that those codes all come from motherboards that Nokia themselves produced, so why not enable updates for all of them? I hope they do, eventually; if not, someone will probably find a way to crack that code and let you change it. I have seen, on the internet, programs that are supposed to change the product ID of the motherboard to whatever you choose; people say this works for lots of phones, but not for the N8 yet.
MODERATORS NOTE: The offensive language has been removed. Offensive language is not allowed on the forums. Please do not use offensive language on the forums -
Very urgent-Monthly utilization problem
Hi All
I am having a problem when trying to do the monthly utilization.
When I execute T-Code J2IUN, I select the plant and excise group.
After that it displays:
RG23ABED
RG23CBED
RG23ASED
RG23CSED
Correctly, that is, showing all positive amounts.
But PLA BED is showing a negative amount.
We have checked with the ABAPer; he told us that it is taking the current fiscal year amount and is not taking the previous year, i.e. 2007-2008.
What can be the possible solution?
Awaiting a quick response.
Points will be duly awarded for answers.
regds
sahilesh
Dear Shailesh,
The balance shown in the J2IUN transaction is for all the business areas of a particular company code. When you are doing a utilization for a particular excise group and for a particular period (say, today), the system will show the PLA balance for all the business areas in J2IUN for the fiscal year 2008, if your fiscal year variant is from April to March. So check the G/L balances of the duties which are shown correctly, as well as the PLA G/L balance, and also check your PLA G/L account assignment in
SPRO/Logistics general/Tax on goods movements/Account determination
Thanks & Regards,
SAP FC -
Dreamweaver Firefox swap image problem
I have a navigation bar that uses swap images and hidden
layers. It worked from spring 2007 until recently. Now in Firefox
nothing happens when the mouse goes over the image. I have been
told the same problem has recently begun with Safari. It still
works fine in Internet Explorer. What has changed and how can I fix
this problem?
My website is: www.fumcpalestine.com
My problem has been resolved. It was caused by the z-index of the heading overlaying the z-index of the navigation bar, in case others have the same problem.
There is an error in your doctype declaration that is likely making IE soil itself.
Change...
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
to...
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
And see if that helps. -
Hi all,
I am having a problem on a BI 7.0 server: the swap memory indicator goes RED... I know some of the variables and parameters, but I am not sure how to calculate them (based on 12GB physical memory).
Screenshot here... if anyone know how to fix that I really appriciated
http://www.flickr.com/photos/38842895@N04/4538676180/sizes/o/
Thanks in advance.
Double-click on the red highlighted swap, then click on current parameters.
Here you can see which parameters designate the memory for this buffer area. Once the buffer is exceeded, it begins to SWAP.
This is not a bad thing unless you are seeing performance problems; most performance problems are far deeper than this.
You now need to research the overall memory usage and decide whether to increase the parameter and possibly decrease others.
Use ST06 as well for overall swap usage amounts.
There is much to understand but you can start by reading here:
http://help.sap.com/saphelp_nw70/helpdata/en/34/d9c8b9c23c11d188b40000e83539c3/content.htm
Edited by: Augustus Cristicini on Apr 20, 2010 10:05 PM