Memory-hungry Linux

When my system had 512M of RAM, Linux (with GNOME as the desktop) used about 98% of it (top tells me the used part was 500M). So I upgraded and bought another 512M stick. Now top shows 1000M in use. What is taking all that memory? And why does every new stick of RAM I put in get swallowed up? In Windows XP, my memory usage is a constant 200-300M. Is top buggy?

OK, I understand. So I guess this is normal:
Mem: 1033632k total, 1000600k used, 33032k free, 106088k buffers
Swap: 265064k total, 8k used, 265056k free, 618108k cached
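(Most of that "used" figure is the kernel's buffers and page cache, which get handed back to applications on demand. A minimal sketch of the arithmetic, using the /proc/meminfo fields behind the numbers above:)
awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2}
     END{printf "apps: %d MB, buffers+cache: %d MB, free: %d MB\n",
         (t-f-b-c)/1024, (b+c)/1024, f/1024}' /proc/meminfo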
But look at this: Firefox is a memory glutton (just one browser window open):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5087 david 15 0 53376 36m 31m S 0.0 3.6 1:24.92 firefox-bin
5088 david 16 0 53376 36m 31m S 0.0 3.6 0:00.00 firefox-bin
5089 david 16 0 53376 36m 31m S 0.0 3.6 0:00.22 firefox-bin
5091 david 15 0 53376 36m 31m S 0.0 3.6 0:00.66 firefox-bin
Why four processes? I don't know.
I like XFCE4; I might just switch.

Similar Messages

  • Error message: ORA-27125: unable to create shared memory segment Linux-x86_

    Hi,
    I am doing an installation of SAP NetWeaver 2004s SR3 on SUSE Linux 11 / Oracle 10.2,
    but I am facing the following issue in the Create Database phase of SAPinst:
    An error occurred while processing service SAP NetWeaver 7.0 Support Release 3 > SAP Systems > Oracle > Central System > Central System (last error reported by the step: Caught ESAPinstException in Modulecall: ORA-27125: unable to create shared memory segment; Linux-x86_64 Error: 1: Operation not permitted). Disconnected.
    Please help me to resolve the issue.
    Thanks,
    Nishitha

    Hi Ratnajit,
    I am also facing the same error, but my Oracle is not starting.
    Here are the results of the following command:
    cat /etc/sysctl.conf
    # created by /sapmnt/pss-linux/scripts/sysctl.pl on Wed Oct 23 22:55:01 CEST 2013
    fs.inotify.max_user_watches = 65536
    kernel.randomize_va_space = 0
    ##kernel.sem = 1250 256000 100 8192
    kernel.sysrq = 1
    net.ipv4.conf.all.promote_secondaries = 1
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.promote_secondaries = 1
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    net.ipv4.neigh.default.gc_thresh1 = 256
    net.ipv4.neigh.default.gc_thresh2 = 1024
    net.ipv4.neigh.default.gc_thresh3 = 4096
    net.ipv6.neigh.default.gc_thresh1 = 256
    net.ipv6.neigh.default.gc_thresh2 = 1024
    net.ipv6.neigh.default.gc_thresh3 = 4096
    vm.max_map_count = 2000000
    # Modified for SAP on 2013-10-24 07:14:17 UTC
    #kernel.shmall = 2097152
    kernel.shmall = 16515072
    # Modified for SAP on 2013-10-24 07:14:17 UTC
    #kernel.shmmax = 2147483648
    kernel.shmmax = 67645734912
    kernel.shmmni = 4096
    # semaphores: semmsl, semmns, semopm, semmni
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    And here is my limits.conf file:
    cat /etc/security/limits.conf
    #<domain>      <type>  <item>         <value>
    #*               soft    core            0
    #*               hard    rss             10000
    #@student        hard    nproc           20
    #@faculty        soft    nproc           20
    #@faculty        hard    nproc           50
    #ftp             hard    nproc           0
    #@student        -       maxlogins       4
    # Added for SAP on 2012-03-14 10:38:15 UTC
    #@sapsys          soft    nofile          32800
    #@sapsys          hard    nofile          32800
    #@sdba            soft    nofile          32800
    #@sdba            hard    nofile          32800
    #@dba             soft    nofile          32800
    #@dba             hard    nofile          32800
    # End of file
    # Added for SAP on 2013-10-24
    #               soft    nproc   2047
    #               hard    nproc   16384
    #               soft    nofile  1024
    #               hard    nofile  65536
    @sapsys                 soft   nofile          131072
    @sapsys                 hard   nofile         131072
    @sdba                  soft  nproc          131072
    @sdba                  hard   nproc         131072
    @dba                 soft    core           unlimited
    @dba                 hard     core          unlimited
                      soft     memlock       50000000
                      hard     memlock       50000000
    Here is my cat /proc/meminfo output:
    MemTotal:       33015980 kB
    MemFree:        29890028 kB
    Buffers:           82588 kB
    Cached:          1451480 kB
    SwapCached:            0 kB
    Active:          1920304 kB
    Inactive:         749188 kB
    Active(anon):    1136212 kB
    Inactive(anon):    39128 kB
    Active(file):     784092 kB
    Inactive(file):   710060 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:      33553404 kB
    SwapFree:       33553404 kB
    Dirty:              1888 kB
    Writeback:             0 kB
    AnonPages:       1135436 kB
    Mapped:           161144 kB
    Shmem:             39928 kB
    Slab:              84096 kB
    SReclaimable:      44400 kB
    SUnreclaim:        39696 kB
    KernelStack:        2840 kB
    PageTables:        10544 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    50061392 kB
    Committed_AS:    1364300 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      342156 kB
    VmallocChunk:   34359386308 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:    622592 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:       67584 kB
    DirectMap2M:    33486848 kB
    Please let me know where I am going wrong.
    What do you basically check in the /proc/meminfo output?
    Regards,
    Dipak
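    (Not from the original thread, but on the question of what to check: for shared-memory errors like this one, the figures usually compared are MemTotal and the HugePages_* lines from /proc/meminfo against the kernel shared memory limits and the intended SGA size. A rough sketch of that comparison; the SGA size below is a placeholder, not a value from this system:)
    SGA_BYTES=$((20 * 1024 * 1024 * 1024))   # placeholder: substitute the instance's real SGA size
    PAGE_SIZE=$(getconf PAGE_SIZE)
    SHMMAX=$(cat /proc/sys/kernel/shmmax)    # largest single shared memory segment, in bytes
    SHMALL=$(cat /proc/sys/kernel/shmall)    # total shared memory allowed, in pages
    [ "$SGA_BYTES" -le "$SHMMAX" ]                 || echo "SGA exceeds kernel.shmmax"
    [ "$((SGA_BYTES / PAGE_SIZE))" -le "$SHMALL" ] || echo "SGA exceeds kernel.shmall"
    ipcs -lm                                 # the limits as the kernel currently reports them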

  • Oracle taking too much memory in Linux

    Hi, I am using Oracle on Linux 7.5.
    No active sessions are running on the server for Oracle,
    but Oracle is taking too much memory on Linux when I check with the top command:
    16:08:29 up 19 days, 20:14, 2 users, load average: 5.32, 4.63, 3.78
    227 processes: 226 sleeping, 1 running, 0 zombie, 0 stopped
    CPU states: cpu user nice system irq softirq iowait idle
    total 2.5% 0.0% 2.9% 0.4% 0.0% 91.8% 2.2%
    cpu00 3.8% 0.0% 3.8% 0.0% 0.0% 92.2% 0.0%
    cpu01 0.4% 0.0% 0.0% 0.0% 0.0% 99.5% 0.0%
    cpu02 0.8% 0.0% 4.7% 1.7% 0.0% 85.7% 6.8%
    cpu03 5.1% 0.0% 3.0% 0.0% 0.0% 89.6% 2.1%
    Mem: 8203404k av, 8186192k used, 17212k free, 0k shrd, 7292k buff
    6266516k actv, 1275016k in_d, 136704k in_c
    Swap: 6289384k av, 10528k used, 6278856k free 7602904k cached
    PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
    6224 oracle 15 0 1007M 1.0G 998M D 0.0 12.5 30:45 1 oracle
    10628 oracle 15 0 1000M 998M 998M S 0.0 12.4 0:16 0 oracle
    6230 oracle 15 0 945M 945M 943M S 0.0 11.7 82:33 1 oracle
    10188 oracle 15 0 944M 942M 931M S 0.0 11.7 3:02 1 oracle
    19128 oracle 15 0 909M 901M 899M S 0.0 11.2 2:49 2 oracle
    6864 oracle 15 0 886M 886M 877M S 0.0 11.0 6:12 3 oracle
    18586 oracle 22 0 883M 882M 880M D 1.9 11.0 1:15 0 oracle
    6870 oracle 15 0 433M 432M 431M S 0.0 5.4 0:21 2 oracle
    6421 oracle 15 0 337M 336M 335M S 0.0 4.2 1:07 2 oracle
    8478 oracle 15 0 272M 271M 269M S 0.0 3.3 0:12 1 oracle
    18250 oracle 15 0 256M 255M 252M S 0.0 3.1 0:38 0 oracle
    6876 oracle 15 0 251M 250M 249M S 0.0 3.1 1:03 0 oracle
    17926 oracle 15 0 225M 224M 223M S 0.0 2.8 0:01 1 oracle
    19320 oracle 15 0 197M 196M 195M S 0.0 2.4 0:23 1 oracle
    18116 oracle 15 0 168M 167M 165M S 0.0 2.0 0:04 2 oracle
    19596 oracle 15 0 159M 158M 155M D 2.2 1.9 0:17 0 oracle
    6228 oracle 15 0 156M 155M 142M S 0.0 1.9 12:05 1 oracle
    18902 oracle 15 0 81612 71M 48364 S 0.0 0.8 3:07 3 oracle
    19286 oracle 15 0 48420 46M 46412 S 0.0 0.5 0:03 0 oracle
    6222 oracle 15 0 41496 40M 39684 S 0.0 0.4 9:38 3 oracle
    19602 oracle 15 0 40324 38M 38164 S 0.0 0.4 0:00 2 oracle
    5962 mysql 25 0 39168 38M 88 S 0.0 0.4 0:00 1 mysqld
    5963 mysql 15 0 39168 38M 88 S 0.0 0.4 0:08 0 mysqld
    5964 mysql 20 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    5965 mysql 25 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    5966 mysql 25 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    5967 mysql 20 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    5969 mysql 15 0 39168 38M 88 S 0.0 0.4 2:16 2 mysqld
    5970 mysql 15 0 39168 38M 88 S 0.0 0.4 2:31 1 mysqld
    5971 mysql 24 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    5972 mysql 20 0 39168 38M 88 S 0.0 0.4 0:00 0 mysqld
    6868 oracle 15 0 27312 26M 15204 S 0.0 0.3 8:21 3 oracle
    19554 oracle 15 0 20920 19M 18192 S 0.0 0.2 0:00 3 oracle
    6874 oracle 15 0 16904 16M 15400 S 0.0 0.2 1:15 1 oracle
    19444 oracle 15 0 13636 12M 12004 S 0.0 0.1 0:00 1 oracle
    19442 oracle 15 0 12964 12M 11324 S 0.0 0.1 0:00 1 oracle
    19474 oracle 15 0 12708 12M 11076 S 0.0 0.1 0:00 2 oracle
    Why is Oracle taking so much memory?
    Is there any solution for this? Because of it, my updates take too much time.
    Edited by: harshalpatil on Oct 13, 2008 4:11 PM

    Harsh,
    1) What version of Oracle are you running on the 7.5 release of Linux?
    2) How did you come to the conclusion that your update is slow because of this?
    Cheers
    Aman....
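    One note when reading top output like this, offered as a sketch rather than a thread answer: every Oracle server process attaches the same SGA, so the shared pages are counted again in each RES value and the column cannot simply be summed. Listing the shared memory segments shows the SGA once:
    ipcs -m
    # rough total of System V shared memory actually allocated (column layout may vary slightly across distributions)
    ipcs -m | awk '$5 ~ /^[0-9]+$/ {sum += $5} END {printf "total shared memory: %.1f MB\n", sum/1048576}'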

  • Dynamic Memory on Linux VM

    Hello!
    Hyper-V 3.0 is great! Once it is released, I think it will become the most popular hypervisor. But one major drawback remains:
    support for dynamic memory for Linux VMs on Hyper-V has not been announced anywhere.
    Is there any plan, even long-term, to implement this functionality?
    For now we have to use two different hypervisors, as Hyper-V does not meet all of our customers' requirements.
    Mark Tepterev
    Oversun

    P.P.P.S. (moved here), question from Brian Wong:
    ----- Original Message -----
    From: "Brian Wong"
    To: <[email protected]>
    Sent: Thursday, March 06, 2014 9:24 AM
    Subject: Re: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled
    On 3/6/2014 1:20 AM, Brian Wong wrote:
    > The kernel is built with the full set of Hyper-V drivers, including the
    > key "Microsoft Hyper-V Balloon Driver" as well as memory hot-add and
    > hot-remove functionality. This is happening with both the Gentoo-patched
    > 3.10.32 kernel and the vanilla 3.12.5 kernel. The host machine has a
    > total of 24 GB of memory.
    >
    > For now, I am working around the issue by starting the VM with the
    > startup memory set to the maximum and letting Hyper-V take the unused
    > memory back when it is not in use. The VM will then get the extra memory
    > when it needs it.
    >
    > Have I encountered a bug in the Hyper-V balloon driver?
    >
    Just a correction: the vanilla kernel version is 3.13.5, not 3.12.5.
    Sorry for any confusion.
    Brian Wong
    http://www.fierydragonlord.com
    ----- Original Message -----
    From: "Brian Wong"
    To: <[email protected]>
    Sent: Thursday, March 06, 2014 9:20 AM
    Subject: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled
    I'm new to LKML, so please don't be too hard on me :)
    I'm running Gentoo Linux under Microsoft Client Hyper-V on Windows 8.1
    Pro, and I've noticed some odd behavior with respect to dynamic memory
    (aka memory ballooning). The system will never use more than the startup
    memory defined in the virtual machine's settings.
    (VVM: typo in "virtual" fixed by me, for better searchability in the future)
    For example, if I set the startup memory to 512 MB, and enable dynamic
    memory with a minimum of 512 MB and a maximum of 8192 MB, the system
    will never allocate more than 512 MB of physical memory, despite Hyper-V
    assigning more memory to the VM and the added memory being visible in
    the output of "free" and "htop". Attempting to use more memory causes
    the system to start paging to swap, rather than actually allocating the
    memory above the startup memory assigned to the VM.
    The kernel is built with the full set of Hyper-V drivers, including the
    key "Microsoft Hyper-V Balloon Driver" as well as memory hot-add and
    hot-remove functionality. This is happening with both the Gentoo-patched
    3.10.32 kernel and the vanilla 3.12.5 kernel. The host machine has a
    total of 24 GB of memory.
      Brian Wong wrote On 3/6/2014 1:20 AM:
     Just a correction: the vanilla kernel version is 3.13.5, not 3.12.5. )
    For now, I am working around the issue by starting the VM with the
    startup memory set to the maximum and letting Hyper-V take the unused
    memory back when it is not in use. The VM will then get the extra memory
    when it needs it.
    Have I encountered a bug in the Hyper-V balloon driver?
    Brian Wong
    http://www.fierydragonlord.com
    ----- Original Message -----
    From: "Victor Miasnikov"
    To:  [email protected]; "Brian Wong"
    Cc: "Abhishek Gupta (LIS)" ( zzzzzzzzzzzzzzz (at) microsoft.com>; "KY Srinivasan" zzzzzzzzzzz (at) microsoft.com
    Sent: Thursday, March 06, 2014 1:07 PM
    Subject: Re: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled RE: [PATCH 2/2]
    Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    Hi!
     Short version:
     Question to the Linux kernel team:
    could the patch
    >>> [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    solve the problems with dynamic memory hot-add in Hyper-V VMs running Linux?
     Full:
    BW>> .., if I set the startup memory to 512 MB, and enable dynamic
    BW>> memory with a minimum of 512 MB and a maximum of 8192 MB,
    BW>>  the system will never allocate more than 512 MB of physical memory
    BW>>
    BW>> Have I encountered a bug in the Hyper-V balloon driver?
    BW>>
     Unfortunately, it's a long story . . . :-(
    a)
     I already wrote (on January 09, 2014, 2:18 PM) about problems with "Online the hot-added memory" in "user space"; see
    P.P.S.
    b)
      See
    Bug 979257 - [Hyper-V][RHEL6.5][RFE] in-kernel online support for memory hot-add
    https://bugzilla.redhat.com/show_bug.cgi?id=979257
     (Info from this topic may be interesting not only for Red Hat users.)
    b2)
     Details about the patches related to the "Online the hot-added memory" in "user space" problem:
    >>> [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    >>>
    >>>
    >>> === 0001-Drivers-base-memory-Export-functionality-for-in-kern.patch
    >>>  . . .
    >>> +/*
    >>> + * Given the start pfn of a memory block; bring the memory
    >>> + * block online. This API would be useful for drivers that may
    >>> + * want to bring "online" the memory that has been hot-added.
    >>> + */
    >>> +
    >>> +int online_memory_block(unsigned long start_pfn) {  struct mem_section
    >>> +*cur_section;  struct memory_block *cur_memory_block;
    >>>
    >>>  . . .
    >>> ===
    >>>
    >>>
    >>> ==
    >>>  . . .
    >>> == 0002-Drivers-hv-balloon-Online-the-hot-added-memory-in-co.patch
    >>>   . . .
    >>>    /*
    >>> -   * Wait for the memory block to be onlined.
    >>> -   * Since the hot add has succeeded, it is ok to
    >>> -   * proceed even if the pages in the hot added region
    >>> -   * have not been "onlined" within the allowed time.
    >>> +   * Before proceeding to hot add the next segment,
    >>> +   * online the segment that has been hot added.
    >>>     */
    >>> -  wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
    >>> +  online_memory_block(start_pfn);
    >>>
    >>>   }
    c)
      Until the patches are applied (see the P.S. about the native udev script), we need to use one of these methods:
     [ VVM: URL of this topic skipped ]
    c1)
    The "/bin/cp method" by Nikolay Pushkarev:
    The following udev rule works slightly faster for me (assuming that the memory0 bank is always in the online state):
    SUBSYSTEM=="memory", ACTION=="add", DEVPATH=="/devices/system/memory/memory[1-9]*",
    RUN+="/bin/cp /sys$devpath/../memory0/state /sys$devpath/state"
    (VVM: of course it all needs to be placed on one line; it is split across two lines in this message only for visual formatting.)
    c2)
    udev rule using "putarg" by Nikolay Pushkarev:
     Even the "/bin/cp method" udev rule does not always work as needed :-(
    As a result, Nikolay Pushkarev wrote a program, putarg.c, that is even faster than the "/bin/cp method":
    ==
    /* putarg.c: write each remaining argument to the file named by argv[1] */
    #include <stdio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>   /* write(), close() */
    int main(int argc, char** argv) {
      int i, fd;
      if (argc < 2) return 0;                           /* no output file given */
      if ((fd = open(argv[1], O_RDWR)) < 0) return 1;   /* cannot open target file */
      for (i = 2; i < argc; i++) {
        if (write(fd, argv[i], strlen(argv[i])) < 0) {  /* write one argument */
          close(fd);
          return i;                                     /* report which argument failed */
        }
      }
      close(fd);
      return 0;
    }
    ==
     The first argument is the name of the output file,
    and argument number 2 (and all subsequent arguments, if any) are the text that will be written to that output file.
     Compile the source code into an executable by running:
    gcc -o putarg -s putarg.c
    The resulting binary needs to be placed in an accessible location, such as /usr/bin, or wherever you want.
    Now the udev rule using "putarg" can be written as:
    SUBSYSTEM=="memory", ACTION=="add", RUN+="/usr/bin/putarg /sys$devpath/state online"
    This combined solution (compiled binary plus udev rule) works exceptionally fast.
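    As a quick sanity check (a sketch, not part of the original message), the state files these rules write to can be inspected directly; hot-added blocks should read "online":
    # count memory blocks by state
    cat /sys/devices/system/memory/memory*/state | sort | uniq -c
    # list any blocks still offline
    grep -l offline /sys/devices/system/memory/memory*/state 2>/dev/null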
    Best regards, Victor Miasnikov
    Blog:  http://vvm.blog.tut.by/
    P.S.
    Nikolay Pushkarev on the standard udev script:
    It is strange that the native udev rule
    SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
    only triggers intermittently (VVM: very often it does not work as needed).
    P.P.S.
    ----- Original Message -----
    From: "Victor Miasnikov"
    To: "Dan Carpenter"; "K. Y. Srinivasan" ; <[email protected]>
    Cc: "Greg KH" ; <[email protected]>; <olaf (at) aepfle.de>; ""Andy Whitcroft"" <zzzzzzzzzzzz (at)
    canonical.com>;
    <jasowang (at) redhat.com>
    Sent: Thursday, January 09, 2014 2:18 PM
    Subject: RE: [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    Implement the file copy service
    Hi!
    > Is there no way we could implement file copying in user space?
      For the "file copy service", "user space" may be pretty good.
    But I (and other Hyper-V sysadmins) see poor results (to put it in politically correct terminology) with "hv: balloon: Online
    the hot-added memory" in "user space":
    ==
     [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    ==
     Any news? Is there a roadmap?
    Best regards, Victor Miasnikov
    Blog:  http://vvm.blog.tut.by/
    ----- Original Message -----
    From: "KY Srinivasan"
    To: "Victor Miasnikov"; [email protected]; "Brian Wong"
    Cc: "Abhishek Gupta (LIS)"
    Sent: Thursday, March 06, 2014 1:23 PM
    Subject: RE: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled RE: [PATCH 2/2]
    Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    > -----Original Message-----
    > From: Victor Miasnikov
    > Sent: Thursday, March 6, 2014 3:38 PM
    > To: [email protected]; Brian Wong
    > Cc: Abhishek Gupta (LIS); KY Srinivasan
    > Subject: Re: Linux does not use more than the startup RAM under Hyper-V
    > with dynamic memory enabled RE: [PATCH 2/2] Drivers: hv: balloon: Online
    > the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    >
    Victor,
    I will try to get my in-context onlining patches accepted upstream.
    K. Y

  • While creating DB using DBCA getting ORA-27102: out of memory in Linux

    Hi All,
    I am working with Oracle 11.2.0.3 on Red Hat Linux. I am getting the error "ORA-27102: out of memory" while creating a new database using DBCA.
    Below are the DB and OS details. Please check them and let me know what I need to do to overcome this issue.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    $uname -a
    Linux greenlantern1a 2.6.18-92.1.17.0.1.el5 #1 SMP Tue Nov 4 17:10:53 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    $cat /etc/sysctl.conf
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 4194304
    net.core.wmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_max = 1048576
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    net.ipv4.ip_local_port_range = 9000 65500
    $free -g
    total used free shared buffers cached
    Mem: 94 44 49 0 0 31
    -/+ buffers/cache: 12 81
    Swap: 140 6 133
    $ulimit -l
    32
    $ipcs -lm
    Shared Memory Limits
    max number of segments = 4096
    max seg size (kbytes) = 4194303
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1
    A trace file was also created under the trace location, and it suggests changing an shm parameter value, but I am not sure which parameter (shmmax or shmall) and what value I need to modify.
    Below is the trace file info:
    Trace file /u02/app/oracle/diag/rdbms/beaconpt/beaconpt/trace/beaconpt_ora_9324.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u02/app/oracle/product/11.2.0.3
    System name: Linux
    Node name: greenlantern1a
    Release: 2.6.18-92.1.17.0.1.el5
    Version: #1 SMP Tue Nov 4 17:10:53 EST 2008
    Machine: x86_64
    Instance name: beaconpt
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 9324, image: oracle@greenlantern1a
    *** 2012-02-02 11:09:53.539
    Switching to regular size pages for segment size 33554432
    Switching to regular size pages for segment size 4261412864
    skgm warning: ENOSPC creating segment of size 00000000fe000000
    fix shm parameters in /etc/system or equivalent
    Please let me know which kernel parameter values I need to change to make this work.
    Thanks in advance.

    Yes, it is the same question, but I didn't get any solution there and am still looking for help. The solution provided in the last post is not working; I get the same error even with less than 20% of the memory. Please let me know how to overcome this issue.
    Thanks
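    (Not an answer from the thread, but one note on the sysctl.conf above: when kernel.shmall or kernel.shmmax appears twice, the later line wins when the file is applied, so the effective limits here are shmall = 2097152 pages and shmmax = 4294967295 bytes. A rough sketch of how candidate values are commonly derived from physical RAM and page size; the "half of RAM" sizing is the usual starting point, not something taken from this thread:)
    PAGE_SIZE=$(getconf PAGE_SIZE)
    MEM_BYTES=$(awk '/^MemTotal:/ {print $2 * 1024}' /proc/meminfo)
    echo "kernel.shmmax = $((MEM_BYTES / 2))"          # largest single segment: half of RAM
    echo "kernel.shmall = $((MEM_BYTES / PAGE_SIZE))"  # total shared pages: up to all of RAM
    # keep exactly one copy of each line in /etc/sysctl.conf, then apply with: sysctl -p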

  • R9.2.0 on RH 7.3: Memory problem: Linux or the Installer?.

    I am having the following problem. Perhaps someone has had the same one; in that case, please help.
    I have a P4 with 1GB of RAM and 2GB of swap. No disk space problem. No shared memory problem. I installed the Oracle9i Client (9.2R2) on Red Hat 7.3 (standard installation as Server). I have already noted here that with the Oracle installer running, the system uses almost all of the available RAM (99.5% of the total), but at least it can complete the installation. However, it never uses the swap (100% free).
    When I tried to install the Database (Enterprise or Standard), the installation continues as long as there is RAM available. When usage reaches 99.6% of the total, it stops responding (it hangs at 41% of the installation). Even here, no swap is used.
    Is there a problem with the kernel shipped with RH 7.3 in using swap? Does the Oracle installer really need so much RAM? Or am I missing something in the installation process of the Database? I am using jdk-1.3.1_04 downloaded from the Sun website.
    I would appreciate it if you could also cc your response to [email protected]; I may have problems accessing the forum in the coming two days.
    Thank you in advance.
    Jama Musse Jama.

    Check the Support Matrix (if you can find it) and it will tell you that Red Hat and most Linux distributions are not supported. I bought Red Hat 7.3 Professional and a WinBook with lots of disk and speed and tried to get 9.2 to run on it. No way. It's not certified, so it's not supported. Even though I explained that what I was trying to do was get rid of Windows XP and move to Red Hat 7.3, and asked whether they would send me Oracle 9.2 when it was available. I got the CDs 3 days later, but it won't work. Now I am left with the choice of buying the 9AS package or going back to XP.
    I do not think that Oracle is living up to the spirit of Linux. They say Linux, but they mean a very specific subset. Who do they think they are, Bill Gates?

  • Memory leak linux

    I'm running Ubuntu Linux and it looks like I'm seeing a memory leak that I didn't see when I ran it on my PC.
    I eventually get a heap exception. I was using the same maximum heap size of 500M.
    Is there anything special I have to do for Linux?
    Thanks

    morgalr wrote:
    Yes, my crystal ball says that fixing the leak you observe in Linux will also fix any potential problem you just haven't seen in Windows yet.
    (chuckle) Your comment expresses something that flicked through my mind even while reading the title. The thought that flicked through my mind did not quite have the eloquence of your words.
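    Not part of the original exchange, but a sketch of how this is usually pinned down on the Linux box: run with the same 500M cap and ask the JVM for a heap dump when it overflows, then inspect the dump. MyApp is a hypothetical main class; the flags are standard HotSpot options.
    java -Xmx500m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/myapp.hprof MyApp
    # or dump a running process (PID as reported by jps)
    jmap -dump:live,format=b,file=/tmp/myapp.hprof <pid>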

  • Script to collect cpu,io & memory usage linux

    Hi,
    is there a basic script that collects CPU, memory, I/O, and network stats for Linux on an hourly basis?
    I need it to be in a format that I can easily import into MS Excel and display as a graph.
    I don't want any 3rd-party software because I'd rather not install anything on the server. I prefer a shell script that can be scheduled in cron.
    Thank you

    There is no single script that analyzes what is going on on a Linux server and provides you with straightforward, easy-to-understand information or a reasonable overall performance overview. Tools to monitor performance exist for the purpose of analyzing and troubleshooting, but the information is not suitable for reporting efficiency or predicting resource requirements.
    You might find some previous messages regarding this topic useful:
    Linux Performance Monitoring Scripts
    CPU and Memory  Status for Linux
    Perhaps the nmon analyzer is what you are looking for, which has an option to create MS Excel graphs: http://nmon.sourceforge.net/pmwiki.php
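    That said, a minimal sketch of an hourly collector can be put together with the standard vmstat and free tools; it appends one CSV row per run, and the output path is arbitrary:
    #!/bin/sh
    # Minimal hourly stats collector: appends one CSV row per run.
    # Example crontab entry:  0 * * * * /usr/local/bin/collect_stats.sh
    OUT=/var/log/hourly_stats.csv
    [ -f "$OUT" ] || echo "timestamp,cpu_user,cpu_sys,cpu_idle,cpu_iowait,mem_used_mb,mem_free_mb,blocks_in,blocks_out" > "$OUT"
    # take the second vmstat sample; the first reports averages since boot
    VM=$(vmstat 1 2 | tail -1)
    CPU=$(echo "$VM" | awk '{print $13","$14","$15","$16}')   # us,sy,id,wa
    IO=$(echo "$VM"  | awk '{print $9","$10}')                # blocks in, blocks out
    MEM=$(free -m | awk '/^Mem:/{print $3","$4}')             # used, free in MB
    echo "$(date '+%Y-%m-%d %H:%M:%S'),$CPU,$MEM,$IO" >> "$OUT"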

  • PC Suite 7.0 memory hungry?

    Yesterday I installed PC Suite 7.0 (Windows) so I can make a backup once in a while.
    When looking at my running processes I found out the total suite uses about 40MB (PCSuite.exe 25MB, PcSync2.exe 15MB) with my phone disconnected and actually doing nothing. As far as I remember this is a lot more than PC Suite 6 used to use, so it seems things are going in the wrong direction.
    Is this normal behavior? I'd say the same task (sit and wait) can be done by about 0.5MB.
    Is there any chance Nokia developers split up the software to keep the memory abuse a bit less while sleeping? I love the slick artwork* when actually using the software but when it doesn't I'd prefer to use my precious resources for more productive tasks.
    (* No I don’t, but I know you do.)
    To people who don't care or don't know, I'd say:
    Ever noticed your PC running slower after a few months? That's mainly because all devices come with their own software and they all want some memory.
    Even though memory is less expensive these days, your PC will write more to disk, which makes it slower and drains the battery faster when you are using a laptop.
    There's no need to use this space. It's just because all developers have fast PCs and a bucket of free memory in the corner, or because companies use those little icons as free advertisement space.

    I spoke to a friend last night and this is what he said:
    [Quote]The 270MB used on C: exclusively is most likely because it is used as a sandbox (an area that is stable and unaffected/doesn't affect the system). C: would be used because it is the most stable location (the system drive is not available for format through [the current] Windows installation). My guess is the allocated space is integral to the PC Suite software (most likely PIM, etc.). The %TEMP% folder would be a ridiculous place to store such synchronisations as it's prone to cleanup and/or deletion. Similarly %SYSTEMROOT% would be an unsafe bet as it's never a good idea to interfere with OS level stuff. This leaves the most likely option as %SYSTEMDRIVE%, which is readily accessible as an environment variable (Start -> Run -> cmd [enter] -> echo %SYSTEMDRIVE%)
    System vars need never be called through API calls; that's usually reserved for things like registry reads/writes
    API calls will be natural throughout the entire Nokia Suite. It installs drivers (DLLs) which are accessed through C-based API calls for communication with the device.[EndQuote]
    Hope this clears things up ??
    Message Edited by gadgetdude on 15-Oct-2008 10:44 AM
    Remember to mark all correctly answered questions as Solved. A forum is only as great as the sum of its parts, together we will prevail.

• Why is InDesign so (memory) hungry?

    Hi there.
    I am working on a document that is 16 pages long, with only text and no graphics, but Task Manager says that InDesign is eating up 421MB of memory. I have 2 other documents open, but they are both one page long, with only a bit of text. Whenever I try doing anything a little taxing, it freezes for a moment.
    I have Windows 7 SP1, 64-bit, processor: 2.6GHz (Athlon), 6GB RAM.
    What can I do so that InDesign stops taking so much memory?
    Any help would be greatly appreciated.

    I disagree with your comments about the RAM. Extra RAM allows more applications to be open at the same time. If you stripped out your RAM and went down to 4GB you'd have the same laggy experience switching between apps or having more apps open.
    Anyway, we both agree the computer isn't great and the only thing upgradable is the RAM, so other than buying a new computer that really is the only option: upgrade the RAM.
    Who knows, there could be DDR2 RAM installed while that processor supports DDR3.
    Video card might be responsible too - but if it's integrated there's little you can do.
    I'd still look for firmware upgrades for the BIOS and processor first; this is free.
    Secondly, I'd look at the RAM and try to boost that; this is cheap.
    Thirdly, I'd look at getting a new computer; more expensive.

  • ORA-01092 - could it be not enough memory? Linux

    I installed Oracle 9.2 on my Red Hat Linux 8. The only thing I couldn't get working was DBCA; I was able to run SQL*Plus.
    When I run DBCA it gives me a warning saying and showing that I don't have enough memory, but it allows me to proceed. When I run it, I get an error saying 'ORA-01092 instance terminated, disconnection forced'; another error says 'write error'. What does it mean? How do I solve this problem?
    Note: I set enough memory in /etc/sysctl.conf with kernel.shmmax=1073741824, which should be enough.
    But the warning message says that I don't have that much memory?!
    I appreciate any help.

    Hi,
    Try This..
    Check that the database is completely down:
    a.- Shut down all the instance processes:
    ps -ef | grep SID, then kill -9 PID for each remaining process
    b.- Remove the following files if they exist: $ORACLE_HOME/dbs/sgadefSID.ora $ORACLE_HOME/dbs/lknSID
    c.- Check that there is sufficient space and sufficient privileges on:
    -- Traces and Logs Directories
    -- InitSID.ora
    d.- Shared memory segments and semaphores associated with this instance must be released:
    ipcs -s --> list semaphores taken by the user; ipcs -m --> list shared memory segments taken by the user
    Yogi
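    For step d, a small sketch of how the leftover segments and semaphores might be listed and removed, assuming the instance processes are really gone and the oracle OS user owns nothing else that uses System V IPC:
    ipcs -m            # inspect first
    ipcs -s
    for id in $(ipcs -m | awk '$3 == "oracle" {print $2}'); do ipcrm -m "$id"; done
    for id in $(ipcs -s | awk '$3 == "oracle" {print $2}'); do ipcrm -s "$id"; done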

  • Memory Randomization - Linux default configuration

    Hi,
    Lately I've been wondering whether the Linux kernel has any ASLR (Address Space Layout Randomization) enabled by default. I know that PaX and grsecurity are in vanilla, but I've also read that by enabling these you will run into problems with X, MPlayer, etc. Considering that exploits are made much, much harder when the bad guy doesn't know where his code is located in the heap, I presume it would be worthwhile to use this technology, as many modern exploits specifically target the applications the user uses to interact with the internet.
    On a similar note, is the NX bit being used by default?
    So, what's the current status and what does the near future look like?
    Edit: Of course I meant the heap, not stack. Fixed, thanks dyscoria.
    Last edited by tkdfighter (2009-03-27 14:06:36)

    So I did some reading on Wikipedia. grsecurity actually bundles PaX. Also, since version 2.6.12, the kernel has had a weak form of ASLR enabled by default, as does OS X. Windows Vista has a more complete implementation. Reading this, it appears that the weak OS X implementation is not really sufficient. Miller doesn't really make a statement about Linux, but I assume you could argue that the same goes for Linux.
    I can see, though, that PaX is not in vanilla, contrary to what I first thought, and that it doesn't support the most recent kernels.
    Another question: why isn't there any protection against simple fork bombs in Arch by default? There is no distribution I know of that has nproc set in limits.conf by default. Some basic things like this would be kind of nice, as I'm sure there are a lot of trivial settings to improve security that I and other users don't know about.
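    A small sketch of how those defaults can be checked on a given box; the nproc line is only an illustrative value, not a recommendation from this thread:
    cat /proc/sys/kernel/randomize_va_space    # 0 = off, 1 = stack/mmap/VDSO randomized, 2 = heap (brk) as well
    grep -m1 -ow nx /proc/cpuinfo              # any output means the CPU exposes the NX bit
    # example fork-bomb mitigation in /etc/security/limits.conf (illustrative value only):
    # *    hard    nproc    2048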

  • Question regarding memory on linux platforms

    The customer recently upgraded from OCS 9i to 10g. The mail store database for OCS is a RAC 10.1.0.5 two-node database. I have questions about which method is best to use to get an SGA bigger than 1.7GB on Linux x86 servers.
    On the new OCS 10g mail store database, HWm enqueues are found to impact
    database performance while the mail store receives very large quantities of
    email in a short time. Oracle support and Enterprise Manager's ADDM has
    recommended increasing the size of the SGA to help reduce HWm enqueues.
    The max SGA size is currently 1648MB.
    Servers are Red Hat 4 update 5. Each server in the two node RAC has 16GB of
    RAM, with 4 processors (8 cores on each server). The kernel is the 2.6 SMP
    kernel: Linux prcodb01.its.calpoly.edu 2.6.9-55.0.9.ELsmp #1 SMP Tue Sep 25 02:17:24 EDT 2007 i686 i686 i386 GNU/Linux
    Here are customer questions:
    Oracle states that 1.7GB is the current limit with my systems current
    configuration. When the Oracle documents reference 1.7GB, can I safely assume
    that 1.7GB = 1740MB? I'd be safe using a max SGA size of 1739MB?
    I've read Metalink notes 260152.1 and 329378.1. Looking at options listed in
    260152.1, two options are listed for Red Hat 4.0, "configuration 3" and
    "configuration 4" which required use of the hugemem kernel. Besides the
    difference in the SGA sizes available between those two options, are there any
    drawbacks from choosing one option over the other? Is there any drawback in
    using the hugemem kernel over the plain smp kernel?

    Hi,
    Regarding the question "Is there any drawback in using the hugemem kernel over the plain SMP kernel?":
    Please refer to NOTE 264236.1
    Regards
    Jason

  • Why is FF 5.0 so memory hungry? It is barely functional, and then not for long.

    On a 6-month-old Acer that was functioning "perfectly" with the very latest 3.6.whatever version, Firefox is now almost nonfunctional.
    It is slow beyond belief. The "swirly re-freshing" symbol has moved into my house permanently.
    I can barely open an IE window while FF 5.0 is running. If I can, it takes forever.
    The entire screen freezes for periods of time while the "swirly thingy" insults me.
    The entire screen will fade out if you get impatient & click something while the swirly thing is busy.
    When the screen fades out, you're toast. You have to close FF with task manager, and you CANNOT retrieve any of your tabs when you bring it back up, because the 3 tabs that are supposed to work in the history bar, like Restore Previous Session are faded out & non-functional.
    I am a professional sports writer. I have not been able to write an article in 3 days due to losing all my tabs when I upgraded, then losing them all again every day.
    There is something wrong with this upgrade. My CPU cannot deal with it.
    What's wrong?
    EDIT:
    There is a box titled "Troubleshooting Information" on the page I wrote this query on.
    Below the box it says "Copy and paste the information from Help>Troubleshooting Information.
    Guess what? That line is not a clickable link. When I pasted Help > Troubleshooting Information into the FF search it took me to a page with more options than fleas on a hound dog.
    When I "paged back", I got what I expected, everything I had typed in this entire page except the question at the top was gone.
    So far, everything that could possibly go wrong with this upgrade has happened. What's next?

    Hi Brett,
    I have the same issue and I am very frustrated about it.
    It's the same thing I experienced in Lightroom 4, and it's still not fixed. I can't use the Book module because of that, so Blurb is not getting my money. :-(
    I have been waiting for that fix for more than a year, but Adobe says it's a problem which appears only for a small group of users.
    HA HA just do a Google search!!!
    System:
    Lenovo W520
    Win 7 64Bit
    i7-2760 @ 2.4 GHz
    20 GB Ram
    1st HDD SSD 256GB         --> Catalog
    2nd HDD HGST 1.5TB        -->Pictures
    Nvidia Quadro 2000m

  • Linux memory ballooning problem

    Hi all, my customer has an MS Hyper-V 2012 (not R2) cluster for virtualization purposes. Having stumbled on dynamic memory allocation, I gave it a try, reading around about supported configurations etc. As far as I understand, for a 2012 environment the only
    concern is to set identical startup and maximum RAM values.
    Once the VM is booted, after a while the RAM amount starts to decrease; meanwhile, in the terminal, some "out of memory: kill process" messages appear, and the amount returns to the original maximum value.
    Any idea or experience with that scenario? The RHEL release is 6.4, with integration services embedded.
    Thanks in advance.

    . . . the udev ones are really interesting.
    But use the rules by Nikolay Pushkarev:
    RHEL 6.5 / CentOS 6.5 and RHEL 6.4 / CentOS 6.4 include support for the Hyper-V drivers.
    ==
    The following rule works slightly faster for me (assuming that the memory0 bank is always in the online state):
    SUBSYSTEM=="memory", ACTION=="add", DEVPATH=="/devices/system/memory/memory[1-9]*", RUN+="/bin/cp /sys$devpath/../memory0/state /sys$devpath/state"
    And one more thing: the udev solution doesn't work on the 32-bit kernel architecture, only on 64-bit. Is this by design, or yet another bug?
    ==
    P.S.
    See also the topic
    Dynamic Memory on Linux VM
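    For completeness (a sketch, not from the original reply): the rule is typically installed as its own file under /etc/udev/rules.d and udev reloaded afterwards; the file name below is arbitrary.
    cat > /etc/udev/rules.d/99-hyperv-memory-hotadd.rules <<'EOF'
    SUBSYSTEM=="memory", ACTION=="add", DEVPATH=="/devices/system/memory/memory[1-9]*", RUN+="/bin/cp /sys$devpath/../memory0/state /sys$devpath/state"
    EOF
    udevadm control --reload-rules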
