Gnome pulseaudio using lots of shared memory

Out of curiosity I ran ls -l /dev/shm and saw the following files:
-r-------- 1 peter users 67108904 Mar 23 10:40 pulse-shm-3021221787
-r-------- 1 peter users 67108904 Mar 23 10:24 pulse-shm-3538772094
-r-------- 1 peter users 67108904 Mar 23 12:27 pulse-shm-3584709017
-r-------- 1 peter users 67108904 Mar 23 10:24 pulse-shm-419282190
I'm just wondering why PulseAudio needs such large files. Is it possible to reduce their size?
Many thanks in advance
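For what it's worth, here is how such files come about: on Linux, POSIX shared memory objects (created with shm_open/ftruncate) appear as files under /dev/shm, and PulseAudio's pulse-shm-* segments are created the same general way. A minimal Python sketch (illustrative only, not PulseAudio's actual code; make_segment is my own helper):

```python
import os
from multiprocessing import shared_memory

def make_segment(size):
    """Create a POSIX shared-memory segment and return it with its /dev/shm path."""
    shm = shared_memory.SharedMemory(create=True, size=size)
    return shm, os.path.join("/dev/shm", shm.name)   # Linux keeps them here

shm, path = make_segment(64 * 1024 * 1024)           # 64 MiB, like pulse-shm-*
print(path, os.path.exists(path))
shm.close()
shm.unlink()                                         # don't leave it lingering
```

Note that 67108904 bytes is 64 MiB plus a small header, and that ls shows the apparent size: such segments are typically sparse, so the RAM actually used is only the pages that have been touched (compare with du). If the default is too large, the shm-size-bytes setting in PulseAudio's daemon.conf reportedly controls the segment size, though I haven't verified this.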


Similar Messages

  • Concurrent manager using lots of cpu & memory

    In the last few days, I have noticed the load on the production server as follows:
    load averages: 4.66, 4.86, 4.92 14:34:16
    333 processes: 327 sleeping, 6 on cpu
    CPU states: % idle, % user, % kernel, % iowait, % swap
    Memory: 32G real, 7591M free, 23G swap in use, 14G swap free
    PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
    20045 appsdapo 1 0 0 4073M 4029M cpu/2 165.0H 12.49% f60webmx
    1854 appsdapo 1 10 0 4082M 4060M cpu/0 240.0H 12.48% FNDLIBR
    2504 appsdapo 1 10 0 1872M 1869M cpu/19 1980.2 12.46% FNDLIBR
    2496 appsdapo 1 0 0 1876M 1873M cpu/1 1957.5 11.68% FNDLIBR
    18641 appsdapo 1 10 0 1512M 1509M cpu/16 1080.3 8.64% FNDLIBR
    I have come across Metalink note 114380.1, which says to identify the concurrent manager, but how do I relate the FNDLIBR processes above to the specific concurrent managers?
    Thanks

    Hello,
    Thanks for the post. It helped me solve my production issue.
    My production application suddenly started hanging, with no clue as to why it was behaving like this.
    The symptom was that FNDLIBR processes were the top CPU consumers on the application server.
    The database server also experienced heavy system I/O and CPU usage.
    The database log switched every minute (500 MB) even though no concurrent requests were running.
    There were no concurrent request errors, no OS errors (we cleaned up GNOME and restarted), and no database errors.
    But in the SQL area I saw a lot of FND service requests kept coming in.
    In the end the application server hung because of the high CPU load, and that brought production down.
    The cause was that one of our system administrators had disabled a responsibility that had a scheduled request.
    One interesting thing we noticed: even if you put the request in question on hold, it starts running again when the concurrent manager is restarted.
    You have to cancel the concurrent request and then restart the concurrent manager to resolve the issue.
    Thx
    Pouler

  • Database consuming a lot of physical memory

    Hi ,
    My database is on version 11.1.0.7.0 and on SUN SOLARIS SPARC.
    My server admin just informed me that my database is using a lot of physical memory, which I understand is RAM.
    I have been looking on Google as well, but I am not able to find a way to check on this and see how it can be controlled.
    Any help/suggestion would be highly appreciated.
    Regards
    Kk

    There are two basic ways in which Oracle uses memory.
    Statically: Oracle allocates memory (for the SGA) when it starts. This memory remains fixed in size.
    Dynamically: in order to service a client, memory is needed for that client session. Oracle dynamically allocates memory for such sessions (called the PGA).
    When Oracle memory consumption grows, it must be dynamically allocated memory. Static memory is just that - static. It does not grow in size.
    The usual reason for PGA memory consumption to grow is incorrectly designed and coded bulk processing. A single Oracle server process can easily consume all available free memory on the server as Oracle dynamically increases the size of the PGA of the process running the flawed PL/SQL code.
    However, one should not be looking at OS command-line tools to determine an Oracle process's memory utilisation. The output of such commands is often incorrectly interpreted, as shared memory can be (and often is) counted towards a process's memory utilisation. There are notes on Metalink (mysupport.oracle.com) on this topic and on how to correctly use CLI commands to view Oracle process memory utilisation.
    An easier, and more accurate, view of Oracle memory utilisation can be obtained from Oracle's virtual performance views.
    So, a sysadmin e-mailing a ps (Unix/Linux process listing) showing a particular Oracle process "using too much memory" is not really solid evidence that memory is being abused. One needs to look closer at the type of memory used by the process.
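On Linux, the point about ps over-counting shared memory can be checked directly: /proc/<pid>/status breaks RSS down into anonymous, file-backed and shared-memory pages (the Rss* fields are available on reasonably recent kernels). A small sketch; the field names are real Linux ones, the parsing helper is mine:

```python
import os
import re

def rss_breakdown(status_text):
    """Parse RSS-related fields (in kB) out of /proc/<pid>/status content."""
    fields = {}
    for key in ("VmRSS", "RssAnon", "RssFile", "RssShmem"):
        m = re.search(rf"^{key}:\s+(\d+)\s+kB", status_text, re.MULTILINE)
        if m:
            fields[key] = int(m.group(1))
    return fields

# /proc is Linux-only; guard so the sketch is harmless elsewhere.
if os.path.exists("/proc/self/status"):
    with open("/proc/self/status") as f:
        print(rss_breakdown(f.read()))
```

RssShmem is the part that two Oracle processes attached to the same SGA would both appear to "use"; summing plain RSS across processes double-counts it.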

  • ORA-27100: shared memory realm already exists

    Dear All,
    After increasing the SGA size from 2 GB to 4 GB I ran into this problem.
    Physical memory of the server: 8 GB.
    After changing the SGA size, when I start the database I get the error message.
    It is our production server.
    Can anybody help us?
    Regards,
    sanjay

    sanjay kumar roy wrote:
    After increasing the SGA size from 2 GB to 4 GB I ran into this problem. Physical memory of the server: 8 GB. After changing the SGA size, when I start the database I get the error message. It is our production server.
    Operating system?
    In my experience, this error on Linux/Unix means that the shared segment (SGA) has not been correctly released/closed/destroyed - according to the kernel, this shared memory still exists.
    This shared memory has to be removed in order for the instance startup to (re-)create the SGA.
    The command to use to check shared memory is ipcs -m. E.g.
    /usr/lib/oracle/xe> ipcs -m
    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes      nattch     status  
    0x0966f45c 26345537   oracle     640        148897792  22
    Assuming that the database instance is indeed not running (no processes left), the shared memory segment can be removed using ipcrm. E.g.
    /usr/lib/oracle/xe> ipcrm -M 0x0966f45c
    Obviously, do NOT remove the SGA while there are still database instance processes attached to it. Make very sure that ALL processes for that instance have terminated.
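As a side note, on Linux the same information ipcs -m prints can also be read from /proc/sysvipc/shm, which is handy for scripting a sanity check (e.g. nattch must be 0) before running ipcrm. A sketch; the column names match the Linux procfs layout, the helper name is mine:

```python
import os

def sysv_shm_segments(path="/proc/sysvipc/shm"):
    """Return (key, shmid, size_bytes, nattch) for each System V shm segment."""
    segments = []
    if not os.path.exists(path):            # Linux-only interface
        return segments
    with open(path) as f:
        header = f.readline().split()       # key shmid perms size ... nattch ...
        for line in f:
            row = dict(zip(header, line.split()))
            segments.append((int(row["key"]), int(row["shmid"]),
                             int(row["size"]), int(row["nattch"])))
    return segments

for key, shmid, size, nattch in sysv_shm_segments():
    print(f"key=0x{key & 0xffffffff:08x} shmid={shmid} bytes={size} nattch={nattch}")
```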

  • Cannot attach data store shared-memory segment using JDBC (TT0837)

    I'm currently evaluating TimesTen during which I've encountered some problems.
    All of the sudden my small Java app fails to connect to the TT data source.
    Though I can still connect to the data source using ttisql.
    Everything worked without problems until I started poking around in the ODBC administrator (Windows 2K).
    I wanted to increase permanent data size so I changed some of the parameters.
    After that my Java app fails to connect with the following message:
    DriverManager.getConnection("jdbc:timesten:direct:dsn=rundata_tt60;OverWrite=0;threadsafe=1;durablecommits=0")
    trying driver[className=com.timesten.jdbc.TimesTenDriver,com.timesten.jdbc.TimesTenDriver@addbf1]
    SQLException: SQLState(08001) vendor code(837)
    java.sql.SQLException: [TimesTen][TimesTen 6.0.4 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 8846, procedure "sbDbConnect()"
    The TT manual doesn't really provide a good explanation of what the error code means.
    Obviously I've already tried restoring the original ODBC parameters, without any luck.
    Ideas, anyone?
    /Peter

    Peter,
    Not sure if you have resolved this issue or not. In any case, here is some information to look into.
    - On Windows 32-bit, allocation of the shared data segment doesn't work the same way as on Unix and Linux. As a result, the maximum TimesTen database size one can allocate is much smaller on Windows than on other platforms.
    - Windows error 8 means ERROR_NOT_ENOUGH_MEMORY: not enough storage is available to process this command.
    - TimesTen TT0837 says the system was unable to attach a shared memory segment during a data store creation or data store connection operation.
    - What was the largest perm-size and temp-size you used successfully when allocating the TimesTen database?
    * One explanation for why you were able to connect using ttIsql is that it doesn't load many DLLs, whereas your Java application typically loads a lot more.
    * As a troubleshooting step, try reducing your temp-size to a very small value and see if you can connect to the data store. Eventually, you may need to reduce your perm-size as well to get Windows to fit the shared data segment into the process address space.
    By the way the TimesTen documentation has been modified to document this error as follows:
    Unable to attach to a shared memory segment during a data store creation or data store connection operation.
    You will receive this error if a process cannot attach to the shared memory segment for the data store.
    On UNIX or Linux systems, the shmat call can fail due to one of:
    - The application does not have access to the shared memory segment. In this case the system error code is EACCESS.
    - The system cannot allocate memory to keep track of the allocation, or there is not enough data space to fit the segment. In this case the system error code is ENOMEM.
    - The attach exceeds the system limit on the number of shared memory segments for the process. In this case the system error code is EMFILE.
    It is possible that some UNIX or Linux systems will have additional possible causes for the error. The shmat man page lists the possibilities.
    On Windows systems, the error could occur because of one of these reasons:
    - Access denied
    - The system has no handles available.
    - The segment cannot be fit into the data section
    Hope this helps.
    -scheung
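The shmat failure modes listed in the documentation excerpt above can be reproduced from almost any language. Here is a small Python/ctypes sketch against the C library's System V calls; the constants are the usual Linux/Unix values, and attach_demo is my own illustrative helper, not a TimesTen API:

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)      # Unix: pull shm* from libc
libc.shmat.restype = ctypes.c_void_p          # pointer, not int (matters on 64-bit)
libc.shmat.argtypes = (ctypes.c_int, ctypes.c_void_p, ctypes.c_int)

IPC_PRIVATE, IPC_CREAT, IPC_RMID = 0, 0o1000, 0

def attach_demo(size=4096):
    """Create, attach, detach and remove a System V segment; raise on shmat errors."""
    shmid = libc.shmget(IPC_PRIVATE, size, IPC_CREAT | 0o600)
    if shmid < 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
    addr = libc.shmat(shmid, None, 0)
    if addr is None or addr == ctypes.c_void_p(-1).value:
        err = ctypes.get_errno()              # EACCES, ENOMEM or EMFILE, as listed
        libc.shmctl(shmid, IPC_RMID, None)
        raise OSError(err, os.strerror(err))
    libc.shmdt(ctypes.c_void_p(addr))         # detach
    libc.shmctl(shmid, IPC_RMID, None)        # mark the segment for removal
    return shmid

print("created, attached and removed shmid", attach_demo())
```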

  • Hello, I have two questions on Time Capsule: can I keep my files only on an external HD to free up my Mac's internal storage, and can I use an external hard drive (in my case a LaCie Rugged) as shared storage for my two computers?

    Hello, I have two questions on Time Capsule: can I keep my files only on an external HD to free up my Mac's internal storage, and can I use an external hard drive (in my case a LaCie Rugged) as shared storage for my two computers?

    I have a MacBook Pro and an iMac. If I buy a 2 TB AirPort Time Capsule, can I use it with Time Machine, and what would be the best way to use it?
    There is no particular setup required for TM. Both computers will create their own backup sparsebundle, which is like a virtual disk. Pondini explains the whole thing if you read the reference I gave you.
    And how do I use the Time Capsule with another external HD, i.e. use my old LaCie drive with the new Time Capsule?
    Up to you. You can plug the external drive into the TC and put up with really slow file transfers, or you can plug it into your computer and use it as an external drive, which is faster than the TC, and TM can include it in the backup.
    Again, everything is explained in the reference; you are not reading it.

  • How do I debug an application in shared memory debug mode using JDev

    I don't see an option to use "shared memory" debug mode in JDev. There is only the socket debug option (attach/listen) in the debugger, but no shared memory debug option.
    Is it missing, or is it hidden somewhere?
    Can anyone let me know, as all other IDEs provide this?

    Any updates?

  • 10.10.1: I used sysctl.conf to increase shared memory, but 10.10.1 does not read sysctl.conf anymore

    I need to increase shared memory. In 10.10 I used /etc/sysctl.conf with the line kern.sysv.shmmax=16777216 and this worked, as it did in 10.9. But in 10.10.1 shared memory can no longer be increased from the default 4194304 by this means.
    The question is simple: how, in 10.10.1, does one increase kern.sysv.shmmax in a way that survives a reboot?

    I have the same problem on my Windows 2003 server, but the thing is that I have not been able to reproduce the error on any other machine, including a Windows 2003 install on VMware on the same machine. I have narrowed down where in the encoder this happens. The encoder uses page-file-backed file mappings to register session information. The call to the MapViewOfFileEx method of the Win32 API fails with the "access denied" error. Whatever it is, it's due to some defect in my Windows installation, because I do not get that message on any other Windows installation, even the ones I set up with the same Windows CD!
    I am still trying to find out what the defect might be, but I thought you should know it's got nothing to do with your Windows make and model... it happens for me on 2k3 and for you on XP... so you should look elsewhere.

  • Cannot attach data store shared-memory segment using JDBC (TT0837) 11.2.1.5

    Hi,
    I found the thread "Cannot attach data store shared-memory segment using JDBC (TT0837)" but it couldn't help me.
    I encounter this issue on Windows XP, and the application gets its connection from a JBoss data source.
    url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
    username=test
    password=test
    Error information:
    java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
    shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
    at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
    at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
    at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
    I am confused, because if I use plain JDBC there is no such error:
    Connection conn = DriverManager.getConnection("url", "username", "password");
    Regards,
    Nesta

    I think error 8 is
    net helpmsg 8
    Not enough storage is available to process this command.
    If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the data store (just as a test), does that allow JBoss to load it?
    You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
    "Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
    A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
    As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
    You can use tools like the free "Process Explorer" to see the used address ranges in your process.
    Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses."
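The "n bytes of unused contiguous virtual memory" requirement quoted above can be probed empirically: keep halving an anonymous mmap request until one succeeds. A rough sketch (my own probe, not a TimesTen tool; on a 64-bit OS the first large request usually succeeds, which is exactly the point of the quote):

```python
import mmap

def largest_contiguous_map(upper=1 << 40):
    """Halve an anonymous mmap request until it succeeds; return that size.

    A TimesTen database needs one contiguous region of this kind, so the value
    returned bounds the database size this process could attach directly.
    """
    size = upper
    while size >= mmap.PAGESIZE:
        try:
            m = mmap.mmap(-1, size)   # anonymous mapping, address chosen by the OS
            m.close()
            return size
        except (OSError, ValueError, OverflowError):
            size //= 2
    return 0

print("largest contiguous anonymous mapping probed:", largest_contiguous_map())
```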

  • Using Shared Memory in LabVIEW

    I'm trying to use shared memory with LabVIEW. Can I use a DLL written in C with LabVIEW to access shared memory?

    Lidia,
    Check these out (for memory mapping):
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000006A1D0000&UCATEGORY_0=_318_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=build+cvi+shared+dll&USEARCHCONTEXT_QUESTION_S=0
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000005BC10000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=Communicating+Between+Built+LV+App&USEARCHCONTEXT_QUESTION_S=0
    But in general you don't need shared memory when you use DLLs. It is used to share data between different processes. If you need LabVIEW data in a DLL, try to pass it as a pointer to an array, or as a string pointer.
    Regards,
    Wiebe.

  • Shared memory used in Web Dynpro ABAP

    Hi Gurus,
    I am using shared memory objects in Web Dynpro ABAP. Everything worked fine until we went live to production. After some research I realized that users cannot always reach the data in shared memory, because the web environment, unlike the classic GUI, spreads sessions across multiple application servers. The solution would be to use the database instead of shared memory, but I am still interested in whether there might be some other way to solve this. Any ideas?

    To my understanding, writing to the database is the safe option. There is no other way to solve your problem with shared memory.

  • I have an iPad 2 and I downloaded pictures from my old laptop onto it. The problem is that it has duplicated my photos and won't let me delete them, and they are using up a lot of my memory.

    I have an iPad 2 and I downloaded photos from my old laptop onto it. The problem is that it won't let me delete them; they were duplicated during the transfer and are using up a lot of my memory. I don't know how to sort this out.

    The links below have instructions for deleting photos.
    iOS and iPod: Syncing photos using iTunes
    http://support.apple.com/kb/HT4236
    iPad Tip: How to Delete Photos from Your iPad in the Photos App
    http://ipadacademy.com/2011/08/ipad-tip-how-to-delete-photos-from-your-ipad-in-the-photos-app
    Another Way to Quickly Delete Photos from Your iPad (Mac Only)
    http://ipadacademy.com/2011/09/another-way-to-quickly-delete-photos-from-your-ipad-mac-only
    How to Delete Photos from iPad
    http://www.wondershare.com/apple-idevice/how-to-delete-photos-from-ipad.html
    How to: Batch Delete Photos on the iPad
    http://www.lifeisaprayer.com/blog/2010/how-batch-delete-photos-ipad
    (With iOS 5.1, use 2 fingers)
    How to Delete Photos from iCloud’s Photo Stream
    http://www.cultofmac.com/124235/how-to-delete-photos-from-iclouds-photo-stream/
    Cheers, Tom

  • Using Shared memory

    Hi folks,
    This is the first time I have used shared memory, and my question is:
    Does the shmat function attach the segment at the same address in different processes? In other words, can I use the same pointer in processes A and B?
    Thanks

    The issue of alignment is rather tricky.
    shmat(2) may perfectly well return a misaligned address, so I'd consider using memory-mapped files instead.
    mmap(2) returns page-aligned memory (unless you specify MAP_FIXED with some weird first parameter), so you can rely on the compiler to do the alignment for you...
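    The page-alignment behaviour of mmap(2) is easy to check from Python, since an anonymous mmap there uses the same underlying call; the ctypes trick below just reads the address the OS picked:

```python
import ctypes
import mmap

buf = mmap.mmap(-1, mmap.PAGESIZE)            # anonymous, one-page mapping
view = ctypes.c_char.from_buffer(buf)         # borrow the buffer to read its address
addr = ctypes.addressof(view)
print(hex(addr), addr % mmap.PAGESIZE == 0)   # the address is page-aligned
del view                                      # release the borrow before closing
buf.close()
```

    Note this says nothing about whether two processes get the *same* address; without a fixed mapping address, in general they do not, so sharing raw pointers between processes A and B is unsafe either way (share offsets instead).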

  • Short Dump TSV_TNEW_PAGE_ALLOC_FAILED while using shared memory objects

    Hi Gurus,
    We are using shared memory objects to store some data that we will read later. I have implemented the interface IF_SHM_BUILD_INSTANCE in the root class and I am using its method BUILD for automatic area structuring.
    Today our developments moved from the dev system to the quality system, and while writing data into shared memory using the methods ATTACH_FOR_WRITE and DETACH_COMMIT in one report, we started getting the runtime error TSV_TNEW_PAGE_ALLOC_FAILED. It is raised when the method DETACH_COMMIT is called to commit the changes to shared memory.
    Everything works fine before DETACH_COMMIT. I know this happens when the program runs out of extended memory, but I am not sure why it happens at the DETACH_COMMIT call. If excessive memory were being used in the program, this runtime error should have been raised while calling ATTACH_FOR_WRITE or while filling the root class attributes. I am not sure why it happens at the DETACH_COMMIT method.
    Many Thanks in advance.
    Thanks,
    Raveesh

    Hi Raveesh,
    As Naimesh suggested, the system parameter for the shared memory area is probably too small. Compare the system parameters in development and QA, and check what other shared memory areas are in use.
    Regarding your question why it does not fail at ATTACH_FOR_WRITE but only at DETACH_COMMIT:
    Probably ATTACH_FOR_WRITE sets an exclusive write lock on the shared memory data and then writes to some kind of 'rollback' memory, and DETACH_COMMIT actually puts the data into the shared memory area and releases the lock. The 'rollback' memory is in the LUW's work memory, which is much bigger than the usual shared memory size.
    This is my assumption - don't know who can verify or reject it.
    Regards,
    Clemens

  • Why are the following processes always using lots of memory?

    Can someone tell me why the following processes are always using lots of memory and are always running?
    kernel_task 280MB
    java 300MB
    clamd 120MB
    mds 100MB
    WindowServer 100MB
    coreservicesd 60MB

    Because they require that much memory, and because those processes are always in use.
    This is one reason why the minimum system memory requirement is 1 GB.
    Note that you have one third-party process listed, clamd, which is used by the ClamXAV anti-virus software. If you uninstall ClamXAV you will recover the memory used by clamd.
