Bluetooth, shared memory, striped RAID, ideas?

I'm 'building' a computer as an experiment. It's for photo editing & video viewing, communication, and scientific research. You are welcome to help answer my questions, which are trivial to the knowledgeable, but I'm not in any trouble here. Should you wish to read further, I welcome any thoughts or ideas, here or by e-mail (from your alias).
The computer is an Apple Power Mac G4 Quicksilver 2002 800 MHz machine with digital audio. I finally decided I needed an Apple of my own, to make DVDs of my granddaughter. This was after years of building Microsoft (retch) machines for others. However, I'm a geologist, not an engineer; so I must spend some time on the questions.
The machine will be a scientific one, built (as much as possible) with free, open-source software; and using throw-away hardware, culled mostly from dumpsters near the University (where some Yuppie larvae must reside). It's an experiment.
I should like to install three operating systems: Mac OS 9, Mac OS X, and Darwin (running the X11 GUI); and I should like its interface strictly OOUI: not even a Dock. It will be for scientific programming, video editing, and communication. The scientific software will use strictly Unix system calls.
It has lots of space inside, a good Pioneer DVD writer, an Nvidia GeForce 6200 graphics card with 256 MB of video memory, and four USB 2.0 sockets on a PCI card. In a dumpster, I found a pristine trio of Altec Lansing AVS300 speakers. (Motets on an iPod are amazing.) Is digital sound perceptibly better?
1. It has no keyboard. It takes a USB keyboard, and not one made for a Microsoft machine, I assume. Is that correct? If so, I shall look for an inexpensive Bluetooth Apple Keyboard (the last one, with the italic letters in the corners). This raises the next problem:
2. It has no Bluetooth. The only option I'm aware of is the D-Link DBT-120 Bluetooth 2.0 USB adapter. I should prefer one with an antenna, for my computer is a tower, far in the house. Have I any options here?
That's all I need now, unless something like PCI will soon become obsolete. Oh, no:
3. Though it's currently running a Dell D1025TM VGA CRT (which goes to sleep nicely), I shall likely operate it from my wife's iBook G3, using the VNC protocol, which was already explained to me. I shall need (I believe) a crossover adapter: a connector that swaps the transmit and receive pins at the end of the straight-through Cat 5 cable connecting the Power Mac and the iBook. Does anyone know who sells these?
4. Later this year I shall want to replace the VNC connection with a (US $250) wide-screen (16:10), possibly 22" LCD monitor, with HDMI socket (as on my US $40 Philips DivX DVD player) that connects with the DVI (digital) socket on the video card. It should be HDCP compliant and support at least 1080i widths (1080p is too expensive); and swivel vertically. This is also for reading books on DVD (from Google and the Internet Archive). I shall likely acquire 'SwitchResX' to adjust and save monitor settings, and I may use two monitors for programming.
I don't anticipate its having speakers, though the DVD player may support Dolby Digital (two RCA connectors?), but these are likely analogue. It certainly doesn't support Dolby AC-3. It seems unlikely I can afford digital audio. Is this correct? I can later afford another used Altec Lansing set, for my granddaughter's bedroom. I anticipate her placing the LCD display (which emits no radiation) on her bed as she listens to 'The Iron Giant' or 'The Last Mimzy' on the analogue speaker trio.
5. Two of its three slots are filled with 256 MB modules each, now summing to 512 MB (which, I believe, it shares with the 256 MB of the Nvidia video card). I understand I can add a 512 MB module to the third socket, summing to 1 GB of processor RAM. Is this right? Must it be the same brand as the other two modules, or is a quality third-party brand acceptable? Must it be compatible with the video RAM as well?
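On question 5, the arithmetic itself is easy to check (a sketch only; note that a discrete card generally carries its own dedicated video memory rather than sharing main RAM):

```shell
# Main-memory arithmetic from question 5 (sizes in MB).
installed=$(( 256 + 256 ))       # the two occupied slots
added=512                        # proposed module for the third slot
total=$(( installed + added ))
echo "${total} MB"               # 1024 MB, i.e. 1 GB of processor RAM
```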
6. It also came with a 60 GB ATA hard disk. I'm keen on storing data, organized, elsewhere; so 60 GB is enough for me. In the past, RAID arrays could be made only of SCSI disks; now I understand that, should I buy another 60 GB disk of identical architecture, I can create a two-disk, striped RAID array with Mac OS X 10.4.11 (non-server). Is this correct? How closely must the architectures match?
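On question 6: Tiger's Disk Utility (or diskutil from Terminal) can indeed build a striped set from two similar disks, though I'll let others say how closely they must match. For intuition, here is a toy sketch of the round-robin layout striping uses, with two files standing in for the two disks (nothing Mac-specific about it):

```shell
# Simulate a two-"disk" stripe: deal alternate 4-byte chunks of a
# 16-byte stream onto disk0 and disk1, as a striped array deals blocks.
printf 'AAAABBBBCCCCDDDD' > stream.bin

dd if=stream.bin bs=4 skip=0 count=1 2>/dev/null >  disk0.bin  # chunk 0
dd if=stream.bin bs=4 skip=2 count=1 2>/dev/null >> disk0.bin  # chunk 2
dd if=stream.bin bs=4 skip=1 count=1 2>/dev/null >  disk1.bin  # chunk 1
dd if=stream.bin bs=4 skip=3 count=1 2>/dev/null >> disk1.bin  # chunk 3

echo "disk0 holds: $(cat disk0.bin)"   # AAAACCCC
echo "disk1 holds: $(cat disk1.bin)"   # BBBBDDDD
```

Reading the stream back touches both disks at once, which is where striping's speed comes from; it also means losing either disk loses the whole set, so it is no substitute for a backup.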
The first three questions I am facing now; the latter three I am researching.
My preferred programming languages are J (as in APL) and Java; and I have never learned Tcl/Tk, which I think I may wish to. I'll likely be creating my own icons (these are very important in OOUIs), international sounds for events, and folders organized by activity. The computer will be secure and self-maintaining. I'm looking into GNUstep.
Any ideas or favorite suggestions would be welcomed, here or by e-mail. Thank you very much!
Bruce
PS. In the future!
http://eshop.macsales.com/item/Newer%20Technology/MXP802NPCI/

Anyone? Anyone?

Similar Messages

  • Questions about db_keep_cache_size and Automatic Shared Memory Management

    Hello all,
    I'm coming upon a server that I'm needing to pin a table and some objects in, per the recommendations of an application support call.
    Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
    SQL> select name, value, value/1024/1024 value_MB from v$parameter
         where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
    NAME                   VALUE       VALUE_MB
    sga_max_size           1694498816  1616
    shared_pool_size       0           0
    db_cache_size          0           0
    db_keep_cache_size     0           0
    db_recycle_cache_size  0           0
    Looking at granularity level:
    SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
    GRANULE_SIZE/VALUE
    2048
    Then I looked, and I thought this instance was set up with Automatic Shared Memory Management, but I see that sga_target is not set:
    SQL> show parameter sga
    NAME          TYPE         VALUE
    lock_sga      boolean      FALSE
    pre_page_sga  boolean      FALSE
    sga_max_size  big integer  1616M
    sga_target    big integer  0
    So, I'm wondering first of all...would it be a good idea to switch to Automatic Shared Memory Management? If so, is this as simple as altering system set sga_target =...? Again, this is on a RAC system, is there a different way to do this than on a single instance?
    If that isn't the way to go...let me continue with the table size, etc....
    The table I need to pin is:
    SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
    SUM(BLOCKS)
    4858
    And block size is:
    SQL> show parameter block_size
    NAME           TYPE     VALUE
    db_block_size  integer  8192
    So, the space I'll need in memory for pinning this is:
    4858 * 8192 / 1024 / 1024 = 37.95 MB, which is well below my granularity mark of 2048
    So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
    Thanks in advance for any suggestions and links to info on this.
    cayenne
    Edited by: cayenne on Mar 27, 2013 10:14 AM
    Edited by: cayenne on Mar 27, 2013 10:15 AM

    JohnWatson wrote:
    This is what you need: alter system set db_keep_cache_size=40M; I do not understand the arithmetic you do here (select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%'); it shows you the number of buffers per granule, which I would not think has any meaning.

    I'd been looking at some different sites studying this, and what I got from them was that this granularity gives you the minimum you can set db_keep_cache_size to: if you try setting it below this value, it will be bumped up to it, and each increase you give the keep cache will be in increments of the granularity number...?
    Thanks,
    cayenne
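A sketch of the sizing arithmetic being discussed above (assuming, per JohnWatson, that the 2048 from the granule query is buffers per granule, which at an 8 KB block makes a 16 MB granule, and that the keep cache is allocated in whole granules):

```shell
blocks=4858           # table size from all_tables, in blocks
block_size=8192       # db_block_size, in bytes
granule_blocks=2048   # buffers per granule, from the granule query

table_mb=$(( blocks * block_size / 1024 / 1024 ))            # integer MB
granule_mb=$(( granule_blocks * block_size / 1024 / 1024 ))
# The keep cache is sized in whole granules, so round up:
granules=$(( (blocks + granule_blocks - 1) / granule_blocks ))
keep_mb=$(( granules * granule_mb ))
echo "table ~${table_mb} MB; keep pool rounds up to ${keep_mb} MB"
```

So setting db_keep_cache_size=40M would, if that assumption holds, actually come out at 48 MB on this instance, which still comfortably covers the ~38 MB table.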

  • Shared memory:  apache memory usage in solaris 10

    Hi people, I have setup a project for the apache userID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100 but the RAM is soon exhausted - apache appears not to use shared memory under solaris 10. Under the same version of apache in solaris 9 I can fire up 100 apache startservers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!

    a) How or why does solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?

    Take a look at 'pmap -x' output for a process. Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library), then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
    If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.

    Simply: if we run pmap / ipcs we can see a shared memory reference for our oracle database and ldap server. There is no entry for apache. But the total memory usage is far, far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)

    pmap -x should be showing you exactly which pages are shared and which are not.
    b) Under solaris 10, each apache process takes up precisely the memory reported in prstat: add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is: why on solaris 10 is apache not 'shared', but it is on solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the oracle database, which correctly shows up a shared memory reference in ipcs. We can fire up thousands of apaches on the 2.9 system without running out of memory, and both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical. Please tell me that there is something really simple we have missed!

    On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
    Darren
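Darren's point about shared pages explains the summed-prstat confusion; with made-up but plausible numbers, the double-counting looks like this:

```shell
procs=100        # StartServers
shared_mb=10     # text + shared libraries, mapped once for everybody
private_mb=2     # heap and copied-on-write pages, per process

apparent=$(( procs * (shared_mb + private_mb) ))  # summing per-process totals
actual=$(( shared_mb + procs * private_mb ))      # counting shared pages once
echo "apparent ${apparent} MB vs actual ${actual} MB"
```

Which is why the shared/private columns of pmap -x per process tell you more than adding up prstat lines ever can.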

  • How do debug an application in Shared Memory debug mode using JDev

    I don't see the option of using "shared memory" debug mode in JDev. There is only the socket debug option (Attach/Listen) in the debugger, but no shared memory debug option.
    Is it missing, or is it hidden somewhere?
    Can anyone let me know, as all the other IDEs provide it.

    Any updates?

  • [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL

    Hi All,
    I am running an SSIS solution that runs 5 packages in sequence.  Only one package fails because of the error:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    I have added myself to the performance counters group. 
    I am running windows 7 with SSIS 2008.
    Any ideas would be appreciated.  I have read that some have disabled the warning, but I cannot figure out how to disable a warning. 
    Thanks.
    Ivan

    Hi Ivan,
    A package would not fail due to the warning itself; the warning means the account executing the package is not privileged to load the performance counters, and it can thus be safely ignored.
    To fix it, see: http://support.microsoft.com/kb/2496375/en-us
    So, the package either has an error or it actually runs.
    Arthur My Blog

  • FATAL: shared memory region is being held at - Error when installing SAP with sapinst

    Hi guys
    This error is kinda odd, since I have installed 3 systems under the same OS/DB conditions:
    Windows 2012 R2 with ASE 15.7.
    14GB Memory
    30GB PageFile
    Firewall Off
    On the previous two systems, when executing a System Copy using an Export File it completed with no problems.
    On this system's installation, during ASE Server Setup, SAPINST fails. Checking the ASE logs, it seems related to memory, but I can't figure out what the problem is, since 14 GB should be enough, and it never happened on the previous installations.
    Any idea what could be causing this?
    init logs.
    Finished loading file 'charset.loc'.
    03/13/15 06:23:20 AM End output from 'charset'.
    03/13/15 06:23:20 AM Character set 'utf8' has been successfully installed.
    03/13/15 06:23:20 AM Task succeeded: install a character set(s).
    03/13/15 06:23:20 AM Running task: set the default character set and/or default
                         sort order for the Adaptive Server.
    03/13/15 06:23:20 AM Setting the default character set to utf8
    03/13/15 06:23:20 AM Sort order 'binary' has already been installed.
    03/13/15 06:23:21 AM SQL command: exec sp_configure 'default character set id',
                         190, binary
    03/13/15 06:23:21 AM SQL command: exec sp_configure 'default sortorder id', 25,
                         utf8
    03/13/15 06:23:21 AM Sort order 'binary' has been successfully set to the
                         default.
    03/13/15 06:23:23 AM SQL command: shutdown
    03/13/15 06:23:25 AM Waiting 15 seconds for the operating system to reclaim
                         resources before rebooting.
    03/13/15 06:23:45 AM Calling the shell with
                         '"J:\sybase\AEC\ASE-15_0\bin\sqlsrvr.exe"
                         -d"J:\sybase\AEC\sybsystem\master.dat" -sAEC
                         -e"J:\sybase\AEC\ASE-15_0\install\AEC.log"
                         -i"J:\sybase\AEC\ini" -M"J:\sybase\AEC\ASE-15_0" '.
    03/13/15 06:23:55 AM Calling the shell with '0 = return code.'.
    03/13/15 06:24:11 AM Calling the shell with
                         '"J:\sybase\AEC\ASE-15_0\bin\sqlsrvr.exe"
                         -d"J:\sybase\AEC\sybsystem\master.dat" -sAEC
                         -e"J:\sybase\AEC\ASE-15_0\install\AEC.log"
                         -i"J:\sybase\AEC\ini" -M"J:\sybase\AEC\ASE-15_0" '.
    03/13/15 06:24:11 AM waiting for server 'AEC' to boot...
    03/13/15 06:24:21 AM SERVER ERROR: Failed to boot server 'AEC'.
    03/13/15 06:24:21 AM SERVER ERROR: Couldn't reboot server 'AEC' after changing
                         internal tables.
    03/13/15 06:24:25 AM CONNECTIVITY ERROR: Open Client message: 'ct_connect():
                         network packet layer: internal net library error: Net-Lib
                         protocol driver call to connect two endpoints failed
                         Failed to connect to the server - Error is 10061 No
                         connection could be made because the target machine
                         actively refused it.
    03/13/15 06:24:25 AM CONNECTIVITY ERROR: Initialization of auditinit
                         connectivity module failed.
    03/13/15 06:24:25 AM Task failed: set the default character set and/or default
                         sort order for the Adaptive Server. Terminating
                         configuration.
    03/13/15 06:24:25 AM Configuration failed.
    03/13/15 06:24:25 AM Exiting.
    03/13/15 06:24:25 AM The log file for this session is
                         'J:\sybase\AEC\ASE-15_0\init\logs\log0313.007'.
    03/13/15 06:24:25 AM Log close.
    Install Log
    00:0000:00000:00000:2015/03/13 06:24:13.07 kernel  SySAM: Using licenses from: J:\sybase\AEC\\SYSAM-2_0\licenses\SYBASE.lic;J:\sybase\AEC\\SYSAM-2_0\licenses\SYBASE_ASE_DE.lic
    00:0000:00000:00000:2015/03/13 06:24:13.70 kernel  SySAM: Checked out license for 2 ASE_CORE (2020.1231/permanent/100C 0F04 3F88 1821).
    00:0000:00000:00000:2015/03/13 06:24:13.70 kernel  This product is licensed to: SAP, for use with SAP Business Applications.
    00:0000:00000:00000:2015/03/13 06:24:13.70 kernel  Checked out license ASE_CORE
    00:0000:00000:00000:2015/03/13 06:24:13.70 kernel  Adaptive Server Enterprise (Enterprise Edition)
    00:0000:00000:00000:2015/03/13 06:24:13.71 kernel  Using config area from primary master device.
    00:0000:00000:00000:2015/03/13 06:24:13.71 kernel  Warning: Using default file 'J:\sybase\AEC\AEC.cfg' since a configuration file was not specified. Specify a configuration file name in the RUNSERVER file to avoid this message.
    00:0000:00000:00000:2015/03/13 06:24:13.73 kernel  Allocating a shared memory segment of size 131137536 bytes.
    00:0000:00000:00000:2015/03/13 06:24:13.73 kernel  WARNING: shared memory segment is being held by another application
    00:0000:00000:00000:2015/03/13 06:24:13.75 kernel  FATAL: shared memory region is being held at 131104768 bytes but 131137536 bytes are required
    00:0000:00000:00000:2015/03/13 06:24:13.75 kernel  kbcreate: couldn't create kernel region.
    00:0000:00000:00000:2015/03/13 06:24:13.75 kernel  kistartup: could not create shared memory
    Best
    Martin

    After implementing the solution in Note 2011550 (SYB: SAP ASE on Windows fails to start), the issue was fixed.
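If I read the FATAL line right, the two sizes differ by exactly 32 KB, i.e. something really was holding a slightly smaller, stale segment where ASE wanted its new one, which fits the Note 2011550 resolution:

```shell
held=131104768       # bytes already held, from the FATAL line
required=131137536   # bytes ASE tried to allocate
echo "$(( required - held )) bytes short"
```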

  • "shared memory realm" error -- cross-posted in "installation"

    I've just installed 8.1.7 on a Solaris 8 workstation. The installation went fine up until the database creation assistant tried to create a database: it failed. But the software was all installed, so I did a "connect internal" and created a database myself. Once I start the database, I can connect as system/manager; but when the database is shutdown, connecting as system manager gives me an "Oracle not available" error (ORA-01034) followed by a "shared memory realm not available" error (ORA-27101) followed by a "no such file or directory" error (SVR4 Error: 2:).
    All of the shared memory and semaphore parameters are specified in the /etc/system file, and they show up when I execute a "sysdef -i" command.
    Any ideas why I'm getting these errors?
    I did a search and found a few posts on "shared memory realm," but none of them seemed to address the fact that I can connect when the database is started/mounted, but not when it's shutdown.
    Thanks in advance,
    Rich

    Make sure the env var ORACLE_SID is set.
    Check PROCESSES in your init.ora file: if you lower this number (to, say, 15) and the errors go away, then you need to raise the semaphores kernel parameter (or leave PROCESSES set to a low number). Also, it is correct to get an error when trying to connect to a database that is shut down; the only way to avoid the error is to use svrmgrl or "sqlplus /nolog" and then connect / as sysdba (or connect internal).
    Andrew

  • Shared memory used in Web Dynpro ABAP

    Hi Gurus,
    I am using shared memory objects in Web Dynpro ABAP. Everything was working fine until we went live to production. After some research I realized that users are not always able to reach data in shared memory, because of the different approach of the web environment versus the classic GUI when more than one server is used. The solution would be to go to the database instead of shared memory. However, I am still interested in whether there might be some other way to solve it. Any ideas?

    Marek Veverka wrote:
    > I am using shared memory objects in Web Dynpro ABAP. Everything was working fine until we went live to production. After some research I realized that users are not always able to reach data in shared memory because of different approach of web environment and classic GUI when using more servers. Solution would be to go to database instead of shared memory. However I am still interested if there might be some other way to solve it. Any ideas?

    To my understanding, writing to the database is the safe option. There are no other ways to solve your problem with shared memory.

  • SAP installtion stopped - error cant create shared memory

    Dear All,
    Greetings!
    We are trying to install an SAP ECC 6.0 IDES system on Windows 2003 x64 Server and DB2 9.1. During the installation, at the Start Instance step, sapinst gives up, since the enqueue server is found in a stopped state when the instance tries to start up.
    I found the below given error message from the Developer Trace files of the enqueue server.
    [Thr 1384] Sat May 09 18:21:13 2009
    [Thr 1384] *** ERROR => ShmDelete: Invalid shared memory Key=34. [shmnt.c      719]
    [Thr 1384] *** ERROR => ShmCleanup: ShmDelete failed for Key:34. [shmnt.c      793]
    [Thr 1384] initialize_global: enqueue server without replication
    [Thr 1384] Enqueue: EnqMemStartupAction Utc=1241873473
    [Thr 1384] *** ERROR => [CreateOsShm] CreateFileMapping(37,65 KB) failed with Err=5
                  ERROR_ACCESS_DENIED: Access is denied.  [shmnt.c      2174]
    [Thr 1384] *** ERROR => ShmCreate: Create (37,67072,1) failed [shmnt.c      506]
    To note: we had a virus attack on the server recently, and an anti-virus tool was used to clean the server; after that I found most of the SAP folders in read-only mode.
    I suspect this may be the cause of the above-mentioned ACCESS_DENIED error. Currently I have allocated 28 GB of swap, but the SAP instance is apparently not able to create shared memory from it.
    Num  Pagefile         Min.Size    Max.Size    Avail.Max   Curr.Size
    1    c:\pagefile.sys  8192000 K   8192000 K   8192000 K   8192000 K
    2    e:\pagefile.sys  10485760 K  10485760 K  10485760 K  10485760 K
    3    f:\pagefile.sys  10485760 K  10485760 K  10485760 K  10485760 K
    Please help me with your suggestions for a workaround:
    - How will I be able to enable the swap space of the server to be used by the SAP instance?
    - Is this an effect of the anti-virus tool, or an aspect of Windows Server that changes folders and files to read-only after a virus attack?
    I have tried adding more shared memory, removing the shared memory, restarting the OS and assigning it back, but none of this proved useful.
    Kindly help me with your suggestions.
    Thank you
    Regards,
    Vineeth
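Summing the three pagefiles from the table confirms the swap figure Vineeth quotes (integer division lands just under 28 GB):

```shell
# Pagefile sizes in KB, taken from the table above.
total_kb=$(( 8192000 + 10485760 + 10485760 ))
total_gb=$(( total_kb / 1024 / 1024 ))
echo "${total_kb} KB = ~${total_gb} GB of swap"
```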

    Hi,
    I would suggest you go to Run > services.msc and try to manually stop/start the SAP<SID>_<nr> service. Are you able to start it properly? If you get an error here, that means the SAP service is not able to start because of a permission problem.
    Log in with <sid>adm and re-register the service by running sapstartsrv.exe in <drive>:\usr\sap\SID\sys\exe. After you give the parameters and press OK, wait a while for the 'success' message.
    Once that's done, start SAP in the MMC.
    Another thread talks about a similar kind of problem:
    Shared Memory Ceation error when we install NW04S Java Stack.
    Regards,
    Debasis.

  • Shared Memory Ceation error when we install NW04S Java Stack.

    Hi.
    I am trying to install the NW04S Java stack on Windows 2003 / MS SQL, but we got an error starting the enqueue service for the Java WAS.
    The error is:
    [Thr 4364] Sun Mar 12 23:13:34 2006
    [Thr 4364] *** ERROR => ShmDelete: Invalid shared memory Key=34. [shmnt.c      719]
    [Thr 4364] *** ERROR => ShmCleanup: ShmDelete failed for Key:34. [shmnt.c      793]
    [Thr 4364] initialize_global: enqueue server without replication
    [Thr 4364] Enqueue: EnqMemStartupAction Utc=1142172814
    [Thr 4364] *** ERROR => [CreateOsShm] CreateFileMapping(37,36 KB) failed with Err=5
                  ERROR_ACCESS_DENIED: Access is denied.  [shmnt.c      2174]
    [Thr 4364] *** ERROR => ShmCreate: Create (37,37664,1) failed [shmnt.c      506]
    Does anyone have an idea for fixing this problem?
    Regards, Arnold.

    Hi Arnold,
    Kindly check whether the user SAPService<SID> is a member of the local Administrators group. If not, add it manually and try again.
    Hope it helps.
    Regards
    Srikishan

  • Shared memory Problem

    Operating system Windows 2003 Server
    When I am trying to connect to the Oracle 10g database, we are facing the following error:
    ORA-04031: unable to allocate 4096 bytes of shared memory ("shared pool",select /*+ rule */ bucket_cn...","Typecheck heap","kgghteInit")
    But if I restart the server, the error disappears.
    Please help me regarding this problem.

    > Can you give me any idea how much memory I have to set for the shared pool?
    It all depends on the number of users, the amount of transactions, the nature of your database, the application, and a whole lot of things. As a temporary solution, you must increase the size of the shared pool in order to get your DB up and running, at least for some time.
    hare krishna
    Alok

  • SAPOsCol running but not working (shared memory not available)

    Dear Forum,
    We have just set new passwords for the SAP AD users (incl. SAPService<SID> and <SID>adm), and after this restarted the SAP instances and servers/clusters. Everything came up well, and all services, incl. SAPOsCol, started up automatically with the new passwords.
    Everything works fine; only, this morning I wanted to have a look in ST06, and the program tells me "SAPOSCOL not running? (shared memory not available)".
    The service is running, though. Would a restart of the service make any difference, and is there any possibility that restarting the service on a live, running system would force a system restart? (I couldn't think of any, but I hate restarting services on a live production system.)
    Would it be an option to stop/start the collector from ST06?
    Thanks in advance,
    Kind regards,
    Soren

    Hello Kaushal,
    Thank you for your answer!
    I tried to stop/start the service from ST06, but it doesn't work. Here is the content of the log. Any ideas how I could start the collector without rebooting the server?
    HWO description
          SAPOSCOL version  COLL 20.95 701 - 20.64 NT 07/10/17, 64 bit, multithreaded, Non-Unicode
          compiled at   Feb 24 2009
          systemid      562 (PC with Windows NT)
          relno         7010
          patch text    COLL 20.95 701 - 20.64 NT 07/10/17
          patchno       32
          intno         20020600
          running on    L5183N01 Windows NT 6.0 6002 Service Pack 2 16x AMD64 Level 6 (Mod 26 Step 5)
          PATCHES
          DATE     CHANGELIST           PLATFORM             PATCHTEXT
          20081211 1032251              ALL                  Option -w support. Removed SizeOfRecord warning rt 130.
          20090114 1036522              UNIX                 Log file permissions: 664.
          20090203 1041526              UNIX                 Add. single instance check.
          20090210 1042962              ALL                  Continue after EWA HW XML generation failure.
    09:46:39 12.04.2010   LOG: Profile          : no profile used
    09:46:39 12.04.2010   LOG: Saposcol Version  : [COLL 20.95 701 - 20.64 NT 07/10/17]
    09:46:39 12.04.2010   LOG: Working directory : C:\usr\sap\PRFCLOG
    09:46:39 12.04.2010   LOG: Allocate Counter Buffer [10000 Bytes]
    09:46:39 12.04.2010   LOG: Allocate Instance Buffer [10000 Bytes]
    09:46:40 12.04.2010   LOG: Shared Memory Size: 118220.
    09:46:40 12.04.2010   LOG: Connected to existing shared memory.
    09:46:40 12.04.2010   LOG: MaxRecords = 1037 <> RecordCnt + Dta_offset = 1051 + 61
    09:46:55 12.04.2010 WARNING: WaitFree: could not set new shared memory status after 15 sec
    09:46:55 12.04.2010 WARNING: Cannot create Shared Memory
    thanks!
    Soren
    Edited by: Soeren Friis Pedersen on Apr 12, 2010 9:49 AM
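If I read Soren's log right, the telling line is the MaxRecords mismatch: the collector attached to an existing shared memory segment whose layout no longer matches what this saposcol version expects, so it waits, times out, and cannot recreate the segment. The arithmetic from the log line:

```shell
max_records=1037   # what the existing segment provides
record_cnt=1051    # records this saposcol wants
dta_offset=61      # plus its data offset
needed=$(( record_cnt + dta_offset ))
echo "needs ${needed} records, segment has ${max_records}"
```

The usual remedy (hedging, since I can't test it here) is to stop the collector cleanly so it can delete and recreate its segment, rather than rebooting the server.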

  • TNS-01115: OS error 28 creating shared memory segment of 129 bytes

    hi
    We are operating a Solaris 5.8 box with 10 instances of 10.2.0.1 databases running, each with its own listener. The system shmmni=3600 and, per ipcs, all segments are in use, causing the error "TNS-01115: OS error 28 creating shared memory segment of 129 bytes" to occur.
    The kernel parameters were set to be the same as on a similar server we have with the same configuration and more databases, and that box uses only 53 memory segments.
    Does anyone have any ideas as to what would make this happen?

    I wish I could. There was one DB that was not needed, so I just shut it down and stopped the listener, then took an ipcs -m reading. It returned 48 rows, instead of 3603 as it did when this particular DB was up. In my haste I removed the DB, as it was not needed, so I no longer have the logs to research. Too bad on my part.
    Well, at least I have a fix, even if I have no idea why this happened. Thank you for your responses; greatly appreciated.

  • Shared memory - free_instance method doesn't work

    I am using shared memory to store data: I set the data in a program and then read it in an update task. My write and read both work, but I would like to "delete" the shared memory instance after the read in the update task. I am using the free_instance method; it works only a very small percentage of the time. Here is the code I am using:
    To write to shared memory:
    data: lv_inst        type shm_inst_name,
          lo_shared_area type ref to z_cl_binkill_shared_area,
          lo_root        type ref to z_cl_binkill_rmnqty.

    lv_inst = ordim_confirm-matid.
    shift lv_inst left deleting leading space.
    concatenate ordim_confirm-who ordim_confirm-vlpla lv_inst into lv_inst.
    try.
        " Attach the shared area for writing
        lo_shared_area = z_cl_binkill_shared_area=>attach_for_write( lv_inst ).
        " Create the root object (the object to be created in memory)
        create object lo_root area handle lo_shared_area.
        " Set the value for our message
        lo_root->set_data( ordim_confirm ).
        " Set the root back into the area
        lo_shared_area->set_root(  lo_root ). "<- note the 2 spaces before lo_root (didn't work without!)
        " Commit and detach
        lo_shared_area->detach_commit( ).
      catch cx_shm_attach_error.
    endtry.
    To read from shared memory and then (hopefully) delete:
    DATA: lv_inst          TYPE shm_inst_name,
            ls_ordim_confirm TYPE /scwm/s_rf_ordim_confirm,
            lv_rc            TYPE shm_rc,
            lo_shared_area   TYPE REF TO z_cl_binkill_shared_area,
            lo_root          TYPE REF TO z_cl_binkill_rmnqty.
    lv_inst = ls_ordim_c-matid.
              SHIFT lv_inst LEFT DELETING LEADING space.
              CONCATENATE ls_ordim_c-who ls_ordim_c-vlpla lv_inst INTO lv_inst.
              TRY.
                  lo_shared_area = z_cl_binkill_shared_area=>attach_for_read( lv_inst ).
                  lo_root        = lo_shared_area->root.
                  TRY.
                      ls_ordim_confirm = lo_root->get_data( ).
                    CATCH cx_root.
                      CLEAR ls_ordim_confirm.
                  ENDTRY.
                  lo_shared_area->detach( ).
                  lv_rc = lo_shared_area->free_instance( lv_inst ).
                CATCH cx_shm_attach_error.
                  CLEAR ls_ordim_confirm.
              ENDTRY.
    Any ideas why the free_instance doesn't work?
    Jeff Mathieson

  • Shared Memory Areas not accessible in background processing

    Hi there
    We're using Shared Memory Areas to share data between an on-line transaction and a background program. The on-line transaction collects the data, creates the Shared Memory Area and submits a background job to process it. So, after the detach_commit, it performs submit program via job_name xxxx number nnnn.
    It works OK in our dev and quality systems, but does not work in our production system. The memory area is created OK, but the job cannot access it. In the background program, we get the message:
    "The read lock on the instance 'ZNNNNNNN' in the client '100' of the shared objects memory area 'ZCL_XXX' cannot be set, since there is no active version of this area instance. However, the area constructor for the automatic construction of the area instance was started."
    On the SMA side, we've tried seemingly every combination, even "propagate_instance".
    The job seems to be running on the same application server as the SMA.
    Any ideas about what we're doing wrong, anyone?
    Thanks
    Ged

    Hello Ged,
    Please remember that SHM is accessible within one application server (AS) only.
    If your SAP landscape has multiple AS's, then that may be the problem: when you schedule the background job, the background work process may be started on a different AS (load balancing), and hence the SHM is not accessible.
    If this is the case, then you've no other option but to store the data persistently in DB clusters, viz. INDX-type tables (EXPORT TO / IMPORT FROM DATABASE).
    BR,
    Suhas
    Message was edited by: Suhas Saha
