Multithreading and partitioned shared memory
Hi All,
I'm having no success with this (simple?) multithreading problem on my core-i7 processor, using CVI 9.0 (32-bit compiler).
In the code snippets below, I have a node level structure of 5 integers, and I use 32 calls to calloc() to allocate space for 32 blocks of 128*128 (16K) nodes and store the returned pointers in an array as a global var.
Node size in bytes = 20, block size in bytes = (approx) 328KB, total allocated size in bytes = (approx) 10.5MB.
I then spawn 32 threads, each of which is passed a unique index into the "node_space" pointer_array (see code below), so each thread is manipulating (reading/writing) a separate 16K block of nodes.
It should be thread safe and scale by the number of threads because each thread is addressing a different memory block (with no overlap), but multithreading goes no faster (maybe slightly) than a single thread.
I've tried various threadpool sizes, padding nodes to 16 and 64 byte boundaries, all to no avail.
Is this a memory bandwidth problem due to the size of the arrays? Does each thread somehow load the whole 32 blocks? Any help appreciated.
struct Nodes {
    unsigned int a;
    unsigned int b;
    unsigned int c;
    unsigned int d;
    unsigned int e;
};
typedef struct Nodes Nodes;
typedef Nodes *Node_Ptr;
Node_Ptr node_space[32]; /* pointer array into 32 separate blocks ( loaded via individual calloc calls for each block) */
.... Thread Spawning ....
for (index = 0; index < 32; ++index)
CmtScheduleThreadPoolFunction(my_thread_pool_handle, My_Thread_Function, &index, NULL);
Hello CVI_Rules,
It's hard to answer your question because it depends on what you are doing in your thread function. Since you are not seeing any speed up in your program when you change the number of threads in your thread pool, you are either doing too much (or all of the work) in each thread, serializing your threads with locks, or somehow slowing down execution in each thread.
Your basic setup looks fine. You can simplify it slightly by passing the nodes directly to your thread function:
for (index = 0; index < 32; ++index)
CmtScheduleThreadPoolFunction(pool, My_Thread_Function, node_space[index], NULL);
static int My_Thread_Function (void *functionData)
{
    Node_Ptr nodes = functionData;
    ...
}
But that's not going to affect performance.
Things to look into:
Verify that you're really only working on one subset of the node space in each thread, that you're passing and receiving the correct node space in each thread and that you're working only on that one.
Verify that you don't have any locks or other synchronization in your program. It sounds like you don't because you designed your program so that it wouldn't need any. But check anyway.
Verify that you're not doing something unnecessary in your thread function. Sometimes people call ProcessSystemEvents or ProcessDrawEvents because they feel that it makes the UI more responsive. These two functions are expensive (around 20ms per call, I think). So if you are calling these functions in a loop, with a fixed total number of iterations across all threads, and if the actual computations are relatively fast, then these functions can easily dominate the execution time of your program. (It need not be these functions, it might be other ones. These are just examples.)
Show and explain your code to a colleague. Sometimes you don't see the obvious until you show it to someone. Or they might notice something.
Apart from that, can you explain what you are doing in your thread function so that we can have a better understanding of your program and what might inhibit parallelism?
Similar Messages
-
Large SGA On Linux and Automatic Shared Memory Management problem
Hello
I use Oracle 10gR2 on 32-bit Linux, and I followed the guide at http://www.oracle-base.com/articles/linux/LargeSGAOnLinux.php
to configure a larger SGA. It works fine, but when I set the sga_target parameter to use Automatic Shared Memory Management,
I receive this error:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00824: cannot set sga_target due to existing internal settings, see alert
log for more information
and the alert log contains:
Cannot set sga_target with db_block_buffers set
My question is: when db_block_buffers is in use, can't I use Automatic Shared Memory Management?
Is there any solution for using both a large SGA and Automatic Shared Memory Management?
thanks
Edited by: TakhteJamshid on Feb 14, 2009 3:39 AM

TakhteJamshid wrote:
Does it mean that when we use a large SGA, using Automatic Shared Memory Management is impossible?

Yes, it's true. An attempt to do so will result in this:
ORA-00825: cannot set DB_BLOCK_BUFFERS if SGA_TARGET or MEMORY_TARGET is set
Cause: SGA_TARGET or MEMORY_TARGET set with DB_BLOCK_BUFFERS set.
Action: Either do not set SGA_TARGET or MEMORY_TARGET and keep the new cache parameters, or do not use DB_BLOCK_BUFFERS, which is an old cache parameter.
HTH
Aman.... -
Questions about db_keep_cache_size and Automatic Shared Memory Management
Hello all,
I'm coming upon a server that I'm needing to pin a table and some objects in, per the recommendations of an application support call.
Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
SQL> select name, value, value/1024/1024 value_MB from v$parameter
2 where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
NAME                    VALUE        VALUE_MB
----------------------  -----------  --------
sga_max_size            1694498816       1616
shared_pool_size                  0         0
db_cache_size                     0         0
db_keep_cache_size                0         0
db_recycle_cache_size             0         0
Looking at granularity level:
SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
GRANULE_SIZE/VALUE
2048
Then....I looked, and I thought this instance was set up with Auto Shared Mem Mgmt....but I see that sga_target size is not set:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1616M
sga_target big integer 0
So, I'm wondering first of all...would it be a good idea to switch to Automatic Shared Memory Management? If so, is this as simple as altering system set sga_target =...? Again, this is on a RAC system, is there a different way to do this than on a single instance?
If that isn't the way to go...let me continue with the table size, etc....
The table I need to pin is:
SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
SUM(BLOCKS)
4858
And block size is:
SQL> show parameter block_size
NAME TYPE VALUE
db_block_size integer 8192
So, the space I'll need in memory for pinning this is:
4858 * 8192 /1024/1024 = 37.95.......which is well below my granularity mark of 2048
So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
Thanks in advance for any suggestions and links to info on this.
cayenne
Edited by: cayenne on Mar 27, 2013 10:14 AM
Edited by: cayenne on Mar 27, 2013 10:15 AM

JohnWatson wrote:
This is what you need:
alter system set db_keep_cache_size=40M;
I do not understand the arithmetic you do here:
select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
It shows you the number of buffers per granule, which I would not think has any meaning.

I'd been looking at some different sites studying this, and what I got from them was that this granularity gave you the minimum you could set db_keep_cache_size to: if you tried setting it below this value, it would be bumped up to it, and each bump you gave the keep cache would be in increments of the granularity number...?
Thanks,
cayenne -
ERROR - ORA-01034: shared memory realm does not exist
Hallo! I am a newbie to Oracle on Linux. I have just installed Oracle 10g on Oracle Enterprise Linux version 4 Update 7. The installation was successful and I could
work with sqlplus, iSQL*Plus, and Enterprise Manager. When I restarted my machine, I manually started the listener, OEM, and iSQL*Plus, all of which started successfully.
However, when I try to log into OEM and iSQL*Plus, the error message below appears:
ERROR - ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux Error: 2: No such file or directory
How do I resolve this?
Thanks.

4joey1 wrote:
However,when I try to log into OEM and isqlplus,the error message below appears
ERROR - ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux Error: 2: No such file or directory An Oracle instance consists of a number of Oracle server processes (the limbs) and a shared memory area (the brain). Each and every server process participating in that Oracle instance needs to attach to the shared memory area.
The error message you see, states that the server process (launched in order to service your sqlplus/OEM client), failed to find and attach to this shared memory segment.
Two basic reasons for the failure.
The Oracle instance is not running. There is no shared memory area and there are no Oracle server processes running for that instance. Solution: start up the database instance.
The server process was launched with the incorrect parameters (ORACLE_SID specifically) and attempted to attach to shared memory that does not exist. Solution: review the TNS/JDBC parameters of the client connection and configuration of the Oracle Listener to ensure that a server process launched to service a client, does so with the correct parameters and environment. -
Read & Write Shared memory with Java
Hi,
I have to read and write from/to shared memory four integers with Java, using Posix package (the shared memory is already created).
I've looking in google for any examples that attachs, read, and write to shared memory using that package but i couldn't find anything (that's the reason to write in this forum).
Posix package link
http://www.bmsi.com/java/posix/docs/Package-posix.html
Operative System
Suse 10
Could anyone help me with any example, please?
Thank you very much
Oscar

Hi, I can't post any code because I have no idea about POSIX or shared memory.
I come from the web world, and two weeks ago my company sent me to the space department, which uses Unix as its operating system.
The first thing I have to do is what I posted, but I don't even know how to start, because this is almost the first time I've heard about reading and writing shared memory.
Java is a high-level, platform-independent language (though I think this kind of thing is platform-dependent, in this case on the operating system), and I have only been working with this operating system for a short time.
That's the trouble: I don't know how to start working on this.
Thanks again -
ORA-01031: insufficient privileges and shared memory realm does not exist
Hi all,
I have come to a dead end trying to start an Oracle 10.2 database. I have searched Google and this forum, and none of the solutions work for me. PS: I have 11g installed on this machine too.
I have set ORACLE_SID and ORACLE_HOME for the 10.2 database based on the tnsnames.ora.
follow is error message:
sqlplus sys as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 3 02:09:54 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter password:
ERROR:
ORA-01031: insufficient privileges
sqlplus /nolog
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 3 02:10:55 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges
SQL> conn scott/tiger
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
First I thought the instance had not been started yet, but I can't log in as sysdba either, so I don't know what other options I have.
For 10.2, the tnsnames.ora
ORA102 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = XXX)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ora102)
    )
  )

LISTENER_ORA102 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = XXX)(PORT = 1523))

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC2))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /data/oracle/ora102)
      (PROGRAM = extproc)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC2))
      (ADDRESS = (PROTOCOL = TCP)(HOST = XXXXX)(PORT = 1523))
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

Try these steps on the server side:
1) sqlplus sys as sysdba
2) select open_mode from v$database;
and show the result of step 2.
Hello, I have two questions about Time Capsule. Can I keep my files only on an external HD and free up my Mac's internal storage? And can I use an external hard drive, in my case a LaCie Rugged, as shared storage for my two computers?
I have a MacBook Pro and an iMac. If I buy a 2TB AirPort Time Capsule, can I use it with Time Machine, and what would be the best way to use it?
There is no particular setup required for TM.. both computers will create their own backup sparsebundle which is like a virtual disk.. Pondini explains the whole thing if you read the reference I gave you.
And how do I use the Time Capsule with another external HD? Can I use my old LaCie with the new Time Capsule?
Up to you.. you can plug the external drive into the TC and enjoy really slow file transfers or you can plug it into your computer and use it as external drive.. which is faster than the TC.. and TM can include it in the backup.
Again everything is explained in the reference.. you are not reading it. -
ORA-27101: shared memory realm does not exist and ORA-01139: RESETLOGS
HI ALL,
After Oracle was installed successfully on a Solaris server, I am unable to log in to the Oracle database.
Below are the errors messages :
1) ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
SVR4 Error: 2: No such file or directory
2) Media recovery complete.
alter database open resetlogs
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery.
And when I try to log in to Oracle as root,
this is the error :
sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 12 15:52:49 2010
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
ld.so.1: oracle: fatal: libskgxp10.so: open failed: No such file or directory
ERROR:
ORA-12547: TNS:lost contact
my Operating system details :
SunOS Blade3-Chassis2 5.10 Generic_142900-03 sun4v sparc SUNW,Netra-CP3060
Anyone please help me here. waiting for your reply.
Below are the ORACLE_SID Details :
1) echo $ORACLE_SID ----- > ems
2)ps -eaf | grep smon -------> ora_smon_ems
do you need any data?
Edited by: user8959150 on Feb 11, 2010 7:19 PM

Hi
We mostly receive this error message (ORA-27101: shared memory realm does not exist, together with ORA-01139)
when SGA_MAX_SIZE is greater than 1.7 GB.
Reconfigure your SGA so that the sga_max_size parameter is not greater than 1.7 GB,
and your database should work correctly.
for test purpose you set all your init.ora parameter
db_cache_size=512m
db_keep_cache_size=64m
db_recycle_cache_size=16m
java_pool_size=8m
large_pool_size=16m
shared_pool_size=256m
shared_pool_reserved_size=64m
sga_max_size=1600m
sga_max_size should not be greater than the combination of db_cache_size + db_keep_cache_size + db_recycle_cache_size + java_pool_size + shared_pool_size + shared_pool_reserved_size.
Hope it will work fine.
Regards
Abdul Hameed Malik
[email protected] -
Enhanced protected mode and global named shared memory object
Good morning.
I've written a BHO that exchanges data with a system service. The service creates named shared memory objects in the Global namespace. Outside the AppContainer (IE 11 sandbox), everything works fine after lowering the objects' integrity level. Inside the sandboxed environment,
OpenFileMappingW seems to return a valid handle, but the calls to MapViewOfFile always give access denied. What am I missing? Thank you.
Service code for security descriptor creation:
if (InitializeSecurityDescriptor(pSA->lpSecurityDescriptor, SECURITY_DESCRIPTOR_REVISION))
{
    if (ConvertStringSecurityDescriptorToSecurityDescriptorW(L"D:P(A;;GA;;;WD)(A;;GA;;;AC)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE)
    {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorDacl(pSecDesc, &fAclPresent, &pDacl, &fAclDefaulted) == TRUE)
            bRetVal = SetSecurityDescriptorDacl(pSA->lpSecurityDescriptor, TRUE, pDacl, FALSE);
    }
    if (bRetVal == TRUE && ConvertStringSecurityDescriptorToSecurityDescriptorW(L"S:(ML;;NW;;;LW)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE)
    {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorSacl(pSecDesc, &fAclPresent, &pSacl, &fAclDefaulted) == TRUE)
        {
            bRetVal = SetSecurityDescriptorSacl(pSA->lpSecurityDescriptor, TRUE, pSacl, FALSE);
            OutputDebugStringW(L"SACL ok.");
        }
    }
}
return bRetVal;
BHO code
LPMEMORYBUFFER OpenDataChannel(HANDLE *hQueue)
{
    LPMEMORYBUFFER lp = NULL;
    WCHAR data[512] = { 0 };
    for (int a = 0;; a++)
    {
        if (iestatus == FALSE)
            StringCchPrintfW(data, 512, L"Global\\UrlfilterServiceIE.%d", a); /* NOT in EPM */
        else
            StringCchPrintfW(data, 512, L"%s\\Global\\UrlfilterServiceIE.%d", appcontainernamedobjectpath, a); /* in EPM */
        *hQueue = OpenFileMappingW(FILE_MAP_ALL_ACCESS, TRUE, data);
        if (*hQueue != NULL)
        {
            /* existing file mapping */
            lp = (LPMEMORYBUFFER)MapViewOfFile(*hQueue, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(MEMORYBUFFER));
            if (lp != NULL)
                ...

Ciao Ritchie, thanks for coming here. ;-)
I call (only) OpenFileMapping and MapViewOfFile inside my code, and I get access denied on the first try. As I stated before, this happens when IE11 is running in EPM 32/64-bit mode; outside EPM it works as it should. However, I decided to take a different
approach to IPC because, like it or not, the service is up and running as SYSTEM... Still, I'm really interested in two points:
1) can i use global kernel objects in EPM mode?
2) if one is "yes, you can": what's wrong with my code? The security descriptor? Something else?
Thanks all. -
SHARED MEMORY AND DATABASE MEMORY giving problem.
Hello Friends,
I am facing problem with EXPORT MEMORY and IMPORT MEMORY.
I have developed one program which EXPORTs an internal table and some variables to memory. This program calls another program via a background job, and IMPORT is used in that other program to get the first program's data.
This IMPORT command works perfectly in the foreground, but it does not work in the background.
So I reviewed a couple of forums and tried both SHARED MEMORY and DATABASE MEMORY, but no luck: the background run still fails.
When I remove the VIA JOB parameter in the SUBMIT statement it works, but I need to execute this program in the background via a background job. Please help me: what should I do?
pls find the below code of mine.
option1
EXPORT TAB = ITAB
TO DATABASE indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
option2
EXPORT ITAB FROM ITAB
TO SHARED MEMORY indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
SUBMIT ZPROG2 TO SAP-SPOOL
SPOOL PARAMETERS print_parameters
WITHOUT SPOOL DYNPRO
*_VIA JOB name NUMBER number*_
AND RETURN.
===
Hope everybody understood the problem.
My sincere request: please post only relevant answers; do not post dummy answers for points.
Thanks
Raghu

Hi.
You can not exchange data between your programs using ABAP memory, because this memory is shared between objects within the same internal session.
When you call your report using VIA JOB, a new session is created.
Instead of using EXPORT and IMPORT to memory, put both programs into the same Function Group, and use global data objects of the _TOP include to exchange data.
Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available across all open sessions. Of course, it depends on which type of data you want to export.
Hope it was helpful,
Kind regards.
F.S.A. -
I copied my Visual Studio 2008 projects to a new computer and am having trouble getting the database to attach to my website to continue with it.
I uninstalled SQL 2005 express and installed SQL 2008 R2 Express.
When I looked in the SQL Server configuration manager, I wasn't sure what should be running and what should be stopped and what protocols should be enabled, etc so I started everything and enabled everything.
However, the SQL Server Agent will not start. Maybe I did something wrong by enabling everything. I have everything to start automatically, and all enabled:
SQL Native Client: I right clicked and opened and under client protocols, have enabled all: Shared memory, TCP/IP , named pipes, VIA
and under SQL Server Network Configuration, I opened that and under there is listed protocols for SQLEXPRESS AND I enabled same things: Shared memory, TCP/IP , named pipes, VIA
When I could not start the SQL Server Agent from the SQL Server Configuration Manager, I went into Services and started it that way, and it did start. But since I rebooted, it will not start that way either, and I now get this message: Windows could not start the SQL Server Agent (SQLEXPRESS) service on the local computer. Error 1067.
I'm just trying to get my Visual Studio project working again. Would appreciate any help. Maybe I should uninstall SQL and reinstall 2005?

Windows could not start the SQL Server Agent (SQLEXPRESS) service on the local computer.
As the others already wrote, with SQL Server Express 2008 the "SQL Server Agent" is installed, but it is not a feature of the Express Edition, and therefore you can't start or use it.
In SQL Server 2005 Express the Agent was completely missing (not installed), so you didn't see this "issue" before.
Olaf Helper
[ Blog] [ Xing] [ MVP] -
Solaris 10, Oracle 10g, and Shared Memory
Hello everyone,
We've been working on migrating to Solaris 10 on all of our database servers (I'm a UNIX admin, not a DBA, so please be gentle) and we've encountered an odd issue.
Server A:
Sun V890
(8) 1.5Ghz CPUs
32GB of RAM
Server A was installed with Solaris 10 and the Oracle data and application files were moved from the old server (the storage hardware was moved between servers). Everything is running perfectly, and we're using the resource manager to control the memory settings (not /etc/system)
The DBAs then increase the SGA of one of the DBs on the system from 1.5GB to 5GB and it fails to start (ORA-27102). According to the information I have, the maximum shared memory on this system should be 1/4 of RAM (8 GB, actually works out to 7.84 GB according to prctl). I verified the other shared memory/semaphore settings are where they should be, but the DB would not start with a 5 GB SGA. I then decided to just throw a larger max shared memory segment at it, so I used the projmod to increase project.max-shm-memory to 16GB for the project Oracle runs under. The DB now starts just fine. I cut it back down to 10GB for project.max-shm-memory and the DB starts ok. I ran out of downtime window, so I couldn't continue refining the settings.
Running 'ipcs -b' and totalling up the individual segments showed we were using around 5GB on the test DB (assuming my addition is correct).
So, the question:
Is there a way to correlate the SGA of the DB(s) into what I need the project.max-shm-memory to? I would think 7.84GB would be enough to handle a DB with 5GB SGA, but it doesn't appear to be. We have some 'important' servers getting upgraded soon and I'd like to be able to refine these numbers / settings before I get to them.
Thanks for your time,
Steven

To me, setting a massive shared memory segment just seems inefficient. I understand that Oracle is only going to take up as much memory (in general) as the SGA. And I've been searching for any record of really large shared memory segments causing issues but haven't found much (I'm going to contact Sun to get their comments).
The issue I am having is that it doesn't make sense that the DB with a 5GB SGA is unable to startup when there is an 8GB max shared memory segment, but a 10GB (and above) seems to work. Does it really need double the size of the SGA when starting up, but 'ipcs' shows it's only using the SGA amount of shared memory? I have plans to cut it down to 4GB and test again, as that is Oracle's recommendation. I also plan to run the DB startup through truss to get a better handle on what it's trying to do. And, if it comes down to it, I'll just set a really big max shared memory segment, I just don't want it to come back and cause an issue down the road.
The current guidance on Metalink still seems to be suggesting a 4GB shared memory segment (I did not get a chance to test this yet with the DB we're having issues with).
I can't comment on how the DBA specifically increased the SGA as I don't know what method they use. -
Shared memory (System V-style) - High usage of phys memory and page outs
Hi!
I get a much higher usage of physical memory when I use shared memory than I would expect. Please, I would really need someone to confirm my conclusions, so that I can revise my ignorance in this subject.
In my experiments I create a shared memory segment of 200 MB and I have 7 processes attaching to it. I have a ws of 1 GB.
I expect to see what I see when I attach to the shared memory segment in terms of virtual size, i.e. SIZE in prstat. After attaching (mapping it) to the process all 7 processes are about ~203 MB each and this makes sense. RSS, in prstat, is about 3 MB for each process and this is ok to me too.
It is what I see when each of the 7 processes start to write a simple string like 'Hello!' in parallel to each page in their shared memory segment that I get surprised. I run out of memory on my ws after a while and the system starts to wildly page out physical memory to disk so that the next 'Hello!' can be written to the next page in the shared memory segment. It seems that each page written to in the shared memory chunk is mapped into each process private address space. This means that the shared memory is not physically shared, just virtually shared. Is this correct?
Can memory be physically shared, so that my 7 processes only use 200 MB? ISM? DISM?
I create the shared memory segment in a C program with the following calls:
shmid = shmget(key, SHM_SIZE, 0644 | IPC_CREAT);
data = shmat(shmid, (void *)0, 0);

Thanks in advance
/Sune

Your problem seemed reasonable. What were you doing wrong?
Darren -
Migration 8i to 9iR2 and "Shared Memory Realm does not exist"
Dear Experts,
I read from Note 159657.1 in step 22
If you are using a passwordfile set the parameter
REMOTE_LOGIN_PASSWORDFILE=NONE (If you are using windows also set the
parameter SQLNET.AUTHENTICATION_SERVICES=(NTS) in the sqlnet.ora file.
The issue is that I can't restart my instance in step 32 (same note), because I need to have SQLNET.AUTHENTICATION_SERVICES=(NONE) since we migrated to Active Directory.
So... is setting this to (NTS) mandatory before migrating my database? If yes, should I simply set it to NTS and then set it back to NONE before restarting my database?
Thanks for any advice.
Regards,
Guillaume

Hi,
The "shared memory realm does not exist" error might come from an initialization parameter in the init.ora file that is not set properly; most probably these are memory-related parameters. The init.ora file was configured with a different memory value.
-
ORA-01034: ORACLE not available and ORA-27101: shared memory realm does not
Hi guys:
I´m getting the error:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
I tried setting my ORACLE_SID and I still have the error. Then I did the following:
C:\infra\BIN>sqlplus "/as sysdba"
SQL*Plus: Release 10.1.0.3.0 - Production on Vie Ago 18 09:49:58 2006
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-02778: Name given for the log directory is invalid
SQL> startup c:\infra
SP2-0714: invalid combination of STARTUP options
SQL> startup c:\infra\database\SPFILEBDINFRA.ora
SP2-0714: invalid combination of STARTUP options.
I tried stopping and starting my listener and it doesn't work. Can anybody Help me??
I´m working with db 10gr1 in windows 2000
Thanks
Alex

If I try:
SQL> startup unmount
SP2-0714: invalid combination of STARTUP options
SQL> startup mount
ORA-02778: Name given for the log directory is invalid
Here my spfile
*.background_dump_dest='C:\admin\bdinfra\bdump'
*.compatible='10.1.0.2.0'
*.control_files='C:\infra\oradata\bdinfra\CONTROL01.CTL','C:\infra\oradata\bdinfra\CONTROL02.CTL','C:\infra\oradata\bdinfra\CONTROL03.CTL'
*.core_dump_dest='C:\admin\bdinfra\cdump'
*.db_block_size=8192
*.db_cache_size=50331648
*.db_domain='com'
*.db_file_multiblock_read_count=16
*.db_name='bdinfra'
*.dispatchers='(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)','(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)'
*.java_pool_size=67108864
*.job_queue_processes=5
*.large_pool_size=8388608
*.max_commit_propagation_delay=0
*.open_cursors=300
*.pga_aggregate_target=33554432
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=400
*.shared_pool_size=150994944
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='C:\admin\bdinfra\udump'
Thank you very much for your help
Oscar
Had iMac checked by Apple Authorized Service Provider after experiencing multiple problems I couldn't fix. No hardware problem. He did an erase and install of 10.5.8. The machine accesses my Time Machine backup, but in Time Machine/ iMacHome/ Sharing