Fortran and Shared Memory
I'm currently working with a large simulation involving mixed-language Fortran and C executables (around 10 different programs running
together in real time).
I now need to port the code from a Sun-3 (SunOS 4.0) to a modern Enterprise 250 server. Since I'm porting anyway, I'd like the multitude of executables to use the POSIX-compliant calls shm_open and mmap to map the appropriate common blocks. Unfortunately, my search of Dejanews and other internet sources has shown this to be a widely debated topic without much in the way of answers.
So far, I've written Fortran-callable C stubs to do the shm_open and mmap calls, but I can't get mmap to accept the address I specify. Also, since the other Fortran programs will have copies of the same common blocks, the obvious interface issue comes up: how do I map onto the same common block (shared memory region) in all programs?
My target platform is an UltraSPARC II running Solaris 2.7. Any hints?
One possible problem is that Fortran passes everything by reference (by
default) and the C routines that you are calling may want some of the
data to be passed by value.
Another could be that mmap won't remap the common blocks
if they are already mapped. If that is the case, you can try mapping unused
space and then using either Fortran 95 pointers (preferred) or Cray pointers to point
to the space. If you make it work with Fortran 95 pointers, you won't even
need many syntax changes in your program.
Similar Messages
-
What are semaphores and shared memory
Hello Gurus,
What are semaphores and shared memory? What is the use of setting the parameters
SHMMAX
SHMMIN
SEMMIN while installing Oracle software on Linux?
Regards
Hameed

Hello,
I would advise you to review Oracle Metalink Document: Semaphores and Shared Memory - An Overview : Doc ID: Note:153961.1.
https://metalink.oracle.com/metalink/plsql/f?p=130:14:12007188755102069423::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,153961.1,1,1,1,helvetica
Oracle Metalink Document: TECH: Unix Semaphores and Shared Memory Explained Doc ID: Note:15566.1.
https://metalink.oracle.com/metalink/plsql/f?p=130:14:12007188755102069423::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,15566.1,1,1,1,helvetica
Additionally you can review Oracle Metalink Document: Linux: How to Check Current Shared Memory, Semaphore Values Doc ID: Note:226209.1.
https://metalink.oracle.com/metalink/plsql/f?p=130:14:12007188755102069423::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,226209.1,1,1,1,helvetica
Hope it helps.
Adith -
Semaphores and shared memory's classes
Hi there,
I'm looking for the classes used to deal with semaphores and shared memory. Does anyone know which classes these are?
Thanks

The file mapping capability added in 1.4 may result in shared memory, but this is not guaranteed. You can use JNI to call some C code which allocates shared memory and then uses the NewDirectByteBuffer method to return a ByteBuffer representing that shared memory to the Java code.
You would also need to use JNI to wrap system provided semaphore or mutex operations to provide cross process synchronization.
However, do your communication needs really require performance in excess of that available using sockets? I can get at least 7MB/s between two processes on a rather modest machine. -
Segment memory and shared memory
hi guys
Is it possible to see the memory segments and shared memory at the OS level?
I don't know the exact forum for this question, so I posted it here.
Please help me!

If you are using Linux, you can use ipcs to list all shared memory segments at the system level:
$ ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0000f47a 65536 root 777 512000 4
0x26278554 163842 oracle 640 132120576 16
0xac004cf0 196611 oracle 660 16108224512 128

The Oracle executable sysresv uses the ORACLE_SID env. var to map shared memory segments to the current instance:
$ sysresv
IPC Resources for ORACLE_SID "XXX1" :
Shared Memory:
ID KEY
196611 0xac004cf0
Semaphores:
ID KEY
229377 0x2dac12a4
Oracle Instance alive for sid "XXX1" -
ORA-01031: insufficient privileges and shared memory realm does not exist
Hi all,
I've hit a dead end trying to start an Oracle 10.2 database. I have searched on Google and this forum, and none of the solutions work for me. PS: I have installed 11g on my machine too.
I have set ORACLE_SID and ORACLE_HOME for the 10.2 database based on the tnsnames.ora.
The error messages follow:
sqlplus sys as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 3 02:09:54 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter password:
ERROR:
ORA-01031: insufficient privileges
sqlplus /nolog
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 3 02:10:55 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges
SQL> conn scott/tiger
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
First I thought the instance had not been started yet, but I can't even log in as sysdba, so I don't know what other options I have.
For 10.2, the tnsnames.ora
ORA102 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = XXX)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ora102)
    )
  )
LISTENER_ORA102 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = XXX)(PORT = 1523))
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC2))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /data/oracle/ora102)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC2))
      (ADDRESS = (PROTOCOL = TCP)(HOST = XXXXX)(PORT = 1523))
    )
  )
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

Try these steps on the server side:
1) sqlplus sys as sysdba
2) select open_mode from v$database;
and post the result of step 2. -
Solaris 10, Oracle 10g, and Shared Memory
Hello everyone,
We've been working on migrating to Solaris 10 on all of our database servers (I'm a UNIX admin, not a DBA, so please be gentle) and we've encountered an odd issue.
Server A:
Sun V890
(8) 1.5Ghz CPUs
32GB of RAM
Server A was installed with Solaris 10 and the Oracle data and application files were moved from the old server (the storage hardware was moved between servers). Everything is running perfectly, and we're using the resource manager to control the memory settings (not /etc/system)
The DBAs then increased the SGA of one of the DBs on the system from 1.5GB to 5GB and it failed to start (ORA-27102). According to the information I have, the maximum shared memory on this system should be 1/4 of RAM (8 GB; it actually works out to 7.84 GB according to prctl). I verified the other shared memory/semaphore settings are where they should be, but the DB would not start with a 5 GB SGA. I then decided to just throw a larger max shared memory segment at it, so I used projmod to increase project.max-shm-memory to 16GB for the project Oracle runs under. The DB now starts just fine. I cut it back down to 10GB for project.max-shm-memory and the DB still starts OK. I ran out of downtime window, so I couldn't continue refining the settings.
Running 'ipcs -b' and totalling up the individual segments showed we were using around 5GB on the test DB (assuming my addition is correct).
So, the question:
Is there a way to correlate the SGA of the DB(s) to what I need to set project.max-shm-memory to? I would think 7.84GB would be enough to handle a DB with a 5GB SGA, but it doesn't appear to be. We have some 'important' servers getting upgraded soon and I'd like to be able to refine these numbers/settings before I get to them.
Thanks for your time,
Steven

To me, setting a massive shared memory segment just seems inefficient. I understand that Oracle is only going to take up as much memory (in general) as the SGA. And I've been searching for any record of really large shared memory segments causing issues but haven't found much (I'm going to contact Sun to get their comments).
The issue I am having is that it doesn't make sense that the DB with a 5GB SGA is unable to startup when there is an 8GB max shared memory segment, but a 10GB (and above) seems to work. Does it really need double the size of the SGA when starting up, but 'ipcs' shows it's only using the SGA amount of shared memory? I have plans to cut it down to 4GB and test again, as that is Oracle's recommendation. I also plan to run the DB startup through truss to get a better handle on what it's trying to do. And, if it comes down to it, I'll just set a really big max shared memory segment, I just don't want it to come back and cause an issue down the road.
The current guidance on Metalink still seems to be suggesting a 4GB shared memory segment (I did not get a chance to test this yet with the DB we're having issues with).
I can't comment on how the DBA specifically increased the SGA as I don't know what method they use. -
Migration 8i to 9iR2 and "Shared Memory Realm does not exist"
Dear Experts,
I read from Note 159657.1 in step 22
If you are using a passwordfile, set the parameter
REMOTE_LOGIN_PASSWORDFILE=NONE. (If you are using Windows, also set the
parameter SQLNET.AUTHENTICATION_SERVICES=(NTS) in the sqlnet.ora file.)
The issue is I can't restart my instance in step 32 (same note) because I need to have SQLNET.AUTHENTICATION_SERVICES=(NONE), since we migrated to Active Directory.
So... is this setting of (NTS) mandatory before migrating my database? If yes, should I simply set it to NTS and then set it back to NONE before restarting my database?
Thanks for any advice.
Regards,
Guillaume

Hi,
The "Shared Memory Realm does not exist" error usually comes from an initialization parameter in the init.ora file not being set properly; most probably these are "memory"-related parameters, i.e. the init.ora file was configured with a different memory value.
Data sheet -
Hi,
I'm using an Ultra 10 running Solaris 8.
I've set the /etc/system file as follows:
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=200
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
forceload: sys/msgsys
forceload: sys/semsys
forceload: sys/shmsys
But it seems to be ignored. The command "ipcs" returns:
IPC status from <running system> as of Tue Jun 18 08:58:13 GMT 2002
T ID KEY MODE OWNER GROUP
Message Queues:
Shared Memory:
m 0 0x50000392 rw-rr-- root root
Semaphores:
Due to this I can not run Oracle 8.1.7.
What is wrong? What should I do?
Thanks in advance for your support.
Patrick

I got a similar problem on Solaris 9 while trying to get Ingres to run. Ingres comes with a utility named "syscheck" that checks how many semaphores and shared memory segments are available. It complains that there are a total of 0 shared memory segments available in the system (I've set the number to 3) and that there are a total of 30 semaphores in the system (I've set the number to 35).
However, the sysdef output shows exactly the values that are set in /etc/system. So, do I face a compatibility problem here (Ingres ran fine on Solaris 7) or is it a problem in my system configuration? How can I determine that?
Thanks in advance
Gregor -
Shared memory settings from 32 to 64 bit Oracle
Hello,
Questions about Oracle and Shared Memory from 32bit to 64bit Oracle
I have a Sun Sparc V880 server running Solaris 9 with 8 CPUs and 16GB of RAM. It is currently running 6 Oracle (32-bit) 9.2 databases which we are planning to upgrade to 10.2. My question regards the setting of the Solaris 9 kernel parameters, specifically 'shminfo_shmmax'. This is currently set to 'shmsys:shminfo_shmmax=4294967295', which I believe is the maximum amount of shared memory which can be allocated to a 32-bit version of Oracle and the one recommended by Oracle.
Running 'show sga' on all of the current 9.2 databases returns
Total System Global Area
DB SGA(bytes)
db1 772528008
db2 789305244
db3 772528008
db4 114021228
db5 791238556
db6 789305244
Total = 4045703524 bytes or 3.76786 gb.
1. Does this mean that each Oracle 32-bit database can only use up to a maximum of about 4GB of shared memory, or that all 6 together can only use up to 4GB, resulting in 12GB being available for other (non-Oracle) processes?
2. I have noticed (through 'sar' stats) that the server occasionally pages (non zero values for pgscan/s and pgfree/s etc.). Since the server mostly only runs the Oracle databases, is this because Oracle can only use circa 4gb of Ram before it starts paging ?
3. If both 9.2 and 10.2 Oracle databases are run on the server would increasing the value of shminfo_shmmax then allow the 10.2 databases to use more of the 16gb of Ram, while still limiting the 32bit databases to the 4gb limit ?
Any help here would be appreciated.

Please see if MOS Doc 467960.1 (How Important It Is To Set shmsys:shminfo_shmmax Above 4 GB) and the notes it references can help.
HTH
Srini -
Read & Write Shared memory with Java
Hi,
I have to read and write four integers from/to shared memory with Java, using the Posix package (the shared memory is already created).
I've been looking on Google for any examples that attach, read, and write to shared memory using that package, but I couldn't find anything (which is the reason for writing to this forum).
Posix package link
http://www.bmsi.com/java/posix/docs/Package-posix.html
Operating System
Suse 10
Could anyone help me with any example, please?
Thank you very much
Oscar

Hi, I can't post any code because I have no idea about POSIX and shared memory.
I come from the web world, and two weeks ago my company sent me to the space department, which uses Unix as its operating system.
The first thing I have to do is what I posted, but I don't even know how to start doing it, because this is almost the first time I've heard about reading and writing shared memory.
Java is a high-level, platform-independent language (though I think this kind of thing is platform dependent, in this case on the operating system), and I have only been working with this operating system for a short time.
That's the trouble: I don't know how to start working on this.
Thanks again -
Questions about db_keep_cache_size and Automatic Shared Memory Management
Hello all,
I've come upon a server in which I need to pin a table and some objects, per the recommendations of an application support call.
Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
SQL> select name, value, value/1024/1024 value_MB from v$parameter
2 where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
NAME VALUE VALUE_MB
sga_max_size 1694498816 1616
shared_pool_size 0 0
db_cache_size 0 0
db_keep_cache_size 0 0
db_recycle_cache_size 0 0
Looking at granularity level:
SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
GRANULE_SIZE/VALUE
2048
Then... I looked, and I thought this instance was set up with Automatic Shared Memory Management... but I see that sga_target is not set:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1616M
sga_target big integer 0
So, I'm wondering first of all...would it be a good idea to switch to Automatic Shared Memory Management? If so, is this as simple as altering system set sga_target =...? Again, this is on a RAC system, is there a different way to do this than on a single instance?
If that isn't the way to go...let me continue with the table size, etc....
The table I need to pin is:
SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
SUM(BLOCKS)
4858
And block size is:
SQL> show parameter block_size
NAME TYPE VALUE
db_block_size integer 8192
So, the space I'll need in memory for pinning this is:
4858 * 8192 /1024/1024 = 37.95.......which is well below my granularity mark of 2048
So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
Thanks in advance for any suggestions and links to info on this.
cayenne
Edited by: cayenne on Mar 27, 2013 10:14 AM
Edited by: cayenne on Mar 27, 2013 10:15 AM

JohnWatson wrote:
This is what you need: alter system set db_keep_cache_size=40M; I do not understand the arithmetic you do here: select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%'; it shows you the number of buffers per granule, which I would not think has any meaning.

I'd been looking at some different sites studying this, and what I got from that was that this granularity gave you the minimum you could set db_keep_cache_size to; that if you tried setting it below this value, it would be bumped up to it; and also that each bump you gave the keep cache would be in increments of the granularity number...?
Thanks,
cayenne -
Hello, I have two questions on Time Capsule. Can I keep my files only on an external HD and free up my Mac's internal memory? And can I use an external hard drive, in my case a LaCie Rugged, as shared memory for my two computers?
I have a MacBook Pro and an iMac. If I buy a 2TB AirPort Time Capsule, can I use it with Time Machine, and what would be the best way to use it?
There is no particular setup required for TM.. both computers will create their own backup sparsebundle which is like a virtual disk.. Pondini explains the whole thing if you read the reference I gave you.
And how do I use the Time Capsule with another external HD, i.e. use my old LaCie with the new Time Capsule?
Up to you.. you can plug the external drive into the TC and enjoy really slow file transfers or you can plug it into your computer and use it as external drive.. which is faster than the TC.. and TM can include it in the backup.
Again everything is explained in the reference.. you are not reading it. -
ORA-27101: shared memory realm does not exist and ORA-01139: RESETLOGS
HI ALL,
After Oracle was installed successfully on a Solaris server, I am unable to log in to the Oracle database.
Below are the errors messages :
1) ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
SVR4 Error: 2: No such file or directory
2) Media recovery complete.
alter database open resetlogs
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery.
And when I try to log in to Oracle as root, this is the error:
sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 12 15:52:49 2010
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
ld.so.1: oracle: fatal: libskgxp10.so: open failed: No such file or directory
ERROR:
ORA-12547: TNS:lost contact
my Operating system details :
SunOS Blade3-Chassis2 5.10 Generic_142900-03 sun4v sparc SUNW,Netra-CP3060
Anyone please help me here. waiting for your reply.
Below are the ORACLE_SID Details :
1) echo $ORACLE_SID ----- > ems
2)ps -eaf | grep smon -------> ora_smon_ems
do you need any data?
Edited by: user8959150 on Feb 11, 2010 7:19 PM

Hi
We mostly receive this error message (ORA-27101: shared memory realm does not exist, and ORA-01139)
when our SGA_MAX_SIZE is greater than 1.7 GB.
Reconfigure your SGA size so that sga_max_size is not greater than 1.7 GB,
and your database should work correctly.
For test purposes, set your init.ora parameters as follows:
for test purpose you set all your init.ora parameter
db_cache_size=512m
db_keep_cache_size=64m
db_recycle_cache_size=16m
java_pool_size=8m
large_pool_size=16m
shared_pool_size=256m
shared_pool_reserved_size=64m
sga_max_size=1600m
sga_max_size should not be greater than the sum of db_cache_size + db_keep_cache_size + db_recycle_cache_size + java_pool_size + large_pool_size + shared_pool_size + shared_pool_reserved_size.
Hope it will work fine.
Regards
Abdul Hameed Malik
[email protected] -
Enhanced protected mode and global named shared memory object
Good morning.
I've written a BHO that exchanges data with a system service. The service creates named shared memory objects in the Global namespace. Outside the AppContainer-sandboxed IE 11, everything works fine after lowering the objects' integrity level. Inside the sandboxed environment,
OpenFileMappingW seems to return a valid handle, but the calls to MapViewOfFile always give access denied. What am I missing? Thank you.
Service code for security descriptor creation:
if (InitializeSecurityDescriptor(pSA->lpSecurityDescriptor, SECURITY_DESCRIPTOR_REVISION)) {
    if (ConvertStringSecurityDescriptorToSecurityDescriptorW(L"D:P(A;;GA;;;WD)(A;;GA;;;AC)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE) {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorDacl(pSecDesc, &fAclPresent, &pDacl, &fAclDefaulted) == TRUE)
            bRetVal = SetSecurityDescriptorDacl(pSA->lpSecurityDescriptor, TRUE, pDacl, FALSE);
    }
    if (bRetVal == TRUE && ConvertStringSecurityDescriptorToSecurityDescriptorW(L"S:(ML;;NW;;;LW)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE) {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorSacl(pSecDesc, &fAclPresent, &pSacl, &fAclDefaulted) == TRUE) {
            bRetVal = SetSecurityDescriptorSacl(pSA->lpSecurityDescriptor, TRUE, pSacl, FALSE);
            OutputDebugStringW(L"SACL ok.");
        }
    }
}
return bRetVal;
BHO code
LPMEMORYBUFFER OpenDataChannel(HANDLE *hQueue)
{
    LPMEMORYBUFFER lp = NULL;
    WCHAR data[512] = { 0 };
    for (int a = 0;; a++) {
        if (iestatus == FALSE)  // NOT in EPM
            StringCchPrintfW(data, 512, L"Global\\UrlfilterServiceIE.%d", a);
        else                    // in EPM
            StringCchPrintfW(data, 512, L"%s\\Global\\UrlfilterServiceIE.%d", appcontainernamedobjectpath, a);
        *hQueue = OpenFileMappingW(FILE_MAP_ALL_ACCESS, TRUE, data);
        if (*hQueue != NULL) {
            // existing mapped file
            lp = (LPMEMORYBUFFER)MapViewOfFile(*hQueue, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(MEMORYBUFFER));
            if (lp != NULL)

Ciao Ritchie, thanks for coming here. ;-)
I call (only) OpenFileMapping and MapViewOfFile inside my code, and I get access denied on the first try. As I stated before, this happens when IE11 is running in EPM 32/64-bit mode; outside EPM it works as it should. However, I decided to take a different
approach to IPC because, like it or not, the service is up and running as SYSTEM... Still, I'm really interested in two points:
1) can i use global kernel objects in EPM mode?
2) if the answer to 1 is "yes, you can": what's wrong with my code? The security descriptor? Something else?
Thanks all. -
SHARED MEMORY AND DATABASE MEMORY giving problem.
Hello Friends,
I am facing a problem with EXPORT MEMORY and IMPORT MEMORY.
I have developed one program which EXPORTs an internal table and some variables to memory. This program calls another program via a background job. IMPORT from memory is used in the other program to get the first program's data.
This IMPORT command works perfectly in the foreground, but it is not working in the background.
So, I have reviewed a couple of forums and tried both SHARED MEMORY and DATABASE MEMORY, but to no avail; the background job still has the problem.
When I remove the VIA JOB parameter in the SUBMIT statement it works, but I need to execute this program in the background via a background job. Please help me. What should I do?
Please find my code below.
option1
EXPORT TAB = ITAB
TO DATABASE indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
option2
EXPORT ITAB FROM ITAB
TO SHARED MEMORY indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
SUBMIT ZPROG2 TO SAP-SPOOL
SPOOL PARAMETERS print_parameters
WITHOUT SPOOL DYNPRO
VIA JOB name NUMBER number
AND RETURN.
===
I hope everybody understood the problem.
My sincere request: please post only relevant answers; do not post dummy answers for points.
Thanks
Raghu

Hi.
You can not exchange data between your programs using ABAP memory, because this memory is shared between objects within the same internal session.
When you call your report using VIA JOB, a new session is created.
Instead of using EXPORT and IMPORT to memory, put both programs into the same Function Group, and use global data objects of the _TOP include to exchange data.
Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available between all open sessions. Of course, it depends on which type of data you want to export.
Hope it was helpful,
Kind regards.
F.S.A.