Data Federator: Insufficient memory : Operator 'HashJoin' cannot execute
Hi,
I'm using: SAP BusinessObjects Data Federator Designer XI 3.0 Service Pack 3 - 12.3.0.0 (Build 1011241842)
I have a virtual view on top of SQL Server 2007 and Teradata. I am using a view on SQL Server that joins several tables, and querying a single table from Teradata.
When I try to query the view I get the error: [Data Federator Driver] [Server] Insufficient memory : Operator 'HashJoin' cannot execute because it cannot allocate the minimal number of memory pages. Please contact the system administrator to change system parameter settings. (0)
This happens whether I query the view from Data Federator or from an ODBC connection.
I tried playing with the configuration settings core.queryEngine.hash.maxPartitions and
core.queryEngine.hashJoin.nbPartitions, but changing/increasing the numbers didn't seem to help. The help guide also gave no help: http://help.sap.com/businessobject/product_guides/boexi3SP3/en/xi3_sp3_df_userguide_en.pdf
I was using a very similar query against a materialized view on Oracle and Teradata before, and that worked. But I'd prefer not to have to materialize the view in SQL Server.
Any thoughts as to how this can be fixed?
Thanks,
Kerby
<rant>The strange thing with this forum is that no people, really no people, ever do anything to resolve their own problems, even though this forum has a search function, and sites with FAQs and assistance are floating around on the Internet. Why is a mystery, as nobody uses them, and most DBAs only know how to hit copy-and-paste, adding to the clutter on OTN again.</rant>
The usual steps to troubleshoot an ora-1031 apply.
This means
- verify whether the local administrator is in the ora_dba group and sqlnet.authentication_services in sqlnet.ora has been set to (NTS)
- in the absence of these, a password file should be present in %ORACLE_HOME%\database, named pw%SID%.ora
These belong to the common documented requirements, and these requirements shouldn't be repeated everywhere.
Sybrand Bakker
Senior Oracle DBA
Experts: Those who do read documentation.
Similar Messages
-
BODI - Job Error "blank message, possibly due to insufficient memory"
I am ending up with the error below while executing a DI job. The job uses a 37 MB file as source. As the error looks memory-related, I tried splitting the input file into smaller files and executing the job; it then completed successfully without any error.
Could someone help me find the memory setting at the dataflow level that needs to be investigated, so I can get a permanent solution?
Hoping for help!
(11.7) 03-04-11 08:18:06 (E) (21097:0001) RUN-050406: |Session SM_DM_ACCESS_LOG_DTL_F_JOB|Workflow SM_DM_ACCESS_LOG_DTL_F_WF|Dataflow SM_DM_ACCESS_LOG_DTL_F_DF
Data flow <SM_DM_ACCESS_LOG_DTL_F_DF> received a bad system message. Message text from the child process is <blank message, possibly due to insufficient memory>. The process executing data flow <SM_DM_ACCESS_LOG_DTL_F_DF> has died abnormally. For NT,
please check errorlog.txt. For HPUX, please check stack_trace.txt. Please notify Customer Support.
(11.7) 03-04-11 08:18:06 (E) (21097:0001) RUN-050409: |Session SM_DM_ACCESS_LOG_DTL_F_JOB|Workflow SM_DM_ACCESS_LOG_DTL_F_WF
The job process could not communicate with the data flow <SM_DM_ACCESS_LOG_DTL_F_DF> process. For details, see previously
logged error <50406>.
(11.7) 03-04-11 08:18:06 (E) (21097:0001) RUN-050409: |Session SM_DM_ACCESS_LOG_DTL_F_JOB|Workflow SM_DM_ACCESS_LOG_DTL_F_WF
The job process could not communicate with the data flow <SM_DM_ACCESS_LOG_DTL_F_DF> process. For details, see previously
logged error <50406>.
Hi,
Loading a 37 MB file shouldn't be a problem without splitting it; I've loaded GB-sized flat files without problems.
Did you check the errorlog.txt as stated in the message? What's in there?
If you split the file and you can load it, you have enough space in your DB.
Please check the memory utilization of your server while executing the job with the single file. Maybe the server is too busy... which would be strange with a 37 MB file.
Regards
-Seb. -
Came in this fine Monday morning, and it looks like developers were running some kind of trace that filled the primary DATA folder with about 80,000 5 MB trace files. Now that process has stopped and the log files have been cleaned up, but when attempting to connect to the server using the management console I get the error:
Database 'msdb' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. (.Net SqlClient Data Provider)
When I check the status, it is in recovery pending mode. I have a backup from yesterday, but I'm not sure whether this database became corrupt before that backup, because this process was ongoing over the weekend. The last timestamp on the msdb data and log files is 7 AM this morning.
I am not sure how to proceed with recovering the msdb database while limiting interruption to the users. Any advice is extremely appreciated. This is SQL Server 2008. I can log in via sqlcmd and see it is in recovery pending status:
1> select state_desc databasestatus_sysdtabase from
2> sys.databases where name ='msdb'
3> go
select state_desc databasestatus_sysdtabase from
sys.databases where name ='msdb'
databasestatus_sysdtabase
RECOVERY_PENDING
(1 rows affected)
1>
For someone experiencing a similar problem this answer is unacceptable. You're basically telling me to reboot the server to fix this. You vaguely mention using Process Explorer to find out who is using the file. Can you please provide some more information on that? I have a similar problem, and rebooting the server to fix it is not an option. This problem keeps recurring.
OS error 32 means the file has an open handle held by someone else. If that is a user process, then you can catch it via Process Explorer. If it's a kernel-mode object, it's difficult to catch, and a restart is the only choice.
http://sqlserver-help.com/2014/08/07/tips-and-tricks-os-error-32the-process-cannot-access-the-file-because-it-is-being-used-by-another-process/
Balmukund Lakhani
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
This posting is provided "AS IS" with no warranties, and confers no rights.
Author: SQL Server 2012 AlwaysOn (Paperback, Kindle) -
ORA-12721: operation cannot execute when other sessions are active
Hi,
I started my DB like following :
1) Change INIT.ORA file; unset parallel_server parameter.
2) Execute these commands:
STARTUP MOUNT ;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
SHUTDOWN IMMEDIATE;
SQL> STARTUP RESTRICT pfile='C:\oracle\product\10.2.0\db_1\database\initORCL.ora';
ORACLE instance started.
SQL> alter database national character set INTERNAL_CONVERT UTF8;
alter database national character set INTERNAL_CONVERT UTF8
ERROR at line 1:
ORA-12721: operation cannot execute when other sessions are active
Why this error when the DB is opened in restricted mode and I'm the only user?
SQL> select count (*) from v$session;
COUNT(*)
20
Any solution?
Thank you.
Hi,
This operation is dangerous, please ensure that you have a full backup before doing that operation.
Please use this order:
SHUTDOWN IMMEDIATE;
-- make sure there is a database backup you can rely on, or create one
STARTUP MOUNT;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
ALTER DATABASE CHARACTER SET <new_character_set>;
-- an ALTER DATABASE CHARACTER SET typically takes only a few minutes or less;
-- it depends on the number of columns in the database, not the
-- amount of data.
SHUTDOWN;
Please note that :
The command requires the database to be
open but only one session, the one executing the command, is allowed.
For the above error conditions Oracle9i will report one of the errors:
ORA-12719: operation requires database is in RESTRICTED mode
ORA-12720: operation requires database is in EXCLUSIVE mode
ORA-12721: operation cannot execute when other sessions are active
Oracle9i can also report:
ORA-12718: operation requires connection as SYS
if you are not connected as SYS (INTERNAL, "/ AS SYSDBA").
Let us know if this helps.
regards,
Hub
Edited by: Hub on Dec 10, 2008 1:22 PM -
Cannot attach data store shared-memory segment using JDBC (TT0837)
I'm currently evaluating TimesTen during which I've encountered some problems.
All of the sudden my small Java app fails to connect to the TT data source.
Though I can still connect to the data source using ttisql.
Everything worked without problems until I started poking around in the ODBC administrator (Windows 2K).
I wanted to increase permanent data size so I changed some of the parameters.
After that my Java app fails to connect with the following message:
DriverManager.getConnection("jdbc:timesten:direct:dsn=rundata_tt60;OverWrite=0;threadsafe=1;durablecommits=0")
trying driver[className=com.timesten.jdbc.TimesTenDriver,com.timesten.jdbc.TimesTenDriver@addbf1]
SQLException: SQLState(08001) vendor code(837)
java.sql.SQLException: [TimesTen][TimesTen 6.0.4 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 8846, procedure "sbDbConnect()"
The TT manual hasn't really provided any good explanation what the error code means.
Obviously I've already tried restoring the original ODBC parameters, without any luck.
Ideas..anyone?
/Peter
Peter,
Not sure if you have resolved this issue or not. In any case, here is some information to look into.
- On Windows 32-bit, the allocation of the shared data segment doesn't work the same way as on Unix and Linux. As a result, the maximum TimesTen database size one can allocate is much smaller on the Windows platform than on other platforms.
- Windows error 8 means ERROR_NOT_ENOUGH_MEMORY: not enough storage is available to process this command.
- TimesTen TT0837 says the system was unable to attach a shared memory segment during a data store creation or data store connection operation.
- What was the largest successful perm-size and temp-size you used when allocating the TimesTen database?
* One explanation for why you were able to connect using ttIsql is that it doesn't use many DLLs, whereas your Java application typically loads a lot more.
* As a troubleshooting step, you can try reducing your temp size to a very small value and just see if you can connect to the data store. Eventually, you may need to reduce your perm size to get Windows to fit the shared data segment into the process address space.
By the way the TimesTen documentation has been modified to document this error as follows:
Unable to attach to a shared memory segment during a data store creation or data store connection operation.
You will receive this error if a process cannot attach to the shared memory segment for the data store.
On UNIX or Linux systems, the shmat call can fail due to one of:
- The application does not have access to the shared memory segment. In this case the system error code is EACCESS.
- The system cannot allocate memory to keep track of the allocation, or there is not enough data space to fit the segment. In this case the system error code is ENOMEM.
- The attach exceeds the system limit on the number of shared memory segments for the process. In this case the system error code is EMFILE.
It is possible that some UNIX or Linux systems will have additional possible causes for the error. The shmat man page lists the possibilities.
On Windows systems, the error could occur because of one of these reasons:
- Access denied
- The system has no handles available.
- The segment cannot be fit into the data section
Hope this helps.
-scheung -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large Datawarehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory, and then be able to
report on it through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. Sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30 GB (the limit on our UAT environment is 32 GB). The ultimate goal is to see if we can store about 50-60 GB in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our data store to hold 8 GB of data, but want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32 GB of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a data store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, but help with the above request would help speed this process up.
The current /etc/system settings are below, and I have put the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32 GB on the box?
Machine
## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
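As a back-of-the-envelope check, the shmmax value for a given target size is just that size in bytes; a sketch (32 GB is the target assumed in the advice above, adjust to your box):

```shell
# Derive the /etc/system shmmax line for a 32 GB shared memory
# segment (target size taken from the advice above).
TARGET_GB=32
SHMMAX_BYTES=$((TARGET_GB * 1024 * 1024 * 1024))
echo "set shmsys:shminfo_shmmax = $SHMMAX_BYTES"
# prints: set shmsys:shminfo_shmmax = 34359738368
```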
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
Insufficient memory to perform operation
When trying to open large tab-delimited text files (70 MB - 370 MB) using the "Read from Spreadsheet File" VI, I receive a message stating "Memory is full", followed by "insufficient memory to perform operation". It is strange because these files open normally in Excel. I get the same fault irrespective of the VI used to read from the text file. Please note that the code simply reads and displays, with no other operations taking place. Any ideas please?
Hi caperng,
have you read through the KB articles provided by muks?
Let's do some calculations with your 370 MB text file:
- loading the file into a string: takes 370 MB
- converting the string to an array: takes (at minimum) another 370 MB chunk of memory (yes, "Spreadsheet String to Array" takes a lot of memory!!!)
- displaying the array in an indicator: takes (at minimum) another 370 MB chunk of memory (each indicator has its own copy of the data!)
So:
code that "simply reads and displays" a 370 MB text file takes (at minimum) 1110 MB of memory...
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
hi,
I am running the command below to move SQL Server mdf and ldf files from one drive to another (C drive to D drive),
but I am getting the error below:
SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\abc.mdf". Operating system error 2: "2(The system cannot find the file specified.)".
use master
DECLARE @DBName nvarchar(50)
SET @DBName = 'CMP_143'
DECLARE @RC int
EXEC @RC = sp_detach_db @DBName
DECLARE @NewPath nvarchar(1000)
--SET @NewPath = 'E:\Data\Microsoft SQL Server\Data\';
SET @NewPath = 'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\';
DECLARE @OldPath nvarchar(1000)
SET @OldPath = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\';
DECLARE @DBFileName nvarchar(100)
SET @DBFileName = @DBName + '.mdf';
DECLARE @LogFileName nvarchar(100)
SET @LogFileName = @DBName + '_log.ldf';
DECLARE @SRCData nvarchar(1000)
SET @SRCData = @OldPath + @DBFileName;
DECLARE @SRCLog nvarchar(1000)
SET @SRCLog = @OldPath + @LogFileName;
DECLARE @DESTData nvarchar(1000)
SET @DESTData = @NewPath + @DBFileName;
DECLARE @DESTLog nvarchar(1000)
SET @DESTLog = @NewPath + @LogFileName;
DECLARE @FILEPATH nvarchar(1000);
DECLARE @LOGPATH nvarchar(1000);
SET @FILEPATH = N'xcopy /Y "' + @SRCData + N'" "' + @NewPath + '"';
SET @LOGPATH = N'xcopy /Y "' + @SRCLog + N'" "' + @NewPath + '"';
exec xp_cmdshell @FILEPATH;
exec xp_cmdshell @LOGPATH;
EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
go
Can anyone please help with how to set the DB offline? Currently I stopped the SQL Server service from services.msc and started the SQL Server Agent.
Should I stop both services when moving files from one drive to another?
Note: I tried the solution below, but it didn't work:
ALTER DATABASE <DBName> SET OFFLINE WITH ROLLBACK IMMEDIATE
Update:
Now I am getting the message:
Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
(3 row(s) affected)
(3 row(s) affected)
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)".
First, you should have checked the database mdf/ldf names and location by using the command:
Use CMP_143
Go
Sp_helpfile
Looks like your database CMP_143 was successfully detached, but the mdf/ldf location or name was different, which is why the files did not get copied to the target location.
The database is already detached; that's why taking the DB offline failed:
Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
The attach step is failing as there is no mdf file at that location:
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)"
Solution:
Search for the physical files (mdf/ldf) in the OS, copy them to the target location, and then re-run sp_attach_db with the right location and names of the mdf/ldf files. -
I have the following:-
VMware Workstation version 9, with Windows Server 2008 R2 Datacenter installed.
- I have installed the windows 2008 R2 inside the VM using an iso image.
- The host is windows 7.
The VM used to work well, but after adding a new VM to the same workstation, I started getting the following error when starting my old VM:
ramdisk device creation failed due to insufficient memory.
And on the Windows Boot Manager screen it said to:
insert my Windows installation disc and restart my PC
click "Repair your computer"
but I'm not sure whether this will fix the problem, bearing in mind that the VM is assigned 24 GB of RAM and an 80 GB hard disk.
So can anyone advise what is causing this error?
Thanks
You might start by checking the RAM.
http://windows.microsoft.com/en-US/windows7/Diagnosing-memory-problems-on-your-computer
Regards, Dave Patrick ....
Microsoft Certified Professional
Microsoft MVP [Windows]
Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights. -
Hi,
I found the thread "Cannot attach data store shared-memory segment using JDBC (TT0837)", but it doesn't help me.
I encounter this issue in Windows XP, and application gets connection from jboss data source.
url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
username=test
password=test
Error information:
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
What confuses me is that if I use plain JDBC, there is no such error:
Connection conn = DriverManager.getConnection("url", "username", "password");
Regards,
Nesta
I think error 8 is:
net helpmsg 8
Not enough storage is available to process this command.
If I'm wrong, I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test), does this allow JBoss to load it?
You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
"Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
You can use tools like the free "Process Explorer" to see the used address ranges in your process.
Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses." -
Adobe Illustrator CC 2014: insufficient memory was available to complete the operation
Hi,
Today I saw this error message.
I'm using Illustrator CC 2014. "Insufficient memory was available to complete the operation."
I opened a JPG image (832 KB).
I don't know the solution to this error.
Please answer me.
System Info.
Windows 8.1 K, 64-bit
16GB Ram
NVIDIA Geforce GTX 870M
HDD 1 TB
Check the JPEG file in another program. It's probably damaged or contains some odd metadata that throws AI off.
Mylenium -
Insufficient memory was available to complete the operation
Hello,
I am having trouble using the "Place" function to drop a JPEG of approximately 7.6 MB into Illustrator CS5. I am creating cross-sections for drill hole planning, and I use the images exported from Geosoft's Target as a base. From there I interpret the geology and precious metal occurrences, and plan drill holes in Illustrator. I am by no means an expert with Creative Suite, but I know enough to get around. My computer has an i7 processor, 8 GB RAM, and 750 GB of hard drive space. My intentions are not to be rude, but do not waste my time assuming I am a complete idiot (e.g. I rebooted my machine, restarted the program, checked to make sure I have plenty of space available, etc...). I worked all night last night without issue, and create all of my cross-sections using this method. I typically never get any issues, but this problem has randomly cropped up twice (the last time it eventually stopped). Unfortunately, I do not have time to magically wait for Illustrator to right itself, as I need to present to my Sr. VP on Monday.
Cutting to the chase:
I exported a drill hole fence from Target as a JPEG with a file size of 7.6 MB. I then opened up my cross-section template (.ai file) in Illustrator and went to File | Place to access the file (I also tried File | Open). Rather than performing the operation, I get the error message "Insufficient memory was available to complete this operation". I then restarted my computer and attempted this again. My CPU usage is fluttering at 3% +/- 2%, and the physical memory is at 31% usage. This seems pretty "buggy" to me, but as I said, I am a beginner user and perform very basic operations in Illustrator. I have looked around the forums and have seen this issue come up several times, sometimes with very complex advice. I have no plug-ins installed, and have not fiddled with any settings.
Thanks ahead of time for any help provided.
Scot
Thank you, Mike! That worked for me. Although it doesn't really answer the question, I am not too concerned, since 99% of the time I have no problem. I guess that is a follow-up to the other suggestions as well. I worked from 5 PM to midnight last night and made several end-product cross-sections. Then this morning it just decided that it was going to stop working.
For now I am not going to ask too many questions, and will just avoid directly importing the JPEGs from Target. When I get more time, perhaps I will try to actually get to the root of the problem as opposed to using a workaround. -
Getting Error : Cannot attach data store shared-memory segment,
HI Team,
I am trying to integrate Timesten IMDB in my application.
Machine details
Windows 2003, 32 bit, 4GB RAM.
IMDB DB details
Permanent size 500MB, temp size 40MB.
If I try to connect to the database using ttIsql, it connects. But if I try to connect from my Java application, I get the following exception.
java.sql.SQLException: [TimesTen][TimesTen 11.2.1.3.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store shared-memory segment, error 8 -- file "db.c", lineno 7966, procedure "sbDbCreate"
at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3269)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3418)
at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3383)
at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:787)
at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1800)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:207)
The maximum permanent size that works with the Java application is 100 MB, but that would not be enough for our use.
Could anybody let me know how to resolve this error, or the reason for it? Any response would be appreciated.
Thanks in Advance,
Regards,
atul
This is a very common problem on 32-bit Windows. A TimesTen datastore is a single region of 'shared memory' allocated as a shared mapping from the paging file. In 'direct mode', when the application process (in your case either ttIsql or the JVM) 'connects' to the datastore, the datastore memory region is mapped into the process address space. In order for this to happen, there must be a free region in the process address space that is at least the size of the datastore. This region must be contiguous (i.e. a single region). Unfortunately, the process memory map in 32-bit Windows is typically highly fragmented, and the more DLLs a process uses the worse this is. Also, JVMs typically use a lot of memory, depending on configuration.
Your options to solve this are really limited to:
1. Significantly reduce the memory used by the JVM (may not be possible).
2. Use a local client/server connection from Java instead of a direct mode connection. To minimise the performance overhead, make sure you use the optimised ShmIpc connectivity rather than TCP/IP. Even with this, there is likely to be a >50% reduction in performance compared to direct mode.
3. Switch to 64-bit Windows, 64-bit TimesTen and 64-bit Java. Even without adding any extra memory to your machine, this will very likely fix the problem.
Option (3) is by far the best one.
Regards,
Chris -
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140 GB of memory available and a 64 GB shmmax, we are unable to increase the PermSize to anything more than 14 GB. When I change it to PermSize=15359, I get the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have 144 GB of RAM and a 64 GB shmmax allocated! Any help is much appreciated.
Regards,
Raj
Those parameters look OK for a 100GB shared memory segment. Also check the following:
ulimit is a mechanism that restricts the amount of system resources a process can consume. Your instance administrator user (the user who installed Oracle TimesTen) needs to be allocated enough lockable memory to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware: sometimes ulimit is set in the instance administrator's ~/.bashrc or ~/.bash_profile file, which can override what's set in /etc/security/limits.conf.
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize + TempSize + LogBufMB + 64 MB overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
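The sizing arithmetic above is easy to script. A minimal sketch using the example figures from this post (the sizes are the illustrative values above, and a 2 MB hugepage size is assumed, as on the Exalytics machine):

```shell
# Recompute the Hugepages count from the example TimesTen database sizing.
PERMSIZE_MB=250000      # PermSize from sys.odbc.ini (unit is MB)
TEMPSIZE_MB=100000      # TempSize
LOGBUF_MB=1024          # LogBufMB
OVERHEAD_MB=64          # fixed overhead

TOTAL_MB=$((PERMSIZE_MB + TEMPSIZE_MB + LOGBUF_MB + OVERHEAD_MB))
HUGEPAGE_MB=2           # hugepage size: 2048 kB = 2 MB
NR_HUGEPAGES=$((TOTAL_MB / HUGEPAGE_MB))

echo "Total memory: ${TOTAL_MB} MB"
echo "vm.nr_hugepages=${NR_HUGEPAGES}"
```

The last line is exactly the value to place in /etc/sysctl.conf.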
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example, this would be 351088*1024 = 359514112 KB.
oracle hard memlock 359514112
oracle soft memlock 359514112
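The memlock value is just the same total converted from MB to KB; as a quick check of the example above:

```shell
# Convert the example total database size (MB) into the memlock value (KB)
# expected in /etc/security/limits.conf.
TOTAL_MB=351088
MEMLOCK_KB=$((TOTAL_MB * 1024))
echo "oracle hard memlock ${MEMLOCK_KB}"
echo "oracle soft memlock ${MEMLOCK_KB}"
```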
THIS IS VERY IMPORTANT: for the above changes to take effect, you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to apply the changes.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check Hugepages has been setup correctly, look for Hugepages_Total
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544
Trying to update my GPS app I get a false message saying I have insufficient memory. I tried deleting the app to reinstall it, but it seems like it is trying to charge me again. At any rate, why, with sufficient memory, will it never update this app without reinstalling it? (I may be off one OS number)
You will need 3 to 5 times the app size in free space during the app's installation. Once the install is complete, you can reuse that free space. If I have to update a big app like Navigon or TomTom, I remove my music videos or a movie or two. After the app is installed, I can sync again to replace the items I removed.
If you are trying to download the app again from the App Store, you have to click the "Buy" button and you will get a message about your credit card being charged, but when you continue, you'll get the opportunity to download the app again without charge. This explains how the process works and shows you the screens you will see:
How to redownload purchased apps from the App Store, http://support.apple.com/kb/HT2519