Database changed or Memory Unavailable Error in Album
Hi -
I updated my XZ1 to 4.4.2. In Album there is a Faces option, which detects faces and asks you to add names. When I try to add a name, I get the message "Database Changed or Memory unavailable". The other problem I am facing is that my camera storage used to be on the SD card. I can still see the photos on the SD card through File Commander, but Album does not show any of them, although it does show the album art from my media folder. I think the two problems are connected: Album seems to keep a cache of faces somewhere (I have cleared its data and cache multiple times), but it is not recognising the actual photo database for some reason, so it cannot write the name information to the photos when I select the name option.
Any suggestions?
Hi,
I suggest that you try to clear the data for both Album and Media Storage under Settings > Apps > All > Album/Media Storage > Clear data. Then restart the phone and wait for it to index all media files again.
Similar Messages
-
Error While Polling Database changes using Custom SQL
Hi,
I am using the Oracle SOA Suite 11g DbAdapter to poll for database changes.
My source database is DB2, and I am using a custom query to poll for changes in the source database.
I am using the "Update an External Sequencing Table on a Different Database" polling strategy.
The query section of the mapping file is as follows:
<querying xsi:type="query-policy">
<queries>
<query name="ReceiveF554102TXEDataSelect" xsi:type="read-all-query">
<call xsi:type="sql-call">
<sql><![CDATA[SELECT FILE_NAME, LIBRARY_NAME,
USER_NAME, SEQUENCE_NUM,
JOURNAL_CODE, ENTRY_TYPE, TIME_STAMP, JOB_NAME, JOB_USER,
JOB_NUMBER, PROGRAM_NAME, ARM_NUMBER, ADDRESS_FAMILY, RMT_ADDRESS,
REMOTE_PORT, SYSTEM_NAME, REL_RECORD, OTRCLN, IBMCU, IBITM, IBLITM,
IBAITM, IBY55ETAFR, IBY55ETATH, IBY55NUM01, IBY55NUM02, IBY55NUM03,
IBY55NUM04, IBY55NUM05, IBY55NUM06, IBY55NUM07, IBY55NUM08,
IBY55NUM09, IBY55NUM10, IBELM01, IBELM02, IBELM03, IBELM04, IBELM05, IBY55STR01,
IBY55STR02, IBY55STR03, IBY55STR04, IBY55STR05, IBY55CHA01,
IBY55CHA02, IBY55CHA03, IBY55CHA04, IBY55CHA05, IBY55DAT01, IBY55DAT02,
IBY55DAT03, IBY55DAT04, IBY55DAT05, IBUSER, IBPID, IBJOBN, IBUPMT,
IBUPMJ FROM PY_JRNMON.F554102T WHERE ((TIME_STAMP > (SELECT LAST_READ_DATE FROM SOALIB.SOASEQHDR WHERE (TABLE_NAME = 'F554102T'))) AND (TIME_STAMP < SYSDATE)) ORDER BY TIME_STAMP ASC]]> </sql>
</call>
<reference-class>ReceiveF554102TXEData.F554102T</reference-class>
<lock-mode>none</lock-mode>
<container xsi:type="list-container-policy">
<collection-type>java.util.Vector</collection-type>
</container>
</query>
</queries>
<delete-query xsi:type="delete-object-query">
<call xsi:type="sql-call">
<sql>UPDATE SOALIB.SOASEQHDR SET LAST_READ_DATE = #LAST_READ_DATE WHERE (TABLE_NAME = 'F554102T')</sql>
</call>
</delete-query>
</querying>
I am getting the following error after defining the custom SQL in my mapping.xml file (after executing the created process):
Caused by: BINDING.JCA-11622
Could not create/access the TopLink Session.
This session is used to connect to the datastore.
Caused by BINDING.JCA-11626
Query Not Found Exception.
Could not find Named Query [ReceiveSampleCustomPollSelect] with ? arguments, belonging to Descriptor [ReceiveSampleCustomPoll.F554102T] for [oracle.tip.adapter.db.DBActivationSpec@2ec733b0] inside of mappings xml file: [ReceiveSampleCustomPoll-or-mappings.xml].
You may have changed the queryName info in either the _db.jca or toplink-mappings.xml files so that they no longer match.
Make sure that the queryName in the activation/interactionSpec and in the Mappings.xml file match. If an old version of the descriptor has been loaded by the database adapter, you may need to bounce the app server. If the same descriptor is described in two separate Mappings.xml files, make sure both versions include this named query.
You may need to configure the connection settings in the deployment descriptor (i.e. DbAdapter.rar#META-INF/weblogic-ra.xml) and restart the server. This exception is considered not retriable, likely due to a modelling mistake. This polling process will shut down, unless the fault is related to processing a particular row, in which case polling will continue but the row will be rejected (faulted).
at oracle.tip.adapter.db.exceptions.DBResourceException.createNonRetriableException(DBResourceException.java:653)
at oracle.tip.adapter.db.exceptions.DBResourceException.createEISException(DBResourceException.java:619)
at oracle.tip.adapter.db.exceptions.DBResourceException.couldNotCreateTopLinkSessionException(DBResourceException.java:291)
at oracle.tip.adapter.db.DBManagedConnectionFactory.acquireSession(DBManagedConnectionFactory.java:883)
at oracle.tip.adapter.db.transaction.DBTransaction.getSession(DBTransaction.java:375)
at oracle.tip.adapter.db.DBConnection.getSession(DBConnection.java:266)
at oracle.tip.adapter.db.InboundWork.init(InboundWork.java:322)
at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:526)
... 4 more
Caused by: BINDING.JCA-11626
Query Not Found Exception.
Could not find Named Query [ReceiveSampleCustomPollSelect] with ? arguments, belonging to Descriptor [ReceiveSampleCustomPoll.F554102T] for [oracle.tip.adapter.db.DBActivationSpec@2ec733b0] inside of mappings xml file: [ReceiveSampleCustomPoll-or-mappings.xml].
You may have changed the queryName info in either the _db.jca or toplink-mappings.xml files so that they no longer match.
Make sure that the queryName in the activation/interactionSpec and in the Mappings.xml file match. If an old version of the descriptor has been loaded by the database adapter, you may need to bounce the app server. If the same descriptor is described in two separate Mappings.xml files, make sure both versions include this named query.
at oracle.tip.adapter.db.exceptions.DBResourceException.queryNotFoundException(DBResourceException.java:694)
at oracle.tip.adapter.db.ox.TopLinkXMLProjectInitializer.initializeQuery(TopLinkXMLProjectInitializer.java:1238)
at oracle.tip.adapter.db.DBManagedConnectionFactory.acquireSession(DBManagedConnectionFactory.java:728)
... 8 more
I need help fitting custom SQL into the mapping.xml file for complex queries.
Thanks,
Arun Jadhav
Use "Execute Custom SQL" in the DB adapter and you'll get it.
But in this case you'll have to implement your own polling strategy (if you need one).
If you want to use one of the predefined polling strategies, you should use "Poll for New or Changed Records", import all the tables you use, and connect them in the wizard. -
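The "Query Not Found" error above means the query name referenced in the `_db.jca` activation spec no longer matches a query defined in the TopLink mappings file. One quick sanity check is to parse the mappings file and verify the name exists. This is an illustrative sketch only: it uses a simplified inline document, and the real mappings file carries `xsi:type` attributes and namespaces that are omitted here.

```python
import xml.etree.ElementTree as ET

# Name referenced by the activation spec (taken from the post above).
JCA_QUERY_NAME = "ReceiveF554102TXEDataSelect"

# Simplified stand-in for the toplink-mappings.xml content; the real file
# has xsi:type attributes and namespace declarations not shown here.
MAPPINGS_XML = """\
<querying>
  <queries>
    <query name="ReceiveF554102TXEDataSelect"/>
  </queries>
</querying>
"""

def named_queries(mappings_xml):
    """Collect every query name defined in a mappings document."""
    root = ET.fromstring(mappings_xml)
    return {q.get("name") for q in root.iter("query")}

names = named_queries(MAPPINGS_XML)
print(JCA_QUERY_NAME in names)  # True when the two files agree
```

If this prints False for your real files, the queryName in the activation/interactionSpec and the name in the mappings file have drifted apart, which is exactly what the BINDING.JCA-11626 message describes.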
"Memory Full" error when changing page setup
I'm running Crystal Reports 2008, and yesterday two of my landscape reports began doing some strange things:
Symptoms:
-Any change to the Page Setup dialog throws the Memory Full error (even just bringing it up and clicking ok)
-The reports were created as landscape but are displaying as portrait in Page Setup and in design view
-A handful of fields are displayed without text in them. If I delete these fields, save, and re-open, other fields will not display information (in design view)
Troubleshooting:
-If I remove ALL fields from the report, save, and re-open, the report tells me I have an invalid printer but will allow me to change Page Setup.
*While this works, I really do not want to create the reports again from scratch.
-I have tried removing and re-installing Crystal 2008 with no luck. However, it remembered my recently opened reports which makes me think the uninstall didn't remove everything.
-I have rebooted to make sure all memory available is free.
Does anyone have troubleshooting ideas for me? Thanks in advance.
The link takes me to a page saying "The system cannot find the file specified." I also tried downloading the .exe version and got the same result.
I was able to find an SP3 download, but after unpacking the files and selecting my language I get the message "Crystal Reports 2008 SP3 Update can not install because the version of the product on the system is too low."
I downloaded the SP3 file from here: http://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReports2008-VersionandDownloadinformationforSPsand+FPs
I also tried this one with the same effect: Crystal Reports 2008 Reference [original link is broken]
I assume I downloaded an update as opposed to the "full version"; is there another full version available somewhere? -
I can’t believe the amount of concern / disappointment / frustration spread across every associated Fireworks forum re: CS5’s
"An internal error occurred"
"Could not render the database"
"Not enough memory"
"Crash without notification"
Etc.,
We installed the CS5 trial, being more than wary about Adobe's past releases, and lo and behold, all of the above held true!
This is nothing new of course, we’ve all been experiencing this since CS3 – but hoping against hope - a newer release would solve the poor memory management and general ‘bug-ridden’ code; alas - as per usual, Adobe has not responded with any pro-solution based action, but successfully furthered our frustration with a couple of fresh gimmicks without strengthening the core software.
As an avid fan of adobe software […and a Fireworks freak] working in a design house that has many different employees with widespread software tastes – I eventually said ‘enough is enough’ after the umpteenth crash [as of 3 weeks ago] and have revisited the Rebel Alliance; ‘yes, CorelDraw’! No I’m not going to go into some tirade about how much better Corel is etc. - as it has its own strengths and weaknesses, too […but without the hourly crashes] – so to be honest, we figure the time increase in some projects due to using Corel […time is diminishing with each project’s acquired experience] are negated by the downtime of Fireworks; so far, this is holding true.
This may seem drastic, but it has been a long time coming – that is, implementing a move from our decade invested workflow to a ‘somewhat’ new schema, but due to the disappointing aforementioned, eventually principal / expenditure comparisons / sanity / lack of support all culminate to such, and if there’s no support for the competition, well – then there is ‘no’ competition; a luxury Adobe has taken for granted way too long.
I truly hope Adobe turns around and fixes their ways; until then, we'll be supporting those that do, and hopefully along the way, just maybe, with the added funds from disgruntled Adobe expats, the software will far exceed what I used to love and adore [...how I miss Macromedia]; honestly, it would be near on impossible to argue which suite was better 'either way', so it may not be such a distant future. Besides, with the market-door Adobe is opening due to such poor software, the new player sniffing around the edges will be welcomed by many with open arms; I know my/our allegiance will go straight to the company with the greatest software stability and sound support, whoever that may be.
So here's to hoping no more, and actually doing!
Very sad... I am also getting this error message. It happens randomly. Adobe help told me to close and reopen FW and then it stopped happening. Does this sound like a good solution to anyone?
-
Here is a ticket regarding our current client web application (image data add, edit, and delete in a folder, with form data in an MSSQL database). It uses C#, Web Forms, AJAX, VS2008, and MSSQL Server 2008. It appears that an HTTP 503 error occurs intermittently.
Below is a conversation with the host server's support assistant. Can you take a look at it?
Ben (support) - Hi
Customer - We're having an issue with our windows host
Ben (support) - What's the issue?
Customer - 503 errors
Ben (support) - I am not getting any 503 errors on your site, is there a specific url to duplicate the error?
Customer - no, it comes and goes without any change
Customer - could we have access to any logs?
Ben (support) - Error logs are only available on Linux shared hosting; however, this error may be related to you reaching your concurrent connection limit
Ben (support) - You can review more about this at the link:
Customer - probably yes - how can we troubleshoot ?
Ben (support) - http://support.godaddy.com/help/article/3206/how-many-visitors-can-view-my-site-at-once
Ben (support) - For this you need to review your code and databases to make sure they are closing connections in a timely manner
Customer - we're low traffic, this is an image DB to show our product details to our customers
Customer - ahhhh, so we could have straying sessions ?
Ben (support) - Correct
Customer - any way you could check if it's the case?
Customer - because it was working previously
Ben (support) - We already know that's the case: as you stated, the 503 errors don't happen all the time; if it were an issue on the server, the 503 would stay.
Customer - so our 2 or 3 concurrent users can max out the 200 sessions
Customer - correct ?
Customer - is there a timeout ?
Ben (support) - no, that's not a timeout; concurrent connections are a little different from sessions and/or connections. Let's say, for example, you have 5 images on your site and 7 users come to your site: that is not 7 concurrent connections but 35. They do close after a while, hence why the 503 error comes and goes. You can have these connections close sooner in code, but this is something you have to research using your favorite search engine
Customer - thank you so much
Customer - I'm surprised that this just started a few weeks ago when we haven't changed anything for months
Customer - any changes from your side ? lowering of the value maybe ?
Customer - I'm trying to understand what I can report as a significant change
Ben (support) - We haven't touched that limit in years
Ben (support) - This could just be more users to your site than normal or even more images
Customer - I was thinking that could be it indeed
Customer - so I need to research how to quickly close connections when not needed
Ben (support) - Correctly
Ben (support) - correct
Customer - thanks !!
Ben (support) - You're welcome
Analysis :
The link provided tells us : All Plesk accounts are limited to 200 simultaneous visitors.
From what Ben (support) says and a little extra research, if those aren't visitors but connections then it's quite easy to max out, especially if the connections aren't closed when finished using. I'd suggest forwarding this to Kasem to see what he thinks.
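The fix Ben points at, closing connections promptly instead of letting them linger until the host times them out, is language-agnostic. The application in the thread is C#/ADO.NET, where `using (var conn = new SqlConnection(...)) { ... }` disposes the connection deterministically; the same idea is sketched below in Python with sqlite3 as a stand-in database, purely for illustration (the table and function names are hypothetical).

```python
import sqlite3

def fetch_product_count(db_path):
    # Open late, close deterministically: the point Ben makes about
    # connections lingering on the shared host until they time out.
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS products (id INTEGER)")
        conn.execute("INSERT INTO products VALUES (1)")
        (count,) = conn.execute("SELECT COUNT(*) FROM products").fetchone()
    # sqlite3's context manager commits/rolls back but does NOT close,
    # so close explicitly rather than leaving it to garbage collection.
    conn.close()
    return count

print(fetch_product_count(":memory:"))  # 1
```

With a pattern like this, each page request holds a connection only for the duration of its query, so a handful of visitors is far less likely to pile up toward the host's 200-connection ceiling.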
Cheers,
Customer
Hi Md,
Thank you for posting in the MSDN forum.
>> I want to write C# code so that on a 503 Service Unavailable error the web application immediately closes the connection for any loaded page.
Since this is the Visual Studio General forum, which discusses VS IDE issues, I am afraid you have posted the issue in an incorrect forum.
To help you find the correct forum, would you mind letting us know more about this issue? Which kind of web app are you developing in C#? Is it an ASP.NET Web Application?
If yes, I suggest you post the issue directly on the ASP.NET forum, which would better support your issue.
Thanks for your understanding.
Best Regards,
-
Cloning a database to the same host gives error: unable to re-create online log
I am trying to clone a database to the same host.
Oracle version: 9.2.0.5
OS: HP-UX
Target DB: tardb
Catalog: catlog
Auxiliary: auxbr
After running the script below, I get the error "unable to re-create online log".
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/08/2009 23:17:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-00344: unable to re-create online log '/db/app/oracle/product/9.2.0.5/dbs/ /db
/redolog.001/catalog1/catalog1_log1.rdo'
ORA-27040: skgfrcre: create error, unable to create file
HP-UX Error: 2: No such file or directory
I tried to shut down, mount, and recover using the backup controlfile until cancel, and then open with resetlogs,
but I am not able to make a copy of the database on the same host.
Help is appreciated.
Complete error log:
oracle@dimondz{auxbr}/db/app/oracle/dba/sql> sqlplus '/as sysdba'
SQL*Plus: Release 9.2.0.5.0 - Production on Mon Jun 8 23:11:23 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/db/app/oracle/admin/catlog/pfile/initauxbr.ora';
ORACLE instance started.
Total System Global Area 219115512 bytes
Fixed Size 737272 bytes
Variable Size 83886080 bytes
Database Buffers 134217728 bytes
Redo Buffers 274432 bytes
SQL> create spfile from pfile='/db/app/oracle/admin/catlog/pfile/initauxbr.ora';
File created.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.5.0 - Production
oracle@dimondz{auxbr}/db/app/oracle/dba/sql>
oracle@dimondz{auxbr}/db/app/oracle/dba/sql> rman
Recovery Manager: Release 9.2.0.5.0 - 64bit Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> connect catalog rman/rman@catlog
connected to recovery catalog database
RMAN> connect target sys/dimondz@tardb
connected to target database: tardb (DBID=3063303886)
RMAN> connect auxiliary sys/dimondz@auxbr
connected to auxiliary database: auxbr (not mounted)
RMAN>
RMAN> RUN
2> {
3> SET NEWNAME FOR DATAFILE 1 TO '/db/catalog1.001/oradata/system01.dbf';
4>
5> SET NEWNAME FOR DATAFILE 2 TO '/db/catalog1.001/oradata/undotbs01.dbf';
6>
SET NEWNAME FOR DATAFILE 3 TO '/db/catalog1.001/oradata/ptest01.dbf';
7> 8>
9> SET NEWNAME FOR DATAFILE 4 TO '/db/catalog1.001/oradata/users_01.dbf';
10>
11> SET NEWNAME FOR DATAFILE 5 TO '/db/catalog1.001/oradata/drsys_01.dbf';
12>
13> SET NEWNAME FOR DATAFILE 6 TO '/db/catalog1.001/oradata/qms_dat_01.dbf';
14>
15> SET NEWNAME FOR DATAFILE 7 TO '/db/catalog1.001/oradata/ultradat_01.dbf';
16>
17> SET NEWNAME FOR DATAFILE 11 TO '/db/catalog1.001/oradata/xmltbs_01.dbf';
18>
19> DUPLICATE TARGET DATABASE TO auxbr
20>
21> pfile=/db/app/oracle/admin/catlog/pfile/initauxbr.ora
22> logfile
23> ' /db/redolog.001/catalog1/catalog1_log1.rdo' size 5m,
24> ' /db/redolog.003/catalog1/catalog1_log2.rdo' size 5m,
25> ' /db/redolog.002/catalog1/catalog1_log3.rdo' size 5m;
26> }
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting Duplicate Db at 08-JUN-09
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=11 devtype=DISK
printing stored script: Memory Script
set until scn 229907626;
set newname for datafile 1 to
"/db/catalog1.001/oradata/system01.dbf";
set newname for datafile 2 to
"/db/catalog1.001/oradata/undotbs01.dbf";
set newname for datafile 3 to
"/db/catalog1.001/oradata/ptest01.dbf";
set newname for datafile 4 to
"/db/catalog1.001/oradata/users_01.dbf";
set newname for datafile 5 to
"/db/catalog1.001/oradata/drsys_01.dbf";
set newname for datafile 6 to
"/db/catalog1.001/oradata/qms_dat_01.dbf";
set newname for datafile 7 to
"/db/catalog1.001/oradata/ultradat_01.dbf";
set newname for datafile 11 to
"/db/catalog1.001/oradata/xmltbs_01.dbf";
restore
check readonly
clone database
executing script: Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 08-JUN-09
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /db/catalog1.001/oradata/system01.dbf
restoring datafile 00002 to /db/catalog1.001/oradata/undotbs01.dbf
restoring datafile 00003 to /db/catalog1.001/oradata/ptest01.dbf
restoring datafile 00004 to /db/catalog1.001/oradata/users_01.dbf
restoring datafile 00005 to /db/catalog1.001/oradata/drsys_01.dbf
restoring datafile 00006 to /db/catalog1.001/oradata/qms_dat_01.dbf
restoring datafile 00007 to /db/catalog1.001/oradata/ultradat_01.dbf
restoring datafile 00011 to /db/catalog1.001/oradata/xmltbs_01.dbf
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/dump/DBA/RMAN/tardb_bkup/db_tardb_f_04kgt77h tag=WHOLE_DATABASE_tardb params=NULL
channel ORA_AUX_DISK_1: restore complete
Finished restore at 08-JUN-09
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "auxbr" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 907
LOGFILE
GROUP 1 ' /db/redolog.001/catalog1/catalog1_log1.rdo' SIZE 5242880 ,
GROUP 2 ' /db/redolog.003/catalog1/catalog1_log2.rdo' SIZE 5242880 ,
GROUP 3 ' /db/redolog.002/catalog1/catalog1_log3.rdo' SIZE 5242880
DATAFILE
'/db/catalog1.001/oradata/system01.dbf'
CHARACTER SET UTF8
printing stored script: Memory Script
switch clone datafile all;
executing script: Memory Script
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=689037446 filename=/db/catalog1.001/oradata/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=689037446 filename=/db/catalog1.001/oradata/ptest01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=689037446 filename=/db/catalog1.001/oradata/users_01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=4 stamp=689037446 filename=/db/catalog1.001/oradata/drsys_01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=5 stamp=689037446 filename=/db/catalog1.001/oradata/qms_dat_01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=6 stamp=689037446 filename=/db/catalog1.001/oradata/ultradat_01.dbf
datafile 11 switched to datafile copy
input datafilecopy recid=7 stamp=689037446 filename=/db/catalog1.001/oradata/xmltbs_01.dbf
printing stored script: Memory Script
set until scn 229907626;
recover
clone database
delete archivelog
executing script: Memory Script
executing command: SET until clause
Starting recover at 08-JUN-09
using channel ORA_AUX_DISK_1
starting media recovery
channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=3
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=4
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/dump/DBA/RMAN/tardb_bkup/db_arch_tardb_f_06kgt7ih tag=ARCHIVE_LOG_tardb_BACKUP params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=/oradump/oradata/catlog/arch/catlog-1231663410_1_3.arc thread=1 sequence=3
channel clone_default: deleting archive log(s)
archive log filename=/oradump/oradata/catlog/arch/catlog-1231663410_1_3.arc recid=1 stamp=689037447
archive log filename=/oradump/oradata/catlog/arch/catlog-1231663410_1_4.arc thread=1 sequence=4
channel clone_default: deleting archive log(s)
archive log filename=/oradump/oradata/catlog/arch/catlog-1231663410_1_4.arc recid=2 stamp=689037447
media recovery complete
Finished recover at 08-JUN-09
printing stored script: Memory Script
shutdown clone;
startup clone nomount pfile= '/db/app/oracle/admin/catlog/pfile/initauxbr.ora';
executing script: Memory Script
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 219115512 bytes
Fixed Size 737272 bytes
Variable Size 83886080 bytes
Database Buffers 134217728 bytes
Redo Buffers 274432 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "auxbr" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 907
LOGFILE
GROUP 1 ' /db/redolog.001/catalog1/catalog1_log1.rdo' SIZE 5242880 ,
GROUP 2 ' /db/redolog.003/catalog1/catalog1_log2.rdo' SIZE 5242880 ,
GROUP 3 ' /db/redolog.002/catalog1/catalog1_log3.rdo' SIZE 5242880
DATAFILE
'/db/catalog1.001/oradata/system01.dbf'
CHARACTER SET UTF8
printing stored script: Memory Script
catalog clone datafilecopy "/db/catalog1.001/oradata/undotbs01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/ptest01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/users_01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/drsys_01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/qms_dat_01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/ultradat_01.dbf";
catalog clone datafilecopy "/db/catalog1.001/oradata/xmltbs_01.dbf";
switch clone datafile all;
executing script: Memory Script
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/undotbs01.dbf recid=1 stamp=689037456
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/ptest01.dbf recid=2 stamp=689037456
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/users_01.dbf recid=3 stamp=689037456
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/drsys_01.dbf recid=4 stamp=689037457
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/qms_dat_01.dbf recid=5 stamp=689037457
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/ultradat_01.dbf recid=6 stamp=689037457
cataloged datafile copy
datafile copy filename=/db/catalog1.001/oradata/xmltbs_01.dbf recid=7 stamp=689037457
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=689037456 filename=/db/catalog1.001/oradata/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=689037456 filename=/db/catalog1.001/oradata/ptest01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=689037456 filename=/db/catalog1.001/oradata/users_01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=4 stamp=689037457 filename=/db/catalog1.001/oradata/drsys_01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=5 stamp=689037457 filename=/db/catalog1.001/oradata/qms_dat_01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=6 stamp=689037457 filename=/db/catalog1.001/oradata/ultradat_01.dbf
datafile 11 switched to datafile copy
input datafilecopy recid=7 stamp=689037457 filename=/db/catalog1.001/oradata/xmltbs_01.dbf
printing stored script: Memory Script
Alter clone database open resetlogs;
executing script: Memory Script
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/08/2009 23:17:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-00344: unable to re-create online log '/db/app/oracle/product/9.2.0.5/dbs/ /db
/redolog.001/catalog1/catalog1_log1.rdo'
ORA-27040: skgfrcre: create error, unable to create file
HP-UX Error: 2: No such file or directory
RMAN>
I tried to shut down and open with resetlogs, but it still errors:
oracle@dimondz{auxbr}/db/app/oracle/dba/sql> sqlplus '/as sysdba'
SQL*Plus: Release 9.2.0.5.0 - Production on Mon Jun 8 23:27:05 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.5.0 - Production
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 219115512 bytes
Fixed Size 737272 bytes
Variable Size 83886080 bytes
Database Buffers 134217728 bytes
Redo Buffers 274432 bytes
Database mounted.
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 229907626 generated at 06/06/2009 11:58:09 needed for thread
1
ORA-00289: suggestion : /oradump/oradata/catlog/arch/catlog-1814470384_1_5.arc
ORA-00280: change 229907626 for thread 1 is in sequence #5
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-00344: unable to re-create online log '/db/app/oracle/product/9.2.0.5/dbs/
/db/redolog.001/catalog1/catalog1_log1.rdo'
ORA-27040: skgfrcre: create error, unable to create file
HP-UX Error: 2: No such file or directory
SQL>
Thanks. Your help and guidance will be appreciated very much.
Hello,
The directory for the creation of the logfiles does not exist, from the error:
RMAN-06136: ORACLE error from auxiliary database: ORA-00344: unable to re-create online log '/db/app/oracle/product/9.2.0.5/dbs/ /db
/redolog.001/catalog1/catalog1_log1.rdo'
ORA-27040: skgfrcre: create error, unable to create file
HP-UX Error: 2: No such file or directory
I.e., there is no such directory as '/db/app/oracle/product/9.2.0.5/dbs/ /db/redolog.001/catalog1/catalog1_log1.rdo'.
Your db_recovery_file_dest parameter may be set here, so change this:
'/db/redolog.001/catalog1/catalog1_log1.rdo' size 5m,
' /db/redolog.003/catalog1/catalog1_log2.rdo' size 5m,
' /db/redolog.002/catalog1/catalog1_log3.rdo' size 5m;
to this:
'catalog1_log1.rdo' size 5m,
'catalog1_log2.rdo' size 5m,
'catalog1_log3.rdo' size 5m;
and try again. -
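One detail worth noticing in the ORA-00344 text above: the failing path is '$ORACLE_HOME/dbs/' followed by ' /db/redolog.001/...'. The quoted logfile names in the DUPLICATE command begin with a space, so the instance treats them as relative paths and prepends $ORACLE_HOME/dbs. A pre-flight check like the following (an illustrative Python sketch, not part of any Oracle tooling) catches that class of typo before the duplicate runs.

```python
# The three logfile names quoted in the DUPLICATE command, leading space included:
logfiles = [
    " /db/redolog.001/catalog1/catalog1_log1.rdo",
    " /db/redolog.003/catalog1/catalog1_log2.rdo",
    " /db/redolog.002/catalog1/catalog1_log3.rdo",
]

def clean_paths(paths):
    """Strip stray whitespace and warn about any path that contained it."""
    fixed = []
    for p in paths:
        if p != p.strip():
            print(f"warning: whitespace inside quoted path {p!r}")
        fixed.append(p.strip())
    return fixed

cleaned = clean_paths(logfiles)
print(all(p.startswith("/") for p in cleaned))  # True once the spaces are gone
```

With the spaces removed, the names resolve as absolute paths and the instance no longer tries to create them under $ORACLE_HOME/dbs.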
Xserve G5 - Service Diagnostic Memory Access Errors
Hello,
I am trying to troubleshoot a 2 GHz G5 Xserve. The logic board was replaced on this server because the ethernet ports failed. After replacing the logic board, I attempted to boot the Service Diagnostic CD to run the thermal calibration (to get the blowers to behave normally; they blow constantly). Each and every time I boot from this CD, I get a memory access error.
http://img.photobucket.com/albums/v30/bbplayer5/IT/ASD.jpg
http://img.photobucket.com/albums/v30/bbplayer5/IT/ASD2.jpg
I have replaced:
- Logic Board
- Memory
- Processor
Any help would be appreciated. Thank you!
The GSX images section for Xserves does not have a recent copy of the ASD software. I just talked to someone at Apple, and they claimed there was no ASD for it and that I had to use remote server admin?! But that makes no sense... And I cannot find anything in remote admin that indicates thermal calibration.
If you change the processor on a G5 Xserve, you have to run thermal calibration, correct? The fans will not stop regardless of what I do.
Quite frustrating. -
Database Control always gets "The database status is currently unavailable"
I have two 10.1.0 databases installed on a Windows XP machine. Instance orcl always gets this message; instance orcltwo does not. If I shut down orcl and then start the web Database Control page, I can select the startup button and it starts up the database, but then it goes back to the page that says the database is not available. The console for that database is running with no errors that I can find. The DBSNMP account is not locked. The SYSMAN account is not locked. There are no errors in any log file that I can find. What am I doing wrong here? The orcltwo instance has no problems at all.
Just to share this experience with everyone. This happened with Grid Control (not sure if it applies to Database Control).
All the targets in EM (except the Agent) showed Down or Unavailable for two weeks. This is how I checked it.
Note that if the Agent is not able to upload Metrics to the repository due to one reason or the other, you will not be able to see the current status.
Assuming your ORACLE_HOME is different from your AGENT_HOME, check the emagent trace file in the Agent home.
E.g., with the AGENT home as D:\oracle\agent\
check D:\oracle\agent\sysman\log\emagent.trc to see the latest error message.
Then check the upload directory to see if there are xml files waiting to be uploaded
e.g D:\oracle\agent\sysman\emd\upload
This directory should be clear of .xml files if your metrics loading is working.
From the Agents Monitoring screen in Enterprise Manager (Management System -> Agent), under the Upload section where there is an Upload Metric Data button, check "Last Successful Upload" and "Data Pending Upload" and any error messages. If the metric upload is not working, Data Pending Upload will be more than 0.00MB and the error message link will show you the xml file it is unable to load, which in my example you will find in D:\oracle\agent\sysman\emd\upload
In my example, upload file B0002332.xml is the one I found in the error URL, and when I viewed it in the directory, its structure was different from the others. The inability to load it caused all the other upload files to queue up. There was a line in it such as
<METRIC_GUID><![CDATA[=3F6739EE03C21CE7CA5E32FD9185]]></METRIC_GUID>
Looking at the structure of other files, I took a guess (not the best , this is a test EM) and changed it to
<METRIC_GUID>80C23F6739EE03C21CE7CA5E32FD9185</METRIC_GUID>
Then all the two weeks 32MB of xml files loaded automatically within seconds. Logoff EM and Login, everything showed the correct status.
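For anyone hitting the same symptom, a quick way to spot such a malformed file among the queued uploads is to grep the upload directory for the stray CDATA fragment. This is only a sketch: the directory is created here just to demonstrate, and the file names and GUIDs mirror the example above rather than anything on your system.

```shell
# Sketch: find queued upload .xml files whose METRIC_GUID line was
# written as a CDATA fragment instead of a bare GUID.
# The sample directory and files below are made up for illustration.
UPLOAD_DIR=$(mktemp -d)
printf '<METRIC_GUID><![CDATA[=3F6739EE03C21CE7CA5E32FD9185]]></METRIC_GUID>\n' \
  > "$UPLOAD_DIR/B0002332.xml"
printf '<METRIC_GUID>80C23F6739EE03C21CE7CA5E32FD9185</METRIC_GUID>\n' \
  > "$UPLOAD_DIR/B0002333.xml"
# grep -l prints only the names of files containing the bad pattern
BAD=$(grep -l 'CDATA\[=' "$UPLOAD_DIR"/*.xml)
echo "$BAD"
```

On a real agent home you would point grep at the actual upload directory instead of the temporary one.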
Hope this helps. -
The database status is currently unavailable.
Hi,
I have a problem in my production database -
Server - Windows 2003
DB - Oracle 10g
When I try to open Enterprise Manager I get the below error message. This had been working fine until last month, and I haven't made any changes to this DB. But I noticed that disk space was very low, so I deleted files from the Recycle Bin, temporary internet files, cookies and some unwanted documents I had copied onto the server. Now disk space is OK; I have 10 GB free. I am also getting a DB warning notification because a tablespace is 92% full, so I want to increase it. But I don't know why I am getting this, because I had already set the tablespace to grow automatically.
Is there any other way to increase a tablespace other than through Enterprise Manager?
Message:
"The database status is currently unavailable. It is possible that the database is in mount or nomount state...."
I tried to troubleshoot as per some forums but it doesn't work. If I use the command prompt or SQL*Plus to connect to the DB it works, but only Enterprise Manager has this issue. Also my Documentum application connected to this DB works fine.
The listener is fine. Also I added the server login account to 'Log on as a batch job' in the local policies.
I do not have a back up to restore also.
Regards,
Ranjith John

Øyvind Isene wrote:
Regarding the tablespace, you may consider extending it yourself rather than waiting for the autoextend to kick in. This way you avoid future problems if, for some strange reason, the datafile cannot be extended. Not ignoring this error, and being reminded every time the tablespace reaches some limit, gives you a feeling for how fast your data grows, which is useful (if the alert comes too often, increase it in bigger chunks). Autoextend is a feature invented for the lazy DBAs out there (imho); a responsible one should not rely on it for production systems with any serious load.
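For anyone wanting to extend a tablespace manually along those lines, a minimal SQL*Plus sketch looks like this (the datafile paths, tablespace name and sizes below are hypothetical, not taken from this system):

```
-- grow an existing datafile
ALTER DATABASE DATAFILE 'D:\ORACLE\ORADATA\ORCL\USERS01.DBF' RESIZE 2G;

-- or add a second datafile to the tablespace
ALTER TABLESPACE users ADD DATAFILE 'D:\ORACLE\ORADATA\ORCL\USERS02.DBF' SIZE 1G;
```

Either statement works from any SQL*Plus session with ALTER DATABASE/ALTER TABLESPACE privileges, so it does not depend on Enterprise Manager being available.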
As for the connection problem from EM, I suspect the agent has a connection problem. Have you verified the username and password it is using? Can you connect to the db with the same combination?

Yes, I can connect to the DB with the same username and password using the command prompt or SQL*Plus. Also my Documentum application connects fine to this Oracle DB.
This is the full message.
"The database status is currently unavailable. It is possible that the database is in mount or nomount state. Click 'Startup' to obtain the current status and open the database. If the database cannot be opened, click 'Perform Recovery' to perform an appropriate recovery operation."
Previously I used to get the login page, but now instead of the login page I get this warning. There are also 'Startup' and 'Perform Recovery' buttons. Even if I use the 'Startup' button it does not resolve the issue. It gives me the below message -
SQLException
ORA-28000: the account is locked
Startup/Shutdown:Confirmation
Current Status open
Operation shutdown immediate
Are you sure you want to perform this operation?
I used this to unlock my oracle user yesterday
SQL> alter user username identified by password account unlock;
Regards,
Ranjith john -
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140GB of memory available and a 64GB shmmax, we are unable to increase the PermSize to anything more than 14GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have 144GB of RAM and a 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj

Those parameters look ok for a 100GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user, the user who installed Oracle TimesTen needs to be allocated enough lockable memory resource to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>.
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware: sometimes ulimit is set in the instance administrator's "~/.bashrc" or "~/.bash_profile" file, which can override what's set in /etc/security/limits.conf
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
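The arithmetic above can be sketched as a small shell calculation, using the example sizes from the sys.odbc.ini values quoted earlier:

```shell
# Sketch: derive vm.nr_hugepages from the example database sizes above.
PERM_MB=250000      # PermSize (MB)
TEMP_MB=100000      # TempSize (MB)
LOGBUF_MB=1024      # LogBufMB
OVERHEAD_MB=64      # fixed overhead from the formula above
TOTAL_MB=$((PERM_MB + TEMP_MB + LOGBUF_MB + OVERHEAD_MB))
PAGE_MB=2           # Hugepages pagesize is 2048 kB = 2 MB
NR_HUGEPAGES=$((TOTAL_MB / PAGE_MB))
echo "vm.nr_hugepages=$NR_HUGEPAGES"
```

The printed line is exactly what goes into /etc/sysctl.conf in the step below.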
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
$ uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database, in KB. For this example that is 351088*1024 = 359514112 KB.
oracle hard memlock 359514112
oracle soft memlock 359514112
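The memlock figure follows the same arithmetic; as a shell sketch using the example total from above:

```shell
# Sketch: memlock (KB) is the total database size in MB times 1024,
# using the example total computed earlier.
TOTAL_MB=351088
MEMLOCK_KB=$((TOTAL_MB * 1024))
printf 'oracle hard memlock %s\n' "$MEMLOCK_KB"
printf 'oracle soft memlock %s\n' "$MEMLOCK_KB"
```

The two printed lines are the entries to add to /etc/security/limits.conf.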
THIS IS VERY IMPORTANT: in order for the above changes to take effect you need to either shut down the BI software environment including TimesTen and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check Hugepages has been setup correctly, look for Hugepages_Total
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544 -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory and then be able to report on through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. Sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30gb (the limit on our UAT environment is 32gb). The ultimate goal is to see if we can store about 50-60gb in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our Data Store store 8gb of data, but want to increase this. I am assuming that the following error message is due to us not changing the /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32gb of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all the changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, but help with the above request would help speed this process up.
The current /etc/system settings are below, and I have put in the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
Machine
## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590; the hexadecimal value translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)

Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other possibly than a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
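As a sketch of the arithmetic (the 32 GB target is the value suggested above; whether 32 GB is right for your box is for you to decide), the new /etc/system entry could be computed like this:

```shell
# Sketch: compute a shmmax value large enough for a 32 GB
# shared memory segment, formatted as an /etc/system line.
GB=32
SHMMAX=$((GB * 1024 * 1024 * 1024))
printf 'set shmsys:shminfo_shmmax = 0x%X\n' "$SHMMAX"
```

This prints `set shmsys:shminfo_shmmax = 0x800000000`, replacing the current 0x20000000 line.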
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
Using Database Change Notification instead of After Insert Trigger
Hello guys! I have an after-insert trigger that calls a procedure, which in turn does an update or insert on another table. Due to mutating table errors I declared the trigger and procedure as autonomous transactions. The problem is that old values from my main table are inserted into the subtable, since the after insert/update trigger fires before the commit.
My question is how I can solve that, and how I could use the change notification package to call my procedure. I know that this notification is only started after a DML/DDL action has been committed on a table.
If you could show me how to carry out the following code with a Database Change Notification I'd be delighted. Furthermore, I need to know whether it suffices to set up this notification only once, or for each client separately.
Many thanks for your help and expertise!
Regards,
Sebastian
declare
cnumber number (6);
begin
select count(*) into cnumber from (
select case when (select date_datum
from
(select f.date_datum,
row_number() over (order by f.objectid desc) rn
from borki.fangzahlen f
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010') where rn=1) < (select date_datum
from
(select f.date_datum,
row_number() over (order by f.objectid desc) rn
from borki.fangzahlen f
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010') where rn=2) then 1 end as action from borki.fangzahlen
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010') where action = 1;
if cnumber != 0 then
delete from borki.tbl_test where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010';
commit;
pr_fangzahlen_tw_sync_sk(:new.lng_falle, :new.int_fallennummer, :new.lng_schaedling);
end if;
end;

It looks like you have an error in line 37 of your code. Once you fix that the problem should be resolved.
-
NMI: Parity Check / Memory Parity Errors are occurring on five out of five Lenovo T500 laptops (2082-58M) tested.
I can get the error to re-occur consistently by installing IBM Director Client version 5.20 or 5.23, taking remote control, and then initiating a PC restart from within the remote session. This reproduces the errors without fail.
I've also received this same error without Director, but am unable to cause it to re-occur consistently.
My Findings:
Changing the BIOS settings from the default settings of:
BIOS Config > Display
Default Primary Video Device:[Internal]
Boot Display Device:[ThinPad LCD]
Graphics Device:[Discrete Graphics]
OS Detection for Switchable Graphics:[Enabled]
To the following settings resolves the issue, though this causes the installed and previously working ATI driver not to detect the video device.
BIOS Config > Display
Default Primary Video Device:[Internal]
Boot Display Device:[ThinPad LCD]
Graphics Device:[Integrated Graphics]
OS Detection for Switchable Graphics:[Disabled]
What I have tried:
o Updated the BIOS and drivers to the latest available
o Re-installed the display driver; downgraded the display driver
o Removed the display driver and left the BIOS settings enabled (NMI errors re-occur)
To get to the above findings I tried removing / re-installing pretty much every driver and disabling every option within the BIOS. I am only listing what I believe is now relevant to the identified problem. I can provide further information upon request.
Anyone able to help?

Hello,
the switchable gfx doesn't work in XP. In XP you have to choose gfx options in the BIOS before booting the OS.
Please try:
Default Primary Video Device:[Internal]
Boot Display Device:[ThinPad LCD]
Graphics Device:[discrete Graphics]
OS Detection for Switchable Graphics:[Disabled]
Please insert your type, model (not S/N) number and used OS in your posts.
I'm a volunteer here using New X1 Carbon, ThinkPad Yoga, Yoga 11s, Yoga 13, T430s, T510, X220t, IdeaCentre B540.
TIP: If your computer runs satisfactorily now, it may not be necessary to update the system.
Hi,
We recently introduced an Exchange 2010 server into our Exchange 2003 environment, but had to stop migrating users over to it after it became apparent that users on Outlook 2010 could not switch on their Out Of Office, getting a "Your automatic reply settings cannot be displayed because the server is currently unavailable" error. However, users on the new 2010 server can set OOF if they use OWA or Outlook 2003!
Running the Test E-mail AutoConfiguration shows no errors, with the Log tab showing Autodiscover succeeded and the Results tab showing all the correct values, including an OOF URL. Entering https://<servername>/Autodiscover/Autodiscover.xml into IE gives an XML file with a 600 error code.
I have looked at loads of 'solutions' on the Internet but cannot get this working, anyone have any suggestions?
Thank you,
Craig

Please verify IIS settings:
======================
On the Exchange CAS server, open the IIS console.
Go to the Default Web Site, double click "SSL Settings", and make sure Client Certificates is set to "Ignore".
Click Autodiscover, double click "SSL Settings", and make sure Client Certificates is set to "Ignore".
Click EWS, double click "SSL Settings", and make sure Client Certificates is set to "Ignore".
Click OAB, double click "SSL Settings", and make sure Client Certificates is set to "Ignore".
Please run the IISRESET command to load the changes.
Thanks.
Tony Chen
TechNet Community Support -
How can I prevent a memory flow error from occuring when using 3D contour plots?
After displaying a single contour plot a number of times, LabVIEW crashes and reports a memory overflow error. I only load a single 2D array, never storing previous ones. I always index the data for contour plot 0, and don't explicitly store multiple 2D arrays in any buffer, such as by appending data to a shift register. There appears to be some sort of memory leak in LabVIEW when using this feature. Is there a programmatic way to flush the stored data in a contour plot prior to displaying the next 2D array of data?
Hello MicTomReiSr,
Is this in the LabVIEW development environment? Are you making changes to the code and then running the VI? Does the plot contain default data? Is the indicator cleared when the program stops? The Plot Clean Data method may be what you're looking for.
Remember that the development environment maintains an undo history of the changes you've made, and copies both the block diagram and FP contents each time. If you have a lot of data displayed on the front panel you end up copying it multiple times. This isn't a problem unless you're actively editing the VI in question.
If you do need to edit and run the VI at the same time (perfectly reasonable, although you might want to consider a 64-bit installation or more RAM if you commonly work with resource-intensive data types like 3D plots) try reducing the Undo history length (Tools>>Options), although be aware that you won't be able to back up as many steps.
Regards,
Tom L.