Unable to open the database after it has been mounted. XE 10G

I am on Windows XP running Oracle Database 10g Express Edition (XE).
After running a defrag and cleanup process, I have not been able to access any of the objects in the database.
A quick check:
set lines 110
col strtd hea 'STARTED'
col instance_name for a8 hea 'INSTANCE'
col host_name for a15 hea 'HOSTNAME'
col version for a10
select instance_name, version, host_name, status
, database_status, to_char(startup_time,'DD-MON-YYYY HH:MI:SS') strtd
from v$instance;
returns this:

INSTANCE VERSION    HOSTNAME        STATUS       DATABASE_STATUS   STARTED
xe       10.2.0.1.0 DT8775C         MOUNTED      ACTIVE            03-DEC-2010 11:38:00

If I use this command, it throws the following error.
SQL> ALTER DATABASE OPEN;
ALTER DATABASE OPEN
*
ERROR at line 1:
ORA-16014: log 2 sequence# 679 not archived, no available destinations
ORA-00312: online log 2 thread 1:
'D:\ORACLEEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_4JD5RZC0_.LOG'

How can I fix this situation?
There are zero files in the
"D:\ORACLEEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\" folder.

Hi,
Check this out...
http://www.beyondoracle.com/2008/10/11/archivelog-ora-16014-log-sequence-not-archived/
Regards,
Levi Pereira
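
ORA-16014 means the archiver has no valid destination to write log 2 to, and ORA-00312 names an online log file that, per the post, no longer exists on disk, so the instance cannot leave the MOUNTED state. The lines below are only a sketch of one common way out, not a verified fix for this exact system: they assume you are connected AS SYSDBA with the database still mounted, that XE can stay in NOARCHIVELOG mode, and that losing the unarchived redo in group 2 is acceptable (no standby or point-in-time recovery depends on it).

ALTER DATABASE NOARCHIVELOG;                      -- stop requiring an archive destination
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;  -- reinitializes the group and recreates the missing online log file
ALTER DATABASE OPEN;

If you prefer to stay in ARCHIVELOG mode instead, give the archiver a working destination first (for example by raising db_recovery_file_dest_size or pointing log_archive_dest_1 at a directory with free space), then clear the unarchived group and open as above.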

Similar Messages

  • Unable to Connect to Database (after server has been re-started)

    Hi,

    I use Essbase Admin Services 7.1.3, and the server was not responding, so our IT guys restarted it. Since then I am unable to connect to one of the applications. I have tried to start/stop the application, but it just says "Application did not start".

    There are 6 databases in this application altogether (3 data and 2 Currency). The application has been there for about 3 years (it holds historic sales data) and there have never been any problems with it.

    When I try to start the application, the message panel at the bottom says "Error: 1013015 Unable to Connect to Database (2003)". It's as if there is a problem with this 2003 database, but the 2002 and 2004 databases are OK.

    Has anyone ever had this problem and knows how to rectify it?

    Thanks in advance.

    Sarah

    It may be that the server restart corrupted the database. Go into the database properties and uncheck the "start database with application" box, then try to restart the application.

    You can try database validation, or simply reload the database from backup. But you can get the application and the rest of its databases up by decoupling the database restart from the application start.

    There will be a .xcp log file in your application directory (and possibly your database directory). These might help Hyperion Support in identifying the cause of the problem, but the quickest solution would be to reload the database from backup. The server and application logs can also help in identifying the problem.

    One of the keys is to determine why the server is not responding before restarting it. It may be that a single application is the source of the problem. Typical causes are unscheduled free-space restructures, application loads of large databases, and the occasional rogue process. Rather than rebooting the entire box, or even restarting the Essbase server, the safest approach is to kill only the job that is soaking up the CPU cycles and making the server unresponsive. This requires identifying the offending process (esssvr.exe) and killing it. Often that restores the rest of the system to operation, giving you time to focus on the offending application.

    You learn these things when you are pushing the server to its limits (or beyond). If it happens often, you need to do a serious performance audit of the server and determine whether you need more power and/or memory.

    And in any case, you are less likely to encounter this sort of problem in the future if you limit applications to a single database. This allows better utilization of system resources. It is also a good idea to shut down applications that you are not going to be using for a while; every loaded app takes system resources whether or not it is doing anything. So review your list of apps that start on startup, and consider unloading apps after completing batch processing.

    Sorry about the long lecture, but I speak from somewhat painful experience. I hope that this will help you.

  • SCOM reports "A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation"

    This was discussed here, with no resolution
    http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
    I have the same issue.  This is a single-purpose physical mailbox server with 320 users and 72GB of RAM.  That should be plenty.  I've checked and there are no manual settings for the database cache.  There are no other problems with
    the server, nothing reported in the logs, except for the aforementioned error (see below).
    The server is sluggish.  A reboot will clear up the problem temporarily.  The only processes using any significant amount of memory are store.exe (using 53GB), regsvc (using 5) and W3 and Monitoringhost.exe using 1 GB each.  Does anyone have
    any ideas on this?
    Warning ESE Event ID 906. 
    Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file.  This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
    has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

    Brian,
    We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
    We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
    for the sole purpose of serving as our public folder servers.
    So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
    cache flush to paging file, we got the following alert:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:14 AM
    Event ID:      17012
    Task Category: Storage
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
       at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
    Followed by:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:15 AM
    Event ID:      17106
    Task Category: Storage
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:13:50 AM
    Event ID:      17102
    Task Category: Storage
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action.  This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
    is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
    So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
    Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
    Thanks!

  • ESE - Event Log Warning: 906 - A significant portion of the database buffer cache has been written out to the system paging file...

    Hello -
    We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
    Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
    We run nightly backups on both nodes at the Primary Site.
    Node 1 backup covers all mailbox databases [active & passive].
    Node 2 backup covers the Public Folders database.
    The backups for each database are timed so they do not overlap.
    During each backup we get several of these event log warnings:
     Log Name:      Application
     Source:        ESE
     Date:          23/04/2014 00:47:22
     Event ID:      906
     Task Category: Performance
     Level:         Warning
     Keywords:      Classic
     User:          N/A
     Computer:      EX1.xxx.com
     Description:
     Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file.  This may result  in severe performance degradation.
     See help link for complete details of possible causes.
     Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
     Current Total Percent Resident: 26% (110122 of 421303 buffers)
    We've rescheduled the backups, and the warning message occurrences just move with the backup schedules.
    We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
    I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and at the output from their diagnostics tool. We have enough RAM and no major issues detected.
    They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
    Backup software is Asigra V12.2 with latest hotfixes.
    We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
    Any suggestions please?
    Thanks in advance

    Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache.
    Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts.
    This attribute should do it:
    msExchESEParamCacheSizeMax
    http://technet.microsoft.com/en-us/library/ee832793.aspx
    Give me a shout if this is a bad idea
    Thanks

  • A significant portion of the database buffer cache has been written out to the system paging file.

    Hi,
    We seem to get this error through SCOM every couple of weeks.  It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory.  The server has been patched to the latest rollup and service pack.  The mailbox servers
    have been provisioned sufficiently with more than enough memory.  Currently they just slow down until the databases activate on another mailbox server.
    A significant portion of the database buffer cache has been written out to the system paging file.
    Any ideas?

    I've seen this with properly sized servers with very little Exchange load running. It could be a  number of different things.  Here are some items to check:
    Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
    Confirm that the Windows OS is running the recommended hotfixes.  Here is an older post that might still apply to you
    http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
    http://support.microsoft.com/kb/2699780/en-us
    Set up perfmon to capture data from the server. Look for disk performance, excessive paging, CPU/processor spikes, and more.  Use the PAL tool to collect and analyze the perf data -
    http://pal.codeplex.com/
    Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
    Be sure that the disks are properly aligned -
    http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
    Check that the network is properly configured for Exchange Server.  You might be surprised how the network config can cause perf and SCOM alerts.
    Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
    http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
    Be sure that hyperthreading is NOT enabled -
    http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
    Check that there are no hardware issues on the server (RAM, CPU, etc).  You might need to run some vendor specific utilities/tools to validate.
    Proper paging file configuration should be considered for Exchange servers.  You can use perfmon to see just how much paging is occurring.
    These will usually lead you in the right direction. Good Luck!

  • Unable to open the database

    I am not able to open the database. Following are the facts and the things I have done so far:
    1)     The database is in NOARCHIVELOG mode.
    2)     The database was backed up by copying the datafiles, controlfile, logfiles, etc.
    3)     The datafiles, controlfile and logfiles are out of sync because, at the time of the disk backup at night, a cron job also updated a datafile. So one particular datafile is out of sync.
    4)     When we start the database, the following error comes up:
    a.     ORA-01207: file is more recent than control file - old control file. Question 1: Is it because I did a hot backup and whenever there is a write, the controlfile ...
    5)     When we try to use 'alter database recover' we get the following:
    a.     ORA-01157: cannot identify/lock data file 21 - see DBWR trace file
    b.     ORA-01110: data file 21: '…../…/../o2_mf_sys_undo_zvt7qxkf_.dbf'
    6)     http://forums.oracle.com/forums/thread.jspa;jsessionid=8d92200830de9b3de878352a40c888bc8a2b521b29d5.e34QbhuKaxmMai0MaNeMb3eKaN90?messageID=1374408&#1374408
    7)     So far I have tried:
    a.     RECOVER DATABASE UNTIL CANCEL & ALTER DATABASE OPEN RESETLOGS - doesn't work
    b.     ALTER DATABASE ARCHIVELOG - doesn't work
    Please let me know if someone has a solution.

    Tanwar
    Do the following and you should be fine with your database.
    The error indicates that oracle process cannot open more locks than allowed.
    I assume you are on aix/unix environment
    As the oracle user, run this command:
    ulimit -a
    and verify the settings. You may want to set all the settings to unlimited, as below.
    Some commands may not work depending on your OS; don't worry.
    ulimit -t unlimited
    ulimit -f unlimited
    ulimit -d unlimited
    ulimit -s unlimited
    ulimit -m unlimited
    ulimit -c unlimited
    ulimit -n unlimited
    ulimit -v unlimited
    If it is still not resolved, ask your system administrator to check the security settings for the oracle user under /etc/security/limits.
    Change the values for the oracle user to -1, which means unlimited.
    Log out of the oracle user session and log in again, then check ulimit -a. Start your instance and you should be good now.
    I have seen this issue many times on many systems.
    Good luck.
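
    Coming back to the ORA-01207/ORA-01157 errors themselves (a control file older than one of the datafiles, in NOARCHIVELOG mode): the usual route is to restore a consistent set of files and open with RESETLOGS. The lines below are only a sketch of that sequence, assuming every datafile, control file and online log has been restored from the same cold backup; they are generic, not specific to the poster's system.

    STARTUP MOUNT
    -- only needed if the restored control file is a backup copy:
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- answer CANCEL at the prompt once there is no more redo to apply, then:
    ALTER DATABASE OPEN RESETLOGS;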

  • How to settle the expenses after AUC has been capitalised.

    Hi All,
    My understanding is that after the AUC has been settled, we can edit the settlement rule in the WBS element and settle directly to the capitalised assets.
    I cannot see the settlement rule, only the investment profile, since the project is a capital project that comes from an investment measure and was created automatically.
    Can you tell me how to create a new settlement rule in this case? It is urgent.
    Regards
    VK

    Hi Vijay,
    This requires a change in the settlement rule: the AUC no longer exists, and since it must have been the settlement receiver earlier, the receiver should be changed to the main asset after capitalisation.
    Please go to transaction code CJ20N, select the WBS, go to the settlement rule via Edit > Costs, and set the receiver to the fixed asset itself instead of the AUC, since the AUC has been transferred to the main assets.
    You can then settle using CJ8G or CJ88, but CJ88 is the better option.
    I think I have answered the same query.
    Regards
    Bharat

  • My computer does not eject the disc after it has been burned.

    I understand that my computer is supposed to automatically eject a disc after it has been burned. Mine doesn't. Instead, I hear a "Plonk," which I assume is announcing that the burning process has been completed.
    First of all, is it true that the disc is supposed to be ejected after the burn has been completed? Secondly, how can I restore that function?

    Diane Wordsmith wrote:
    To clarify...are you importing songs from a CD into iTunes, or are you burning songs from iTunes onto a blank CD? If it's the former, iTunes won't eject it...you'll just hear a chime or noise when it's done. If it's the latter it should eject.
    For the second poster, it sounds like you are importing, in which case you will just hear the sound when importing is finished, and you eject the CD yourself.
    Just to make sure that I understand you correctly, here's what I did: I imported songs from a website. They were in RealPlayer format. I then converted the song files into MP3 format and put them into my iTunes music library. Finally, I put the songs in a new Smart Playlist and burned them onto a blank disc.
    Given this situation, should I expect the disc to automatically eject when the burning is completed? If so, what do I have to do to get it to work properly?

  • After upgrading to APEX 4.1 the database management GUI has been removed

    I've successfully upgraded to APEX 4.1 on my 10g XE database.
    The only problem is that now the database management GUI (the Usage Monitor section) has been removed from APEX, and I'd like to keep handling DBA activities as before. How is that possible? Can I now handle DBA activities only by using SQL*Plus and SQL Developer?
    Thanks!

    Hello Mark,
    unfortunately, that management GUI is an XE-specific application built into the APEX version that shipped with XE. So basically, what you experience is correct: after an APEX upgrade, you can't use it anymore, because the "generic" APEX versions are not branded for XE.
    There is a theoretical possibility of using the command line exporter to export that management application from the old APEX instance before upgrading (or using a different XE instance that still has it) and import it into the upgraded APEX instance. But I would not recommend doing that, as APEX itself has changed a lot from the 2.2 branch that XE 10.2 shipped with to the 4.1 you use now. This can cause severe issues when using that application in 4.1, especially concerning the APEX management part of that application.
    "Can I now handle DBA activities only by using SQL*Plus and SQL Developer?" I'd recommend that. And I really think SQL Developer has become very handy for that purpose compared with the version that was out when XE 10.2 was released...
    -Udo
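
    For the day-to-day checks the old Usage Monitor used to cover, plain data dictionary queries work fine in SQL*Plus or SQL Developer. A small illustrative sketch, run as a DBA-privileged user; the columns shown are just examples, not a recreation of what the XE GUI displayed:

    -- space used per tablespace
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS used_mb
    FROM   dba_segments
    GROUP  BY tablespace_name;

    -- current sessions per user
    SELECT username, status, COUNT(*) AS sessions
    FROM   v$session
    WHERE  username IS NOT NULL
    GROUP  BY username, status;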

  • Error #3125: Unable to open the database file

    Hello, my name is Alex.
    Since the CC 2014 update I am having lots of problems with Muse. I get Error #3125 quite often, and some pages won't load in Design mode. I tried to open the file from a local drive, but it still doesn't work.
    Could you please help out here? thanks a lot in advance.
    Alex

    Hi,
    If this fix doesn't work, try this...
    I've been having this problem too. I was using a server-side save location for the site and resources. I moved to my HDD and this seems to have resolved the error for me.
    Let me know if that resolves it.

  • Microsoft Office 2008 Powerpoint "cannot open the file because it has been moved or deleted"

    I am trying to open some slides my professor posted on Desire2Learn. It is a .pptx file. Every time I try to open it, I get an error message that says, "PowerPoint cannot access +I (drive) because the file has been moved or deleted".
    I have uninstalled and reinstalled Office. And yes, I know just moving it to the recycling bin does not fully uninstall it. I followed the directions given in a different thread. Any help would be greatly appreciated. Thanks.

    You may get better answers asking this question in Microsoft's support forums.
    Best of luck.

  • Unable to open production database after normal shutdown

    Hi,
    We are using Oracle Database 10.2.0.3 on AIX 5.3 with ASM.
    I shut down my database normally, and after that, when I tried to open it, I got lots of errors, and as a result the ASM diskgroups got dismounted.
    Our ASM instance starts normally and mounts the diskgroups, but while opening the database we got the following errors:
    SQL> startup
    ASM instance started
    Total System Global Area 130023424 bytes
    Fixed Size 2071104 bytes
    Variable Size 102786496 bytes
    ASM Cache 25165824 bytes
    ASM diskgroups mounted
    SQL>
    =============================
    DATABASE STARTUP
    ===============================
    SQL> startup
    ORACLE instance started.
    Total System Global Area 2.0451E+10 bytes
    Fixed Size 2109304 bytes
    Variable Size 1929379976 bytes
    Database Buffers 1.8488E+10 bytes
    Redo Buffers 31444992 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
    ORA-01110: data file 1: '+ORADATA/pakedwp/datafile/system.260.644366937'
    ORA-15078: ASM diskgroup was forcibly dismounted
    Following are the alert log entries:
    ==================================================
    Errors in file /u01/app/oracle/admin/pakedwp/bdump/pakedwp_dbw0_160228.trc:
    ORA-01157: cannot identify/lock data file 862 - see DBWR trace file
    ORA-01110: data file 862: '+ORADATA/pakedwp/datafile/sor_pre_voice_su_out_dec08w4.2014.672099749'
    ORA-17503: ksfdopn:2 Failed to open file
    +ORADATA/pakedwp/datafile/sor_pre_voice_su_out_dec08w4.2014.672099749
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    Thu Jan 29 20:00:10 2009
    Errors in file /u01/app/oracle/admin/pakedwp/bdump/pakedwp_dbw0_160228.trc:
    ORA-01157: cannot identify/lock data file 863 - see DBWR trace file
    ORA-01110: data file 863:
    '+ORADATA/pakedwp/datafile/sor_msc_cdr_in_dec08w1.2013.672099785'
    ORA-17503: ksfdopn:2 Failed to open file
    +ORADATA/pakedwp/datafile/sor_msc_cdr_in_dec08w1.2013.672099785
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    Thu Jan 29 20:00:10 2009
    Errors in file /u01/app/oracle/admin/pakedwp/bdump/pakedwp_dbw0_160228.trc:
    ORA-01157: cannot identify/lock data file 864 - see DBWR trace file
    ORA-01110: data file 864:
    '+ORADATA/pakedwp/datafile/sor_msc_cdr_out_sep08w1.1550.663694183'
    ORA-17503: ksfdopn:2 Failed to open file
    +ORADATA/pakedwp/datafile/sor_msc_cdr_out_sep08w1.1550.663694183
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    Thu Jan 29 20:00:10 2009
    Errors in file /u01/app/oracle/admin/pakedwp/bdump/pakedwp_dbw0_160228.trc:
    ORA-01157: cannot identify/lock data file 865 - see DBWR trace file
    ORA-01110: data file 865:
    '+ORADATA/pakedwp/datafile/sor_msc_cdr_in_sep08w2.1549.663694199'
    ORA-17503: ksfdopn:2 Failed to open file
    +ORADATA/pakedwp/datafile/sor_msc_cdr_in_sep08w2.1549.663694199
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    Thu Jan 29 20:00:10 2009
    Errors in file /u01/app/oracle/admin/pakedwp/bdump/pakedwp_dbw0_160228.trc:
    ORA-01157: cannot identify/lock data file 866 - see DBWR trace file
    ORA-01110: data file 866:
    '+ORADATA/pakedwp/datafile/sor_msc_cdr_out_sep08w2.1157.663694209'
    ORA-17503: ksfdopn:2 Failed to open file
    +ORADATA/pakedwp/datafile/sor_msc_cdr_out_sep08w2.1157.663694209
    ORA-15001: diskgroup "ORADATA" does not exist or is not mounted
    ORA-15001
    ==============================================
    No recent changes have been made to the storage or the system.
    After this failure, when I try to mount the diskgroup, it mounts successfully again. But when we try to open the database, it throws the same errors.
    Does anyone have any solutions or suggestions?

    Following are the results of the query:
    NAME PATH HEADER_STATUS DISK_NUMBER
    ORADATA_0064 /dev/rhdisk13 MEMBER 64
    ORADATA_0030 /dev/rhdisk40 MEMBER 30
    ORADATA_0031 /dev/rhdisk41 MEMBER 31
    ORADATA_0062 /dev/rhdisk10 MEMBER 62
    ORADATA_0005 /dev/rhdisk11 MEMBER 5
    FLASH_RECO_AREA_0002 /dev/rhdisk12 MEMBER 2
    ORADATA_0006 /dev/rhdisk33 MEMBER 6
    ORADATA_0024 /dev/rhdisk34 MEMBER 24
    ORADATA_0028 /dev/rhdisk38 MEMBER 28
    ORADATA_0029 /dev/rhdisk39 MEMBER 29
    ORADATA_0019 /dev/rhdisk3 MEMBER 19
    ORADATA_0023 /dev/rhdisk4 MEMBER 23
    ORADATA_0000 /dev/rhdisk27 MEMBER 0
    ORADATA_0002 /dev/rhdisk28 MEMBER 2
    ORADATA_0018 /dev/rhdisk29 MEMBER 18
    ORADATA_0017 /dev/rhdisk26 MEMBER 17
    ORADATA_0016 /dev/rhdisk25 MEMBER 16
    ORADATA_0015 /dev/rhdisk24 MEMBER 15
    ORADATA_0014 /dev/rhdisk23 MEMBER 14
    ORADATA_0013 /dev/rhdisk22 MEMBER 13
    ORADATA_0012 /dev/rhdisk21 MEMBER 12
    ORADATA_0011 /dev/rhdisk20 MEMBER 11
    ORADATA_0010 /dev/rhdisk19 MEMBER 10
    ORADATA_0008 /dev/rhdisk17 MEMBER 8
    ORADATA_0007 /dev/rhdisk16 MEMBER 7
    ORADATA_0027 /dev/rhdisk37 MEMBER 27
    ORADATA_0032 /dev/rhdisk42 MEMBER 32
    ORADATA_0065 /dev/rhdisk15 MEMBER 65
    ORADATA_0009 /dev/rhdisk18 MEMBER 9
    ORADATA_0026 /dev/rhdisk36 MEMBER 26
    ORADATA_0025 /dev/rhdisk35 MEMBER 25
    ORADATA_0020 /dev/rhdisk30 MEMBER 20
    ORADATA_0021 /dev/rhdisk31 MEMBER 21
    ORADATA_0022 /dev/rhdisk32 MEMBER 22
    ORADATA_0003 /dev/rhdisk44 MEMBER 3
    ORADATA_0001 /dev/rhdisk43 MEMBER 1
    ORADATA_0004 /dev/rhdisk45 MEMBER 4
    ORADATA_0033 /dev/rhdisk46 MEMBER 33
    ORADATA_0034 /dev/rhdisk47 MEMBER 34
    ORADATA_0035 /dev/rhdisk48 MEMBER 35
    ORADATA_0036 /dev/rhdisk49 MEMBER 36
    ORADATA_0037 /dev/rhdisk5 MEMBER 37
    ORADATA_0038 /dev/rhdisk50 MEMBER 38
    ORADATA_0039 /dev/rhdisk51 MEMBER 39
    ORADATA_0040 /dev/rhdisk52 MEMBER 40
    ORADATA_0041 /dev/rhdisk53 MEMBER 41
    ORADATA_0042 /dev/rhdisk54 MEMBER 42
    ORADATA_0043 /dev/rhdisk55 MEMBER 43
    ORADATA_0044 /dev/rhdisk56 MEMBER 44
    ORADATA_0045 /dev/rhdisk57 MEMBER 45
    ORADATA_0046 /dev/rhdisk58 MEMBER 46
    ORADATA_0047 /dev/rhdisk59 MEMBER 47
    ORADATA_0066 /dev/rhdisk6 MEMBER 66
    ORADATA_0048 /dev/rhdisk60 MEMBER 48
    ORADATA_0049 /dev/rhdisk61 MEMBER 49
    ORADATA_0050 /dev/rhdisk62 MEMBER 50
    ORADATA_0053 /dev/rhdisk63 MEMBER 53
    ORADATA_0054 /dev/rhdisk64 MEMBER 54
    ORADATA_0063 /dev/rhdisk65 MEMBER 63
    FLASH_RECO_AREA_0000 /dev/rhdisk66 MEMBER 0
    ORADATA_0055 /dev/rhdisk67 MEMBER 55
    ORADATA_0056 /dev/rhdisk68 MEMBER 56
    ORADATA_0057 /dev/rhdisk69 MEMBER 57
    ORADATA_0067 /dev/rhdisk7 MEMBER 67
    ORADATA_0078 /dev/rhdisk93 MEMBER 78
    ORADATA_0079 /dev/rhdisk94 MEMBER 79
    ORADATA_0080 /dev/rhdisk95 MEMBER 80
    ORADATA_0081 /dev/rhdisk96 MEMBER 81
    ORADATA_0058 /dev/rhdisk70 MEMBER 58
    ORADATA_0059 /dev/rhdisk71 MEMBER 59
    ORADATA_0060 /dev/rhdisk72 MEMBER 60
    ORADATA_0051 /dev/rhdisk8 MEMBER 51
    ORADATA_0074 /dev/rhdisk89 MEMBER 74
    ORADATA_0075 /dev/rhdisk90 MEMBER 75
    ORADATA_0076 /dev/rhdisk91 MEMBER 76
    ORADATA_0077 /dev/rhdisk92 MEMBER 77
    ORADATA_0070 /dev/rhdisk85 MEMBER 70
    ORADATA_0071 /dev/rhdisk86 MEMBER 71
    ORADATA_0072 /dev/rhdisk87 MEMBER 72
    ORADATA_0073 /dev/rhdisk88 MEMBER 73
    ORADATA_0052 /dev/rhdisk9 MEMBER 52
    ORADATA_0068 /dev/rhdisk83 MEMBER 68
    ORADATA_0061 /dev/rhdisk73 MEMBER 61
    ORADATA_0069 /dev/rhdisk84 MEMBER 69
    84 rows selected.
    SQL>
    I have also checked the ownership, and it is correct.
    I have also checked with ASMCMD and tried to locate files in the diskgroup, and all the files are there.
    I have also successfully created a test directory in the ASM diskgroup with the mkdir command.
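
    Since the diskgroup mounts but gets forcibly dismounted when the database touches certain files, a few checks against the ASM instance usually narrow down which disk is dropping out. This is only a generic sketch (connect to the ASM instance AS SYSDBA; the diskgroup name ORADATA is from the post, everything else is standard dictionary views):

    -- diskgroup state as ASM sees it
    SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;

    -- every disk should be visible with a clean header
    SELECT path, header_status, mount_status, mode_status FROM v$asm_disk;

    -- the discovery string must cover all the /dev/rhdisk* devices
    SHOW PARAMETER asm_diskstring

    -- mount explicitly, then watch the ASM alert log for the disk that fails
    ALTER DISKGROUP ORADATA MOUNT;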

  • HT1097 Unable to export movie "because file _ has been moved"?

    I just made an entire movie in Imovie 08. When I try to export it, I get a box that pops up that says "searching for File (some number like 8 or 10)", at which point the only button I can hit is "stop". Then another one pops up about how it is unable to locate the file because it has been moved, and then I have the option to Search or Cancel. I've tried "searching", but I have no idea what "File 10" or "File 8" are. I have thousands of video files, and none are labeled that way. So I have to hit Cancel and then the whole export is cancelled.
    It's obviously saying my files have been moved, but I have no idea where to... nothing has been moved since I started the project.
    Help. I've never had a problem like this before.

    jonred wrote:
    .. "*Unable to prepare project for publishing*
    The project could not be prepared for publishing because an error occurred. (-41)"
    Error code -41 indeed indicates a vague 'memory full' error.
    In case you're using an external HDD: is it Mac-formatted, or FAT32, NTFS, etc.?
    And this doesn't apply only to the target disk, but to the internal Mac OS disk too.
    Things to test:
    • Simply relaunch your Mac - maybe some 'pointers' are simply .. kaputt ..
    • Did you partition your internal drive? If so, the Mac OS partition needs at least 20-40 GB free for any conversion process.
    Or is it a 'solid' single partition with tons of free disk space?

  • Library recovery: how can I recover a library after I get this message: There was an error opening the database for the library "/Users/Jim/Pictures/Libraries/K2 Library.aplibrary"???

    Library recovery: how can I recover a library after I get this message: "There was an error opening the database for the library “/Users/Jim/Pictures/Libraries/K2 Library.aplibrary”???

    Thanks a lot, Frank. The lsregister did the trick! I am testing this on 10.8.2.
    http://support.apple.com/kb/TA24770 : I deleted the "com.apple.LaunchServices.plist", and restarted the Finder, even logged off and on again; did not change anything. The file has not been recreated, so it may not be used anymore.
    http://itpixie.com/2011/05/fix-duplicate-old-items-open-with-list/#.ULZqa6XAoqY
    The direct "copy and paste" from the post did not work: I had to retype it :
    /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchSe rvices.framework/Versions/A/Support/lsregister -kill -r -domain local -domain system -domain user
    but then it worked like a charm!
    Cheers
    Léonie
    And btw: I turned on the "-v" option for lsregister to see what was going on, and saw plenty of error messages (error -10811), so I repeated the command with "sudo". After that I still saw five iPhotos. Repeating as a regular user finally got rid of the redundant iPhoto entries. It looks like registering as the super user may be causing this trouble.

  • UNABLE TO FIX: "There was an error opening the database for the library"

    Like many others I have received the "There was an error opening the database for the library" error message today after a mid-Aperture processing session crash. However, unlike others I have been unable to fix the database...
    I have trashed all prefs, tried merging with a new library, tried repairing permissions, repairing the database, AND rebuilding the database, and have tried Disk Utility. Nothing works. Repair Permissions did its job but didn't fix it. Repair Database doesn't even start to work, and the Aperture utility quits immediately. Rebuild Database crunches away for about a minute, then stops, and eventually quits itself with no error message.
    Aperture works fine with other databases, just not my main one. I'm worried that maybe it won't rebuild because the library is 500 gigs and I only have 200 gigs of free space, but that's pure speculation.
    Anybody out there experienced this same issue and come up with a solution? I know I can pull my masters out of the Aperture package, but I have 500 gigs of edits too... any way to force-rebuild the database by tossing the database files in the package or something?
    Thanks!

    Sorry, maybe I wasn't clear. Even with the 420 gig library on the drive I had 520 gigs free (1TB drive). In any case since I had made a duplicate of the library I began to pull files from the Aperture Library > Database directory and I seemed to have fixed it enough to allow Aperture to Rebuild the Database.
    I ended up tossing all the .plist files and the Library.apdb file in the Database directory within the Aperture Library. After doing so I was able to properly rebuild and all is right with the world.

Maybe you are looking for

  • Create file excel in background

    Hi, I need to create an Excel file from an itab when launching a report in the background (SM36). I used GUI_DOWNLOAD but I get a code page error... Help me please. Thanks!!

  • How to calculate the term of a loan

    Hi, Thank you in advance if you could help me. I need to write a trigger that will work out the terms of a loan. To do that I need a function that takes interest rate, number of payments and principal amount and return the term amount. Does such a fu

  • Can older charger be used with MacBook Air?

    Can an older charger be used with a MacBook Air? The charger with the cylindrical magnetic male end was lost. A charger for an older white plastic MacBook is available, but is it safe to use with a MacBook Air?

  • AppleTV (2nd Gen) sound dropping from iTunes (as remote AirTunes speaker)

    Setup: - iTunes 10.0.1(22) on Mac mini (1st Gen Intel), Mac OS X 10.5.8, 1000BaseT - FritzBox DSL & WLAN router - AppleTV (2nd Gen). Accessing any content (sound, video, pictures) through AppleTV from the Mac mini works fine. Everything plays well, no hiccups

  • How to repair disk

    When I run "Verify disk" after starting up with installation DVD I receive message "Disk needs to be repaired - invalid volume file count and invalid directory count - use 'Repair Disk'. Problem is the 'Repair Disk' button isn't highlighted. Any idea