Constant mds activity in AFP log

Every user generates the following entries in the AFP log every two minutes:
IP 192.168.1.xxx - - [18/Apr/2008:15:26:21 -0800] "Delete mds-lock-dir" -5007 0 0
IP 192.168.1.xxx - - [18/Apr/2008:15:26:21 -0800] "Delete master-status" 0 0 0
IP 192.168.1.xxx - - [18/Apr/2008:15:26:21 -0800] "Delete mds-lock-dir-<computername>.local-2c6da58" 0 0 0
AppleFileServer is running at a constant 20-35% CPU, which I assume is related to this problem, since a similar server in another location is not exhibiting this mds issue in its log and is running at a nice 0-2% CPU (both of these are dual G5s).
The setup is a 10.5.2 server acting as an OD replica. I thought this had something to do with Spotlight (mds?), so I disabled Spotlight on the share, but it made no difference. I thought it might have to do with the OD replica (master-status?), so I rebuilt it by changing the server to "connected to a directory system" and then back to a replica; no change.
Many thanks for any thoughts you might have...
z

Standard indexing behavior I guess
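Since disabling Spotlight on the share in Server Admin didn't change anything, it may be worth confirming from the command line that indexing really is off for the volume backing the share point, and clearing the existing index so mds stops revisiting it. A minimal sketch, assuming the share lives on a volume mounted at /Volumes/Data (adjust the path for your setup; mdutil ships with 10.5):
# show the current indexing status for the volume
mdutil -s /Volumes/Data
# turn indexing off and erase the existing index (needs admin rights)
sudo mdutil -i off /Volumes/Data
sudo mdutil -E /Volumes/Data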

Similar Messages

  • Error while querying ADF form by saved criteria - MDS activated project

    Hi,
    I have a project with MDS activated through the database. I'm able to save the customized query criteria and retrieve it in the session in which the criteria was created. But if I log off the session and open another session, I only see the saved criteria name in the pick list. If I select the saved criteria, it throws a 'NullPointerException'. I'm using version 11.1.2.1.0. Here is the complete stack trace:
    [2012-10-10T11:59:59.374-07:00] [ADFAdminServer] [WARNING] [] [oracle.adfinternal.view.faces.lifecycle.LifecycleImpl] [tid: [ACTIVE].ExecuteThread: '13' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: masked] [ecid: 9f22e75039e65be4:-64df6045:13a4bcc092f:-8000-0000000000000122,0] [APP: ViewOnly_Project1_ViewOnly] ADF_FACES-60098:Faces lifecycle receives unhandled exceptions in phase INVOKE_APPLICATION 5[[
    java.lang.NullPointerException
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding$QueryModelImpl.setCurrentDescriptor(FacesCtrlSearchBinding.java:1642)
         at oracle.adfinternal.view.faces.renderkit.rich.query.DefaultQueryOperationListener.processQueryOperation(DefaultQueryOperationListener.java:53)
         at oracle.adf.view.rich.event.QueryOperationEvent.processListener(QueryOperationEvent.java:240)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcast(UIXComponentBase.java:824)
         at oracle.adf.view.rich.component.UIXQuery.broadcast(UIXQuery.java:108)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:1129)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:353)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:204)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:173)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:122)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:468)
         at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:468)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:293)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:199)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:180)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at java.security.AccessController.doPrivileged(Native Method)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
         at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
         at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    [2012-10-10T11:59:59.421-07:00] [ADFAdminServer] [ERROR] [] [oracle.adfinternal.view.faces.config.rich.RegistrationConfigurator] [tid: [ACTIVE].ExecuteThread: '13' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: masked] [ecid: 9f22e75039e65be4:-64df6045:13a4bcc092f:-8000-0000000000000122,0] [APP: ViewOnly_Project1_ViewOnly] ADF_FACES-60096:Server Exception during PPR, #1[[
    java.lang.NullPointerException
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding$QueryModelImpl.setCurrentDescriptor(FacesCtrlSearchBinding.java:1642)
         at oracle.adfinternal.view.faces.renderkit.rich.query.DefaultQueryOperationListener.processQueryOperation(DefaultQueryOperationListener.java:53)
         at oracle.adf.view.rich.event.QueryOperationEvent.processListener(QueryOperationEvent.java:240)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcast(UIXComponentBase.java:824)
         at oracle.adf.view.rich.component.UIXQuery.broadcast(UIXQuery.java:108)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:1129)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:353)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:204)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:173)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:122)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:468)
         at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:468)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:293)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:199)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:180)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at java.security.AccessController.doPrivileged(Native Method)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
         at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
         at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)

    Hi,
    +"But if I log off the session and open another session, I only see the saved criteria name in the pick list. "+
    Does this also happen when the browser is closed and re-opened? If not then what is special on your log-off / log-on method. You should consider filing a service request with customer support if this problem remains
    Frank

  • OES2 SP3 AFP How to empty AFP log file

    Hello All,
    I can't find any information on how to empty the AFP log file at /var/log/afptcpd/afptcp.log. It has grown to 1.2 GB, and it is very uncomfortable to look for information in such a big file.
    Any ideas ?
    Thank you
    Andreas

    If you get a lot of AFP activity, you're probably best off just setting the log to rotate.
    Create a file under /etc/logrotate.d/, name it whatever you want, and then put something like this in it:
    /var/log/afptcpd/afptcp.log {
        compress
        dateext
        maxage 365
        rotate 99
        size=+4096k
        notifempty
        missingok
        create 644 root root
        postrotate
            /etc/init.d/novell-afptcpd reload
        endscript
    }
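    To empty the existing 1.2 GB file right away (the logrotate entry only manages it going forward), truncating it in place is safer than deleting it, because the daemon keeps the file handle open. A sketch, run as root, assuming the logrotate file above was saved as /etc/logrotate.d/afptcp:
    # shrink the live log to zero bytes without removing the file
    > /var/log/afptcpd/afptcp.log
    # dry-run the new logrotate entry, then force one rotation to test it
    logrotate -d /etc/logrotate.d/afptcp
    logrotate -f /etc/logrotate.d/afptcp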

  • WRT600N: Diagnosing Constant "Internet" Activity

    I have a WRT600N connected to a Scientific Atlanta Model DPX110 cable modem and I'm seeing constant "internet" activity between my router and the cable modem. The internet activity light is banging like crazy on the router and the "PC/LAN" light on the cable modem is also banging like mad. I have disabled all wireless interfaces and disconnected every network connection from the router except that of the cable modem. I have also cycled power on both the cable modem and the router a number of times just to make sure nothing is stuck in a weird state. Basically, nothing I have done seems to get rid of this constant activity between the router and modem. I'm pretty sure the activity is not coming in from the WAN side of the modem, as I'm not seeing any comparable activity on the modem's "Cable/WAN" activity light. I'm not sure exactly when the behavior began, but I'm pretty sure this was not happening in the past. I also upgraded to the latest WRT600N firmware to see if that would help, but it did not. Any suggestions on how to diagnose what this traffic is between my cable modem and WRT600N, short of a network sniffer? Is there some way to make the router log this activity for me to view? I initially assumed the activity was being caused by the router, but now I'm beginning to wonder if it might be the cable modem doing it... Anyway, any suggestions appreciated. Thanks!

    Yeah, I had considered doing that and may still do so shortly.  Was just a bit concerned with bypassing the hardware firewall in case the cable modem has somehow been compromised.  Thought there might be a way to see what the "internet" traffic is before allowing the cable modem direct access to my PC.  Paranoid, I know...
    May try hooking directly to my PC shortly...  but first I believe I'll need to locate and install the original network card to which the cable service is keyed.  Haven't seen it in some time as I've long since had a Linksys cable router with MAC address spoofing in use.  Will provide updates on what I find.
    Thanks for the feedback!
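    If you do end up cabling a PC straight to the modem to look, capturing a short sample of packets is the quickest way to see what the chatter actually is. A minimal sketch, assuming a Mac or Linux box whose wired interface is named eth0 (on Windows, Wireshark shows the same thing with a GUI):
    # grab 200 packets from the wired interface without resolving names
    sudo tcpdump -i eth0 -n -c 200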

  • ServiceLayer.exe and constant disk activity

    Hi
    I wondered what process was causing HD activity every 5 seconds on my system.
    File Monitor from Sysinternals reported that servicelayer.exe performs a series of disk operations every 5 seconds, continuously. It tries to create a directory, fails because the directory exists, opens PCCSConfig.dat, does a query on the file, performs three read operations, one query, one read, etc., until after 9 information queries and 32 reads the file is closed; after 5 seconds the whole process is repeated. A File Monitor log file is attached showing the constant disk access.
    Is all that constant disk activity really necessary or is it a bug?
    PC Suite: Version 6.83.14.1
    Connectivity Cable Drivers: Version 6.83.9.0
    PC Connectivity Solution: Version 7.7.10.0
    Cheers,
    -jh
    hemmo
    Attachments:
    ServiceLayerLog.LOG (44 KB)

    Hi,
    I think that it is "functionality", anyway open Nokia Connection Manager and turn off "Serial Port cable"....
    (at least that stop my PC with PC Suite 6.84....)
    Anyway, you can inform Nokia about this issue if you want

  • Constant Internet Activity ???

    First, please understand I am highly geek-i-ly challenged.  
    I've noticed that there is constant internet activity (connection icons blinking) when I am running NM.
    If I shut NM down, there is no activity.
    Question: What kind of activity could this be? Is NM just sending and receiving "confirmed connection" packets, or what?
    It seems to send 2 packets and receive 1 or 2 back. Where is it sending them, and who or what is sending what back?
    I use Windows XP Home (SP3), Linksys WRT54G ver 6, Firmware 1.02.5 hard wired router, Embarq DSL connection with Embarq supplied modem, and have NM version 5.1.9055 installed.
    Any help is appreciated.  Now go have a great day.
    Bill

    Hi, NM sends out packets to your router and other devices on your network to get the information used to build the network map and to monitor your network.
    My Cisco Network Magic Configuration:
    Router: D-Link WBR-2310 A1 FW:1.04, connected to Comcast High Speed Internet
    Desktop, iMac: NM is on the Windows Partition, using Boot camp to access Windows, Windows 7 Pro 32-bit RTM, Broadcom Wireless N Card, McAfee Personal Firewall 2009,
    Mac Partition of the iMac is using Mac OS X 10.6.1 Snow Leopard
    Laptop: Windows XP Pro SP3, Intel PRO/Wireless 2200BG, McAfee Personal Firewall 2008
    Please note that though I am a beta tester for Network Magic, I am not a employee of Linksys/Cisco and am volunteering my time here to help other NM users.

  • Archive all the active online redo logs

    Hi,
    in 9.2.0 and in archivelog mode, how can I archive all the active online redo logs?
    Thank you.

    Is your database already running in archivelog mode? If yes, and if automatic archiving is enabled, then your redo will be archived automatically. I think you first need to check whether your DB is in archivelog mode or not. Post the output of the following (from SQL*Plus):
    archive log list
    Daljit Singh
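    For reference, if the database is indeed in archivelog mode, a minimal sketch of forcing everything out to the archive destination from SQL*Plus (standard commands in 9.2; ARCHIVE LOG ALL only archives full groups that haven't been archived yet, while ARCHIVE LOG CURRENT first switches out of the current group):
    SQL> archive log list
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL> ALTER SYSTEM ARCHIVE LOG ALL;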

  • Open database if an active online redo log is missing

    Hi,
    Sorry for the rather long post, but I specified all the steps I performed and couldn't make it shorter :-(
    I need some advice on how to open the database when an active online redo log is missing.
    For test purposes I intentionally performed a shutdown abort when the redo log group 1 was in active state and then renamed its only member (REDO01.LOG) so that the database couldn't perform crash recovery using it. Then upon startup I obviously got the message:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: 'H:\ORADATA\TESTDB\REDO01.LOG'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    Ok, so I checked the state of the logs:
    SQL>SELECT a.GROUP#, first_change#, SEQUENCE#, a.status, SUBSTR(b.MEMBER, 1, 40) MEMBER, b.status mem_status, a.archived
      2    FROM v$log a, v$logfile b
      3   WHERE a.GROUP# = b.GROUP#
      4  ORDER BY a.GROUP#, b.MEMBER;
    GROUP# FIRST_CHANGE#  SEQUENCE# STATUS           MEMBER                         MEM_STA ARC
         1        592134         29 ACTIVE           H:\ORADATA\TESTDB\REDO01.LOG           YES
         2        592268         30 CURRENT          C:\ORADATA\TESTDB\REDO02.LOG           NO
         3        592129         28 ACTIVE           C:\ORADATA\TESTDB\REDO03.LOG           YES
    Since opening the database to perform a log switch and thus change the status of the redo log group 1 from ACTIVE to INACTIVE to recreate the member isn't possible, I performed database recovery.
    SQL>recover database until cancel;
    ORA-00279: change 592129 generated at 02/04/2009 10:31:15 needed for thread 1
    ORA-00289: suggestion : C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_28_%U_.ARC
    ORA-00280: change 592129 for thread 1 is in sequence #28
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00279: change 592134 generated at 02/04/2009 10:31:28 needed for thread 1
    ORA-00289: suggestion : C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_29_%U_.ARC
    ORA-00280: change 592134 for thread 1 is in sequence #29
    ORA-00278: log file 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_28_4RLR3JS9_.ARC' no longer needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_29_4RLR4MF3_.ARC'
    ORA-00279: change 592268 generated at 02/04/2009 10:32:03 needed for thread 1
    ORA-00289: suggestion : C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_30_%U_.ARC
    ORA-00280: change 592268 for thread 1 is in sequence #30
    ORA-00278: log file 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB\ARCHIVELOG\2009_02_04\O1_MF_1_29_4RLR4MF3_.ARC' no longer needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    'C:\ORADATA\TESTDB\REDO02.LOG'
    Log applied.
    Media recovery complete.
    So for log sequence #28 I accepted the proposed archived redo log in the FRA, for sequence #29 (that's the online redo log that is missing!) I manually specified its archived copy, and for sequence #30 I specified the CURRENT online redo log. And it seems the media recovery was successful.
    Next I tried to open the database but again got the error:
    SQL>alter database open noresetlogs;
    alter database open noresetlogs
    ERROR at line 1:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: 'H:\ORADATA\TESTDB\REDO01.LOG'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    The status of the log groups and their members is exactly as it was in the first query I wrote above, i.e. redo log group 1 is still ACTIVE, so it's needed for crash recovery (which I had already done manually, if I understand correctly how Oracle works!). I also checked whether the datafiles are inconsistent (described in Metalink doc ID 1015544.102):
    SQL>SELECT DISTINCT CHECKPOINT_CHANGE#, FUZZY FROM V$DATAFILE_HEADER;
    CHECKPOINT_CHANGE# FUZ
                592269 NO
    So, everything seems ok as far as datafile consistency is concerned.
    My question is: how can I rename/drop/clear/whatever the member of redo log group 1 to open the database?
    I tried to rename the log file member, to add another member to it, to open the database with resetlogs, to clear the logfile group 1, but all without success:
    1)
    SQL>alter database clear logfile group 1;
    alter database clear logfile group 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of instance testdb (thread 1)
    ORA-00312: online log 1 thread 1: 'H:\ORADATA\TESTDB\REDO01.LOG'
    2)
    SQL>alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    3)
    SQL>alter database rename file 'H:\ORADATA\TESTDB\REDO01.LOG' to 'C:\ORADATA\TESTDB\REDO01.LOG';
    alter database rename file 'H:\ORADATA\TESTDB\REDO01.LOG' to 'C:\ORADATA\TESTDB\REDO01.LOG'
    ERROR at line 1:
    ORA-01511: error in renaming log/data files
    ORA-01512: error renaming log file H:\ORADATA\TESTDB\REDO01.LOG - new file C:\ORADATA\TESTDB\REDO01.LOG not found
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    4)
    SQL>alter database add logfile member 'C:\ORADATA\TESTDB\REDO01.LOG' to group 1;
    alter database add logfile member 'C:\ORADATA\TESTDB\REDO01.LOG' to group 1
    ERROR at line 1:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: 'H:\ORADATA\TESTDB\REDO01.LOG'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    Sorry again for the long post, and thank you in advance for any suggestion.
    Regards,
    Jure

    You could check if the recovery was complete by (re)creating the controlfile with the resetlogs option.
    <CREATE CONTROLFILE REUSE DATABASE define_db_name RESETLOGS NOARCHIVELOG
    ...>
    Thanks for the hint. If possible, could you just check whether the steps I'm going to perform are ok?
    I did an "alter database backup controlfile to trace;" and then extracted the create controlfile definition part. So in essence I should run the following statements:
    CREATE CONTROLFILE REUSE DATABASE "TESTDB" RESETLOGS  ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
        MAXINSTANCES 8
        MAXLOGHISTORY 292
    LOGFILE
      GROUP 1 'C:\ORADATA\TESTDB\REDO01.LOG'  SIZE 20M,
      GROUP 2 'C:\ORADATA\TESTDB\REDO02.LOG'  SIZE 20M,
      GROUP 3 'C:\ORADATA\TESTDB\REDO03.LOG'  SIZE 20M
    -- STANDBY LOGFILE
    DATAFILE
      'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\SYSTEM01.DBF',
      'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\UNDOTBS01.DBF',
      'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\SYSAUX01.DBF',
      'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\USERS01.DBF'
    CHARACTER SET EE8MSWIN1250
    ALTER DATABASE OPEN RESETLOGS;
      ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\TEMP01.DBF' REUSE;
    Is that correct?
    About the RMAN backups: Wouldn't a 'CATALOG RECOVERY AREA' populate the controlfile with backup information again (I'm not using a recovery catalog in this case)?
    Thanks for the help!
    Regards,
    Jure
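    On the RMAN question above: recreating the controlfile does lose the backup records, and a sketch of re-registering what already sits in the flash recovery area looks like the following (the path is the FRA location from this thread; CATALOG START WITH prompts before adding the files it finds, and works on releases where CATALOG RECOVERY AREA isn't available):
    RMAN> CONNECT TARGET /
    RMAN> CATALOG START WITH 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TESTDB';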

  • When crash recovery occurs, why use the active online redo log and not the archived log?

    If the current redo log has been archived but is still 'ACTIVE': as we all know, the archived log is just an archived copy of that still-'ACTIVE' redo log, so they contain the same data. Why, then, does crash recovery use the active online redo log and not the archived log? (I would think that if crash recovery could use the archived log, the online redo log could be overwritten whether it is 'ACTIVE' or not.)
    Quote:
    Re: v$log : How redo log file can have a status ACTIVE and be already archived?
    Hemant K Chitale
    If your instance crashes, Oracle attempts Instance Recovery -- reading from the Online Redo Logs. It doesn't need ArchiveLogs for Instance Recovery.
    TanelPoder
    Whether the log is already archived or not doesn't matter here; when the instance crashes, Oracle needs some blocks from that redolog. The archivelog is just an archived copy of the redolog, so you could use either the online or the archive log for the recovery; it's the same data in there (Oracle reads the log/archivelog file header when it tries to use it for recovery and validates whether it contains the changes (the RBA range) that it needs).

    Aman.... wrote:
    John,
    Are you sure that instance recovery (not media recovery) would be using the archived redo logs? Since the only thing that would be lost is the instance, there wouldn't be any archived redo log generated from the current redo log, and the previous archived redo logs would already be checkpointed to the data files, so IMHO archived redo logs won't participate in the instance recovery process. Yep, I shall watch the video, but tomorrow.
    Regards
    Aman....
    That's what I said. Or meant to say. If Oracle used archivelogs for instance recovery, it would not be possible to recover in noarchivelog mode. So recovery relies exclusively on the online log.
    Sorry I wasted your time; I'll try to be less ambiguous in future.
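    For reference, both pieces of state being discussed here, whether a group is still ACTIVE and whether it has been archived, are visible in v$log; a minimal sketch of the query (a standard dictionary view, so nothing beyond a SQL*Plus session is assumed):
    SQL> SELECT group#, thread#, sequence#, status, archived FROM v$log ORDER BY group#;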

  • AFP Logs don't include file path

    Is there a way to get AFP logging to include the path to the files being referenced? Possibly a hidden property in com.apple.AppleFileServer.plist or a 3rd party logging solution?
    Currently, and as it appears even in Snow Leopard Server, AFP only logs the names of the files being accessed, created, and deleted. Since there are numerous files with the same name on most servers, like .DS_Store, the path is required to make the logs really useful.

    On a Leopard Server, com.apple.AppleShareClient.plist already has AFP debugging enabled with the following properties set: afpdebuglevel 6, afpdebugsyslog YES. All that's left is to create a place for the debug records to be dropped.
    Unfortunately this doesn't add any useful information on the server side of things, at least not specific to who opens which Document1.doc.
    It is odd that the standard AFP log doesn't include the file's path. Who wouldn't want to know which .DS_Store was opened or where that 'Untitled Folder' was created?
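    For what it's worth, the two keys mentioned above can be inspected or set from Terminal with the standard defaults tool. This is only a sketch: the domain and key names are the ones cited in this thread, the values are examples, and whether they change what the server-side access log records is unverified (on a server, the plist may live under /Library/Preferences, in which case pass that full path instead of the domain):
    # read the current debug level, then raise it and send debug output to syslog
    defaults read com.apple.AppleShareClient afpdebuglevel
    defaults write com.apple.AppleShareClient afpdebuglevel -int 6
    defaults write com.apple.AppleShareClient afpdebugsyslog -bool YES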

  • Constant disk activity; who is doing that?

    Hello there,
    I pimped my G5 with two new internal drives (WD 1TB Caviar Black) and copied clones of other disks and volumes onto them (with SuperDuper), including my old system. It seems to have been quite a successful operation; nearly everything works as usual again.
    Of course I expected mdimport to do some heavy indexing, with constant disk activity as a result.
    But, after a couple of days, there is still constant disk activity: every second or so some 5.99 MB is written to disk, according to Activity Monitor.
    I do not understand which app is doing this, for none of them seem to be very busy at all.
    Any ideas out there?
    Appreciated,
    Marius

    I just dealt with this problem effectively on my Mac Pro (Snow Leopard 10.6.4), after considerable mucking around. Hard drive noise, flashing lights, system slow-downs, all driving me nuts (admittedly, a short drive).
    Turned out to be Spotlight. As it happens I have a WD Studio 1-terabyte Firewire drive that's always hooked up. I back up to it several times a day when I'm on a project, and it has clones of my application drive and three data drives, all visible on my desktop. I also have a separate Windows drive that's visible. Recently, for reasons known only to Apple, I'm sure, Spotlight began indexing and re-indexing the cloned logical drives on the WD Studio. Never stopped -- as soon as it finished, it would start again. I simply went into the Privacy tab of the Spotlight preference pane from System Preferences, made sure that all my drives appeared in the box, and then removed all four cloned drives and the Windows drive so that Spotlight would no longer index them.
    Problem solved. If you have extra physical or logical drives beyond those actually in your system (as opposed to Firewire or USB drives), try removing them and re-adding them in the Privacy tab of the Spotlight preference pane. Then wait for Spotlight to re-index the drives -- could take a bit of time, and the drives will keep accessing until re-indexing is complete. If that works, you're good. If it doesn't and the process just keeps going, you might have to remove permanently the external drives and clones from the list of drives to be indexed. That's what I ended up having to do.
    Hope this helps.
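    If excluding the clones doesn't stop it, it can help to see exactly which process is issuing the writes before guessing further. A minimal sketch using fs_usage, which ships with OS X (run it in Terminal as an admin and stop it with Ctrl-C; the process responsible appears at the end of each line):
    # stream live filesystem calls, wide output, filtered to filesystem activity
    sudo fs_usage -w -f filesys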

  • No activity on smspxe.log and I am not getting very far when testing OS deployment

    Hello and thanks for any help. I have a few OS deployment issues.
    1. There is no activity on smspxe.log. This has been a problem for two weeks. I believe it is a symptom of trying to fix a bigger issue (#2)
    2. I am trying to move us from SCE imaging to SCCM 2012. We had CM working at first but had to switch back. Now when I try to do a PXE boot, I see this message.
    Recovery
    Your PC needs to be repaired
    The Windows Boot Configuration Data file from the PXE server does not contain a valid operating system entry. Ensure that the server has boot images installed for this architecture.
    File:\Tmp\x86x64{BBD9AF6-E64E-41A9-840504118DB33B45}.bcd
    Error code: 0xc0000098
    I have tried these steps: disable PXE on the DP (which is also the MP), remove WDS, hide the remote install folder, restart the server, and install the WDS role. It wouldn't start, so I did wdsutil initialize and enabled PXE. Maybe important to note: before this the remote install share was RemoteInstall; now it is "REMINST".
    I have boot images for both i386 and x64.
    Thanks so much for any help.

    Hi,
    You could try the steps in the blogs below (I know that you have already tried some of them). Please make sure the RemoteInstall folders have been populated.
    Quote:
    Go to the Properties of the DP and uncheck the box "Enable PXE support for clients"
    Wait for a short time while Windows Deployment Services uninstalls
    Reboot the DP
    Re-enable PXE support and wait while WDS re-installs
    WDS usually requires a reboot so you should reboot again
    ConfigMgr 2012 / SCCM 2012 Task Sequence fails with BCD error
    Quote:
    My initial thoughts were to try and redistribute the boot images but this didn’t work for me. So I removed the PXE role from the DP in question, waited for the success message to appear under the DP monitoring and rebooted the server. After rebooting the server I enabled the PXE role on the DP, this time leaving Unknown Computer Support enabled. I waited for the success message in the DP monitoring and rebooted the server. After the reboot I made a point of checking the status of the WDS role in Server Manager (all was well). I also checked the RemoteInstall folder had the same contents as other working DP’s and tried again. Hey presto problem solved!
    WDS Recovery Error (BCD) 0xc0000098 SCCM 2012
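    If the uncheck/re-check cycle in the console still doesn't leave you with a populated RemoteInstall share, the same reset can be sketched from an elevated command prompt with the WDS command-line tool. The D:\RemoteInstall path below is only an assumed location; use whatever drive actually holds the folder:
    rem remove the current WDS configuration, re-create it, then start the service
    wdsutil /Uninitialize-Server
    wdsutil /Initialize-Server /RemInst:"D:\RemoteInstall"
    wdsutil /Start-Server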
    Best Regards,
    Joyce

  • Why is there now unusually high, constant disk activity on my Mac?

    Firefox 9.0.1 on Mac OS X 10.6.8 is churning my HD and then must be force quit. It seems to run, but there's definitely something new and unusual happening here.

    Thanks for taking the time to reply.
    You are right that there are a lot of guides around for creating "fake" RAC clusters; however, Hunter's guide appears to be one of the best, and I found it while browsing around the Oracle website. Of course Oracle stresses that it is entirely unsupported and for "education purposes" :)
    Well education is what I'm after and I accept that it would perform nowhere near a real cluster. Still the performance issues I'm having appear abnormal.
    The background to this is that one of the applications I manage at work (VMware) has a backend DB hosted on RAC. There is an entire database team who look after this area. For my own personal edification I'd like to learn a bit more about RAC (and Oracle in general) and get my hands dirty, though I currently have no long-term aspirations to become a full-fledged DBA. I'm working through a text on Safari and I thought it would be useful to create a "fake", as you put it, cluster at home and play around a bit. Unfortunately the resources to create a "proper" environment are currently not within my means :(
    With regards to my problem I was hoping someone here has played around with a similar setup and could advise whether they had similar issues. The 3 days or so I spent labouring through the 60 page guide was far more educational than just reading a text.
    I've found that the excessive disk activity appears to be down to my server and VMware ESXi rather than Oracle. I stopped all instances and CRS and the constant disk activity still continued. I had taken a number of snapshots of each Linux node during my installation and I'm currently deleting these to see whether that improves things. My hard disk could well be the bottleneck, as ESXi is really meant for SCSI controllers and disks when using local datastores.
    I'll take your advice and have a trawl through eBay. I will also install standalone Oracle on a separate box and have a play with that.
    Peter

  • T520 Constant disk activity

    T520, 4 GB RAM, 500 GB hard drive, Win 7 Pro 64 bit, 
    Do any fellow T520 users see constant hard disk activity, about one flash per second? I've turned off the usual suspects, including the Windows Indexing Service and all the Intel security and management services, and uninstalled Symantec Endpoint Protection 12, which has also been mentioned in certain articles about disk thrashing.
    The side effect of this constant disk activity is that the computer will not enter sleep or hibernate due to inactivity, although I can force it manually. I'm also worried about premature disk wear.
    Any hints would be greatly appreciated.
    T520 4239-CTO
    T61/p 6459-CTO (Gone but not forgotten)
    A31/p XP Pro 1 gig memory
    A30/p XP Pro 1 gig memory
    TP600 Win 2K 288 mb memory
    701C Win 98 Don't ask

    Hi mixz1,
    I can't answer your question directly, but on my T400 running Win7 Pro 64 I also see the HDD LED flash about once per second. My guess is that it's monitoring drive status, not actually doing reads or writes. Even so, the machine sleeps as expected.
    Z.

  • AFP log question

    Perhaps due to being both paranoid and ignorant about OS X Server, I'm worried that I might have been "hacked".
    I was looking at the Log section in AFP of "Server Admin" and I saw lines like:
    IP 192.168.0.2 - - [04/Jul/2006:20:48:17 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:17 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:17 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:17 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:18 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:18 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:18 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:20:48:18 0000] "OpenFork print costs billed" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:22:40:29 0000] "OpenFork Important passwords" 0 0 0
    IP 192.168.0.2 - - [04/Jul/2006:22:40:29 0000] "OpenFork Accounts info" 0 0 0
    Now, I hadn't opened those files. However, I had opened the folder in which those files resided. Why would the AFP log show that I've done something to those files? Does the mere act of opening their folder show them in the log? Or have I been hacked?

    The information on this page suggests that log entries like those appear every time someone opens a folder.
