WLDF 9.2 diagnostic archive retirement

Hi all,
Is there a way to retire diagnostic data from the WebLogic 9.2 WLDF archive (WLS_DIAGNOSTICS000000.DAT)? I am planning to set up a diagnostic module on our server and am wondering whether this diagnostic data can be retired. I have seen this feature in WebLogic 10 ("Retiring Data from the Archives") and wonder whether there is an equivalent in WebLogic 9.2. Eventually this archive will grow, and we need to manage that growth.
Regards.

Hello,
WLS 9.x did not provide a configuration-based data retirement feature. However, it did provide operations on runtime MBeans that can be used to delete selected records, so you can periodically execute a WLST script to remove older data. For example:
import sys
from java.lang import System   # explicit import, in case it is not preloaded by WLST

# Usage:
# java weblogic.WLST delete_old_data.py url username password keep_days
# e.g., the following will delete data older than 2 days from the Harvester and
# Events archives:
# java weblogic.WLST delete_old_data.py t3://localhost:7001 weblogic weblogic 2

def getParam(pos, default):
    value = None
    try:
        value = sys.argv[pos]
    except:
        value = default
    return value

url = getParam(1, "t3://localhost:7001")
user = getParam(2, "weblogic")
password = getParam(3, "weblogic")
days = int(getParam(4, 1))   # command-line arguments arrive as strings

try:
    connect(user, password, url)
    now = System.currentTimeMillis()
    bound = now - days * 24 * 3600 * 1000   # cutoff timestamp in milliseconds
    serverRuntime()
    cd('/WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/WLDFDataAccessRuntimes/HarvestedDataArchive')
    print "Deleting records older than", days, "day(s) from Harvester data archive"
    deleted = cmo.deleteDataRecords(0L, bound, "")
    print "Deleted", deleted, "record(s) from Harvester data archive"
    cd('/WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/WLDFDataAccessRuntimes/EventsDataArchive')
    print "Deleting records older than", days, "day(s) from Events data archive"
    deleted = cmo.deleteDataRecords(0L, bound, "")
    print "Deleted", deleted, "record(s) from Events data archive"
except:
    print "Exception while deleting records:", sys.exc_info()[0], sys.exc_info()[1]
Hope this helps.
/Raj

Similar Messages

  • Disable or reduce size of diagnostic archive?

    Hi,
    I found that the WLDF is used by default (I have not configured anything explicitly) and might consume a significant amount of disk space in the default location relative to the domain directory.
    Is it possible to disable the WLDF or at least to reduce the size of the diagnostic data?
    Cheers,
    Thorsten

    Hi Thorsten,
    By default, you have to deploy a WLDF configuration to a server, and be actively harvesting RuntimeMBean metrics and/or generating Instrumentation events in order for the archive to consume any disk space. If you are not doing any of these things, then the archive won't grow in size.
    If you do begin using the archive, you can run a WLST script to remove old data. This can be run as part of a cron job, for example, on the machine running a server configured with WLDF active. Below is an example of such a script.
    Hope this helps,
    Mike Cico
    import sys
    from java.lang import System   # explicit import, in case it is not preloaded by WLST

    # Usage:
    # java weblogic.WLST delete_old_data.py url username password keep_days
    # e.g., the following will delete data older than 2 days from the Harvester and
    # Events archives:
    # java weblogic.WLST delete_old_data.py t3://localhost:7001 weblogic weblogic 2

    def getParam(pos, default):
        value = None
        try:
            value = sys.argv[pos]
        except:
            value = default
        return value

    url = getParam(1, "t3://localhost:7001")
    user = getParam(2, "weblogic")
    password = getParam(3, "weblogic")
    days = int(getParam(4, 1))   # command-line arguments arrive as strings

    try:
        connect(user, password, url)
        now = System.currentTimeMillis()
        bound = now - days * 24 * 3600 * 1000   # cutoff timestamp in milliseconds
        serverRuntime()
        cd('/WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/WLDFDataAccessRuntimes/HarvestedDataArchive')
        print "Deleting records older than", days, "day(s) from Harvester data archive"
        deleted = cmo.deleteDataRecords(0L, bound, "")
        print "Deleted", deleted, "record(s) from Harvester data archive"
        cd('/WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/WLDFDataAccessRuntimes/EventsDataArchive')
        print "Deleting records older than", days, "day(s) from Events data archive"
        deleted = cmo.deleteDataRecords(0L, bound, "")
        print "Deleted", deleted, "record(s) from Events data archive"
    except:
        print "Exception while deleting records:", sys.exc_info()[0], sys.exc_info()[1]

  • Looking to "Archive" unused Accounts

    Hello,
    We have approximately 4,000 Accounts that we don't use. There are no activities/tasks associated with them, so we'd like to archive/retire them. I thought about identifying these accounts, exporting them to Excel/Access, and keeping them there so that if they're ever needed we can refer to them.
    Does anybody know of a better way to archive accounts?
    Thanks!

    I would keep them in the system, as removing them may affect reporting. If you no longer wish for them to appear within search results, you could assign them to an "archive" book which only selected users can access. If you are worried about space, I think the rule is around 100 MB per full license.

  • WLS_DIAGNOSTICS0~.DAT files occupying more space

    Hi,
    WLS_DIAGNOSTICS0~.DAT files are occupying a lot of space, and because of this the server disk is filling up quickly. Please let me know what the cause is.
    This is WebLogic 9.1 with Oracle 9i.
    System = SunOS
    Release = 5.10
    KernelID = Generic_118833-36
    Machine = sun4v
    BusType = <unknown>
    Serial = <unknown>
    Users = <unknown>
    OEM# = 0
    Origin# = 1
    NumCPU = 32
    Thanks
    Hanuman
    Edited by: user9166997 on 11-Apr-2011 06:48
    Edited by: user9166997 on 11-Apr-2011 06:54

    Hi Hanuman,
    Try using the elements below; it could be that your file is growing very fast within the check interval.
    <store-size-check-period> sets the interval at which <preferred-store-size-limit> is checked to see whether the size has been exceeded. For example:
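    A rough WLST (online) sketch of enabling size-based retirement on a WLS 10.x server; the MBean path and the attribute names (DataRetirementEnabled, PreferredStoreSizeLimit, StoreSizeCheckPeriod, mirroring the config.xml elements) are assumptions to verify against your release:
    # Connect and edit the server's diagnostic configuration (credentials are placeholders).
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    edit()
    startEdit()
    # ServerDiagnosticConfig holds the diagnostic archive settings for a server.
    cd('/Servers/AdminServer/ServerDiagnosticConfig/AdminServer')
    cmo.setDataRetirementEnabled(true)
    cmo.setPreferredStoreSizeLimit(100)   # preferred archive size, assumed to be in MB
    cmo.setStoreSizeCheckPeriod(1)        # check interval, assumed to be in hours
    save()
    activate()
    disconnect()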
    For more information, have a look at the link below, in which René van Wijk has explained it:
    [WLS - 10.3] - Issue with Diagnostic Archive.
    Topic: Retiring Data from the Archives
    http://download.oracle.com/docs/cd/E11035_01/wls100/wldf_configuring/config_diag_archives.html#wp1069508
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic
    Come, Join Us and Experience The Magic…

  • WebLogic does not start due to issues with WLS_DIAGNOSTICS000000.DAT

    My customer is getting the following exception every time he tries to restart his WebLogic services:
    ####<Jun 16, 2009 9:23:46 AM MST> <Critical> <WebLogicServer> <hqpsfindev> <PIA> <main> <<WLS Kernel>> <> <> <1245169426020> <BEA-000362> <Server failed. Reason:
    There are 1 nested errors:
    weblogic.diagnostics.lifecycle.DiagnosticComponentLifecycleException: weblogic.store.PersistentStoreException: [Store:280020]There was an error while reading from the log file
         at weblogic.diagnostics.lifecycle.ArchiveLifecycleImpl.initialize(ArchiveLifecycleImpl.java:44)
         at weblogic.diagnostics.lifecycle.DiagnosticFoundationService.start(DiagnosticFoundationService.java:107)
         at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
    Caused by: weblogic.store.PersistentStoreException: [Store:280020]There was an error while reading from the log file
         at weblogic.store.io.file.Heap.getNextRecoveryFile(Heap.java:794)
         at weblogic.store.io.file.Heap.open(Heap.java:181)
         at weblogic.store.io.file.FileStoreIO.open(FileStoreIO.java:85)
         at weblogic.store.internal.PersistentStoreImpl.open(PersistentStoreImpl.java:353)
         at weblogic.store.PersistentStoreManager.createFileStore(PersistentStoreManager.java:202)
         at weblogic.diagnostics.archive.DiagnosticStoreRepository.getStore(DiagnosticStoreRepository.java:61)
         at weblogic.diagnostics.lifecycle.ArchiveLifecycleImpl.initialize(ArchiveLifecycleImpl.java:42)
         ... 4 more
    Caused by: java.io.IOException: Error reading from file, Reached the end of the file., errno=38
         at weblogic.store.io.file.direct.DirectIONative.read(Native Method)
         at weblogic.store.io.file.direct.DirectFileChannel.read(DirectFileChannel.java:133)
         at weblogic.store.io.file.StoreFile.read(StoreFile.java:281)
         at weblogic.store.io.file.Heap.getNextRecoveryFile(Heap.java:792)
         ... 10 more
    He has narrowed this down to the following: He deletes
    ...\domain\servers\WebLogicAdmin (or managed server)\data\store\default\XXXXX.dat (WLS_DIAGNOSTICS000000.DAT). He also deletes the .dat file under diagnostics and then the service starts without issue.
    He can't be doing this every time he bounces his server.
    Does this call stack indicate lack of disk space?
    Caused by: java.io.IOException: Error reading from file, Reached the end of the file., errno=38
    Is there a way to disable diagnostics so these files don't get generated?
    Thanks!

    Diagnostics is normally disabled by default, but I recall that there was a bug a long while back (since fixed) that left it on by default. The following thread might help shed some light on the issue:
    Disable or reduce size of diagnostic archive?
    Regardless, the unexpected low-level exception could be an indication of a bug in the WebLogic file store code. The only time I'd expect to see this is if somehow there are two processes that are both using the same file at the same time (on most operating systems the store locks its files as a safety precaution to help prevent customers from making this mistake).
    Tom

  • Using of universallogcollector.pl with -afterdate throws error

    Reference:  11gR2 GI/ASM/RAC Universal Collection Guide (Doc ID 1485042.1)
    The 11gR2 Universal Collection is an expanded diagcollection.pl that collects GI, ASM and database (RAC) diagnostics (log files, trace files, etc.); the goal is to reduce back-and-forth information requests between Oracle Support and customers.
    Once it finishes, it generates files in the current directory. Only one file needs to be uploaded per node; the name of the file is highlighted on screen and defaults to
    allData_<nodename>_<timestamp>.tar.gz
    When I tried the command below, I got the following errors:
      /exports/universallogcollector/universallogcollector.pl  --orahome --afterdate '09/07/2013'
    root@dbsrvr1/exports/universallogcollector/test_run/run2>/exports/universallogcollector/universallogcollector.pl --collect --orahome --afterdate '09/07/2013'
    Production Copyright 2004, 2010, Oracle. All rights reserved.
    Universal Log Collector tool Version 1.4
    The following diagnostic archives will be created in the local directory if it's not excluded.
    etcData_dbsrvr1_20130911_0035.tar.gz -> oraInst.loc, oratab and /etc/oracle or /var/opt/oracle(platform dependent).
    crshomeData_dbsrvr1_20130911_0035.tar.gz -> logs, traces and cores from CRS Home.
                                                 Note: core files will be packaged only with the --core option.
    ocrData_dbsrvr1_20130911_0035.tar.gz -> ocrdump, ocrcheck etc.
    chmosData_dbsrvr1_20130911_0035.tar.gz -> Cluster Health Monitor (OS) data.
    coreData_dbsrvr1_20130911_0035.tar.gz -> contents of CRS core files in text format.
    osData_dbsrvr1_20130911_0035.tar.gz -> logs from Operating System.
    baseData_dbsrvr1_20130911_0035.tar.gz -> logs from CRS Base & Oracle Base(s).
    invtData_dbsrvr1_20130911_0035.tar.gz -> logs from Oracle installation log.
    orahomeData_dbsrvr1_20130911_0035.tar.gz -> logs from Oracle Home(s) log.
    sysconfig_dbsrvr1_20130911_0035.txt -> system config info for cpu, memory, swap, network and disks.
    crsresStatus_dbsrvr1_20130911_0035.txt -> outputs from "crsctl stat res -t -f [-init]"
    vendorData_dbsrvr1_20130911_0035.tar.gz -> vendor clusterware logs if present.
    acfsData_dbsrvr1_20130911_0035.tar.gz -> logs from acfs log.
    rdbmsData_dbsrvr1_20130911_0035.tar.gz -> RDBMS traces and alert logs.
    allData_dbsrvr1_20130911_0035.tar.gz -> a summary tarball for all above logs.
    Collecting CRS home data
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    tar: a: unknown option
    Collecting information from core dump files
    No corefiles found
    Collecting OCR data
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    gzip: ocrData_dbsrvr1_20130911_0035.tar: No such file or directory
    Collecting Etc Oralce data
    cp: /var/opt/oracle/oprocd/check/port: Operation not supported on socket
    cp: /var/opt/oracle/oprocd/stop/port: Operation not supported on socket
    cp: /var/opt/oracle/oprocd/fatal/port: Operation not supported on socket
    Collecting CRS base & Oracle base(s) data
    CRS base not specified or invalid, will try to get correct CRS base
    Get valid CRS base "/app/oracle" and will collect it.
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle home data from "/app/crs/product/11g"
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle home data from "/app/oracle/product/11g/asm_1"
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle home data from "/app/oracle/product/11g/db_1"
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle home data from "/app/oracle/product/em/agent11g"
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle home data from "/app/oracle/product/11.2.0.3/db"
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting OS logs
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting Oracle installation logs
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting vendor cluster logs
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Collecting sysconfig data
    Collecting CRS resource status
    Collecting RDBMS traces and alert logs
    tar: a: unknown option
    tar: usage  tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
    Done
    #########Universal Log Collection Finished.#######
        Please upload ONLY allData_dbsrvr1_20130911_0035.tar.gz to Oracle Support!
    root@dbsrvr1/exports/universallogcollector/test_run/run2>

    Could you explain what you want to fix?
    I think you should read StockAccess's javadoc.
    It should describe how you can use it.

  • SOA Server Diagnostic log files Archive Directory

    Hello all,
    We know that the SOA server logs are under servers/soa_server1/logs/.
    As the file grows (I mean soa_server1-diagnostic.log), the older files should be kept somewhere in an archive, right? So where should I set the directory for the archive files, and how do I set the size of the archive files? Can anyone tell me how to do this? A pointer to documentation on this would be very helpful.
    Thanks,
    N

    Hi Naresh,
    If I understand correctly, you want to rotate the log files based on size in SOA 11g.
    If so, log into the EM console. Right-click soa-infra -> Logs -> Log Configuration.
    Select the Log Files tab, choose odl-handler and edit the configuration. You can set the rotation of the log files based on size as well as time; a scripted alternative is sketched below.
    The log path would be $user_projects/domains/domain_name/servers/soa_server1/logs/soa_server1-diagnostic.log.
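    As a hedged alternative to the console steps, something along these lines can be done from WLST using the Fusion Middleware logging commands; the configureLogHandler command and its maxFileSize/maxLogSize parameters (and their byte units) are my assumptions from the ODL WLST commands, so verify with help('configureLogHandler') before relying on this:
    # Connect to the server first (credentials and URL are placeholders).
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    # Rotate the diagnostic log at ~10 MB per file and keep ~100 MB in total;
    # sizes are given in bytes here, which is an assumption to double-check.
    configureLogHandler(name='odl-handler',
                        maxFileSize=10485760,
                        maxLogSize=104857600)
    disconnect()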
    Hope this helps you.

  • How to retire/archive/status a released workbench transport for an OSS note

    My Question:  Is there a best practice for the "retiring" or "archiving" or "statusing" of released workbench transports for OSS notes that have been imported into the quality environment, but should never be imported into the production environment? 
    For example an OSS note is applied and imported into QS1, but then a CRT Hotpack is applied as an entire patch upgrade, so that the prior individual transport is no longer needed.  Best Business practice indicates to not delete a released transport, but unless there is a way to permanently segregate them from the remaining legitimately viable transports, there is always a risk that the "stale" code could inadvertently slip through thus overwriting good code.

    Hello.
    Well, you have confirmed what I suspected when I read the descriptions - wrong forum. 
    I chuckled and couldn't stop a broad smile from creeping across my face when I read your response though:  "the old BASIS type questions".  I knew we were a bit behind the times here; you have definitely confirmed this suspicion.
    Thank you again.  I'll try another search of the OSS notes or put in a customer message just for the heck of it.

  • WLS 10.3.0 - Scheduled Custom data retirement policy not running

    I am trying to use a custom retirement policy and scheduling it to run every hour. When I run it manually from the console it works (I can see a log message about the policy being run), but the scheduled run never happens.
    I have another policy, for HarvestedDataArchive, scheduled with the same parameters, and that one is running as expected.
    Here is the excerpt from the config.xml file:
    <server-diagnostic-config>
      <diagnostic-store-dir>data/store/diagnostics</diagnostic-store-dir>
      <diagnostic-data-archive-type>FileStoreArchive</diagnostic-data-archive-type>
      <data-retirement-enabled>true</data-retirement-enabled>
      <preferred-store-size-limit>100</preferred-store-size-limit>
      <store-size-check-period>1</store-size-check-period>
      <wldf-data-retirement-by-age>
        <name>DataRetirementPolicy-1</name>
        <enabled>true</enabled>
        <archive-name>HarvestedDataArchive</archive-name>
        <retirement-time>0</retirement-time>
        <retirement-period>1</retirement-period>
        <retirement-age>744</retirement-age>
      </wldf-data-retirement-by-age>
      <wldf-data-retirement-by-age>
        <name>DrpOsbAlert</name>
        <enabled>true</enabled>
        <archive-name>CUSTOM/com.bea.wli.monitoring.alert</archive-name>
        <retirement-time>0</retirement-time>
        <retirement-period>1</retirement-period>
        <retirement-age>744</retirement-age>
      </wldf-data-retirement-by-age>
    </server-diagnostic-config>
    Are there any known issues with custom policies? Am I missing something in the configuration?
    Thanks a lot
    Juan

  • WLDF - Instrumentation for EJB call statistics

    Hello,
    I'm new to WebLogic and I'm looking for statistics concerning my EJBs. I'm using WebLogic 10.3.
    With JMX I have only found data concerning the EJB pool size, but no statistics (like execution time).
    Thus, I'm looking at instrumentation using WLDF to see whether it is possible to get the execution time of an EJB method.
    First, I tried to instrument a method; I used the following weblogic-diagnostic.xml file:
    <wldf-resource xmlns="http://www.bea.com/ns/weblogic/90/diagnostics">
        <instrumentation>
            <enabled>true</enabled>
            <include>com.gemalto.*</include>
            <wldf-instrumentation-monitor>
                <name>ConfigurationManagerBean_Monitor</name>
                <enabled>true</enabled>
                <action>DisplayArgumentsAction</action>
                <location-type>before</location-type>
                <pointcut>execution(public * com.xxx.* get*(...))</pointcut>
            </wldf-instrumentation-monitor>
        </instrumentation>
    </wldf-resource>
    When I put this file into the META-INF of my EAR, the deployment is OK, but I can't see anything in the WLDF console extension.
    Could you please explain a little how to configure my instrumentation correctly?
    Thank you a lot.
    C.

    Regarding "When I put this file into the META-INF of my EAR, the deployment is OK, but I can't see anything in the WLDF console extension": are there events in the WLDF Archive for the deployed monitor? Have you enabled Instrumentation through a WLDF System Resource targeted to the server?
    In order to view instrumentation data through the console extension,
    - the WLDF DyeInjection monitor needs to be deployed through the WLDF SystemResource at server scope as well.
    - the application monitors must be of the "Around" type with the TraceElapsedTimeAction assigned to them
    Then the console extension can build a call-tree of known requests that have passed through the server, based on each request's Diagnostic Context ID.
    If you truly want to view information from the DisplayArgumentsAction, you will need to view the data stored in the WLDF Archive using the WLS Console, or WLST. In the Console, you can view the data by navigating to Diagnostic Modules -> Logs and selecting the EventsDataArchive "log". On the resulting page you can customize your views (the default view only shows you the last 5 minutes of data in the archive, I believe).
    Using WLST you can use the exportDiagnosticData (offline) or exportDiagnosticDataFromServer (online) functions. See the WLST help on these functions for details on how to use them.
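    As a rough illustration of the online export (the command name comes from the WLST diagnostics commands; the query string, file name and credentials below are placeholders, so check help('exportDiagnosticDataFromServer') for the exact parameters in your release):
    # Connect to the running server and export the Events archive to an XML file.
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    # logicalName selects the archive; an empty query exports all records.
    exportDiagnosticDataFromServer(logicalName='EventsDataArchive',
                                   query='',
                                   exportFileName='events_export.xml')
    disconnect()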
    Mike

  • Weblogic 9.0  WLDF Instrumentation

    I am trying to understand the WLDF Instrumentation capabilities by looking at the MedRecServer --> MedRecWLDF --> Instrumentation --> DyeInjection configuration.
    It looks like the DyeInjection monitor sets the DiagnosticContext when a request enters the WebLogic server, with the following properties:
    ADDR1=127.0.0.1
    USER1=[email protected]
    I have executed a request with the above properties in the medrec application.
    What I am trying to find out is: how do I access the diagnostic context/data/information resulting from the execution of the DyeInjection monitor?
    Thank you,
    -Jayesh

    Hi Jayesh,
    Instrumentation ("event") data is stored in the WLDF archive, in a binary store. This data can be accessed and viewed via the WLS Console, via the WLST "exportDiagnosticData" command, and programmatically via the WLDF Accessor APIs.
    To view data via the console, log into the console and choose
    Diagnostics -> Log Files
    select the "EventsDataArchive" radio button, and click the "View" button. You can also specify custom query parameters via the "Customize this table" link.
    See
    http://e-docs.bea.com/wls/docs90/wldf_configuring/access_diag_data.html#1099608
    for info on accessing WLDF data,
    http://e-docs.bea.com/wls/docs90/wldf_configuring/config_prog_intro.html#1043185
    for an introduction on programming using the WLDF APIs, and
    http://e-docs.bea.com/wls/docs90/wldf_configuring/appendix_query.html#1043050
    for information on the WLDF query language.
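    A rough offline sketch using the exportDiagnosticData command mentioned above (run it with the server stopped; the script name, storeDir path and parameter names here are placeholders/assumptions, so check the WLST help for your release):
    # java weblogic.WLST export_events.py   (hypothetical script name, offline WLST)
    # Reads the diagnostic file store directly and writes the records to XML.
    exportDiagnosticData(logicalName='EventsDataArchive',
                         storeDir='mydomain/servers/MedRecServer/data/store/diagnostics',
                         exportFileName='events.xml')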
    Regards,
    Mike Cico

  • Time Machine no longer backs up after archive & install

    I have just had something mysterious happen to my Macbook - it spontaneously shut down, then wouldn't restart past a gray screen with Apple logo. After trying every troubleshooting page I could find, I ended up doing an archive and (re)install of Leopard on my hard drive. (btw, I ran the Disk Utility hard drive diagnostic, and it said the HD is fine, though I'm skeptical because it's making new noises I've never heard before.)
    I finished that install, restarted, everything seemed fine, and I re-connected the external hard drive that had been used for my Time Machine backups before. Now, every time it attempts to back up, it's stuck on "preparing" for 30 minutes, then starts copying SLOWLY and the progress bar says "6 KB of 10 GB." Well, I have a lot more than 10 GB to back up. Then the process halts and I get an error message: "Unable to complete backup. An error occurred while copying files to the backup volume." And I can't get any more specific information about the error - can't find a log file or anything.
    I know that there's a troubleshooting article about Time Machine issues with only backing up 10 GB, but mine doesn't even get that far, and besides, this is not my first Time Machine backup - I have no other external drive big enough to transfer my older data to while I reformat this drive as the support article suggests.
    I was thinking that perhaps a system setting got messed up somehow and that maybe I should just forget this archive & install thing I did and instead do a fresh erase & install on my Macbook hard drive and then restore from Time Machine's old backup (a day or so before my Macbook started acting weird). But 1. would that make a difference? And 2. if Leopard gives me an error while backing up in Time Machine, what if I do the erase & install and I get an error trying to restore from Time Machine? That would be a nightmare.
    My Macbook has always been so reliable - I'm very confused about what's going on! Any help would be appreciated.

    Hmm; the external drive had no errors, according to Disk Utility.
    Here are the time machine logs from Console:
    1/13/08 12:14:47 AM com.apple.launchd[1] (com.apple.backupd[301]) Exited abnormally: Bus error
    1/13/08 1:10:28 AM com.apple.launchd[1] (com.apple.backupd[430]) Exited abnormally: Bus error
    1/13/08 2:21:23 AM com.apple.launchd[1] (com.apple.backupd[519]) Exited abnormally: Bus error
    1/13/08 4:12:57 AM com.apple.launchd[1] (com.apple.backupd[612]) Exited abnormally: Bus error
    1/13/08 5:22:59 AM com.apple.launchd[1] (com.apple.backupd[818]) Exited abnormally: Bus error
    1/13/08 6:24:00 AM com.apple.launchd[1] (com.apple.backupd[915]) Exited abnormally: Bus error
    1/13/08 7:24:52 AM com.apple.launchd[1] (com.apple.backupd[1012]) Exited abnormally: Bus error
    Also, I tried to repair permissions on my internal drive and I got the following error messages:
    Warning: SUID file "usr/libexec/load_hdi" has been modified and will not be repaired.
    Warning: SUID file "System/Library/PrivateFrameworks/DiskManagement.framework/Versions/A/Resources/DiskManagementTool" has been modified and will not be repaired.
    Warning: SUID file "System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/Resources/Locum" has been modified and will not be repaired.
    Warning: SUID file "System/Library/PrivateFrameworks/Install.framework/Versions/A/Resources/runner" has been modified and will not be repaired.
    Warning: SUID file "System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/readconfig" has been modified and will not be repaired.
    Warning: SUID file "System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/writeconfig" has been modified and will not be repaired.
    Warning: SUID file "usr/libexec/authopen" has been modified and will not be repaired.
    Warning: SUID file "System/Library/CoreServices/Finder.app/Contents/Resources/OwnerGroupTool" has been modified and will not be repaired.
    Warning: SUID file "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent" has been modified and will not be repaired.

  • ORA-00339: archived log does not contain any redo

    Hi All,
    recently we faced an 'ORA-00339: archived log does not contain any redo' issue on the standby side.
    After searching on Google and on Metalink (notes 30866.1 and 7197445.8), I found that this is a known issue for 10g and earlier versions; ours is 11.2.0.3.
    Error in Alert Log :
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc:
    ORA-00339: archived log does not contain any redo
    ORA-00334: archived log: '/redolog2/redo/redolog3a.log'
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc (incident=190009):
    ORA-00600: internal error code, arguments: [kdBlkCheckError], [1], [56702], [6114], [], [], [], [], [], [], [], []
    Incident details in: /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/incident/incdir_190009/DWPRD_pr0a_48412_i190009.trc
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Slave exiting with ORA-10562 exception
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc:
    ORA-10562: Error occurred while applying redo to data block (file# 1, block# 56702)
    ORA-10564: tablespace SYSTEM
    ORA-01110: data file 1: '/oradata1/database/DATAFILES/system01.dbf'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 2
    ORA-00600: internal error code, arguments: [kdBlkCheckError], [1], [56702], [6114], [], [], [], [], [], [], [], []
    Mon Apr 15 11:34:12 2013
    Dumping diagnostic data in directory=[cdmp_20130415113412], requested by (instance=1, osid=48412 (PR0A)), summary=[incident=190009].
    Thanks

    Hi,
    "The archived log is not the correct log.
    It is a copy of a log file that has never been used for redo generation, or was an online log being prepared to be the current log."
    "Restore the correct log file."
    Can you say what the last changes on your database, or on its log files, were?
    Did you copy your '/redolog2/redo/redolog3a.log' log file from somewhere else?
    Regards
    Mahir M. Quluzade

  • Convert Multiple Outlook Emails to Multiple PDF Files (Not Portfolio or Single PDF) for Archiving?

    Hi all, I am learning how to convert emails to PDF files and there is some great functionality there!! I have not discovered how to convert multiple Outlook emails into multiple PDF files (one PDF file for each email), all at the same time (not one at a time)!! Is there a way to do this using Acrobat X?? The purpose of this is for long-term business archiving.
    When I search for an email in the archive, I do not want to pull up a portfolio containing 1000 emails or a 1000-page PDF file with multiple emails run together!!! I want to pull up individual emails containing my search terms. I have been searching for a way to archive emails, and MS Outlook .PST files are NOT the answer.
    I get a lot of business emails with large attachments and I do not file my emails in separate sub-folders (by client or job). I want to convert multiple emails (by date range) from my UNIVERSAL INBOX into multiple PDF files for long-term storage (with each email being converted into its own, separate PDF file and named with the same name as the "Re: line" of the email). This has been a HUGE problem for me....and Acrobat is sooooo close to the solution for me. Can anyone help?? If so, will attachments be converted? If not, is there a separate software program or add-in that I can buy that will do this?? I use MS Office 2010, Adobe Acrobat X Pro, Windows 7 64-bit. Thanks for your help!!

    I am a retired person and didn't realize I already have an Adobe account, so you can scrap the entire request. Thanks for the trial anyway and have a great week.
    Frederick

  • Script to delete old folders from an archive directory

    Hi, I am new to PowerShell scripting and am working on a script to purge older folders after the files in these folders are processed. I would want to set a limit of, let's say, 15 days old. Any help is greatly appreciated.
    Thanks in advance...........
    Ione

    Here's something you can play with:
    $folder = 'C:\NetworkShare\Archive'
    $cutoffDate = (Get-Date).AddDays(-15)
    Get-ChildItem -Path $folder -File |
    Where { $_.LastWriteTime -lt $cutoffDate } |
    Remove-Item -WhatIf
    This will only attempt to delete the files in your specified folder. The -File switch does need at least v3 of PowerShell.
    If you're happy with the output, remove the -WhatIf switch to actually do the deletion.
    EDIT: Ah, I see I'm slow on the submit button. See above.
