DST migration log file

Hi.
Does anybody know where to find the log file where you can see when and how a file got migrated after a policy run?
/Lelle

On 05/02/2014 14:46, lelle wrote:
> Does anybody know where to find the log file where you can see when and
> how a file got migrated after a policy run?
If I've understood your question correctly -
/media/nss/<volumename>/._NETWARE/<volumename>.audit.log
See http://www.novell.com/documentation/...a/bc7j22i.html
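For example, you could watch the audit log directly on the OES server, or search it for a specific file after a policy run (a sketch only; VOL1 and the file name are placeholders for your own volume and file):
# Follow DST activity as it is logged (VOL1 is a hypothetical volume name)
tail -f /media/nss/VOL1/._NETWARE/VOL1.audit.log
# Look for entries mentioning a particular migrated file
grep -i "report.doc" /media/nss/VOL1/._NETWARE/VOL1.audit.log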
HTH.
Simon
Novell Knowledge Partner

Similar Messages

  • Exchange 2010 Log files keep filling up following migration from Exchange 2003

    I am migrating from Exchange server 2003 to 2010.
    Having only moved one mailbox and set up Public Folder replication, I noticed that the logs are filling up the entire 20GB drive allocated to them, even before I have time to run my scheduled backup.
    As a temporary measure, I have enabled circular logging as a workaround.
    Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?
    Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night).
    Q/ Could there be another cause as to why the log files would grow so quickly in such a short amount of time?

    Hello,
    Remember that logs are truncated after successful backups so if you have a lot of data replicated between backups, a lot of logs will be stored on disk.
    "Q/ Whilst this is not ideal, shall I leave it like this until the Public Folders are fully replicated and all mailboxes moved over?"
    The best option is to run backups to truncate the logs. Circular logging is only appropriate in test and highly available deployments, as it can cause data loss (loss of data created between backups).
    "Q/ What risk am I exposing myself to as a result of using Circular logging (I am running full backups every night)."
    The same as mentioned above.
    "Q/ Could there be another cause as to why the log files would grow so quick in such a short amount of time?"
    Excessive log generation can be caused by bugs in Exchange or in connecting devices; for example, iPhones can cause rapid log creation in some configurations. If you are on the latest Exchange 2010 build, it shouldn't be a problem.
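    If you want to confirm that truncation is actually happening, one quick check (a sketch; assumes you run it from the Exchange Management Shell) is to look at the last backup timestamps recorded on each database:
    Get-MailboxDatabase -Status | Format-Table Name,LastFullBackup,LastIncrementalBackup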
    Hope it helps,
    Adam
    CodeTwo: Software solutions for Exchange and Office 365

  • How can I save my log files (and other files of my own) to Windows Azure?

    Hi, all,
    I am migrating an ASP.NET web app to Windows Azure. It runs in IIS.
    The web app does the following:
      1. saves log files to a virtual path (for example, $WEBAPP_ROOT_DIR/LOGFILES/)
      2. saves ViewState per session as temporary files to a virtual path (for example, $WEBAPP_ROOT_DIR/VIEWSTATE/)
      3. uploads some image files to the web server virtual path at $WEBAPP_ROOT_DIR/UPLOAD/
    So when I migrate the web app to Windows Azure, how can I do the same job in my code? It seems to require so many changes...
    Any ideas and suggestions? Many thanks!

    Hi Eric,
    If you use an Azure Website, you can save your logs and files to the Azure storage service, such as Blob or Table storage.
    If you use a VM to host your service, you can attach a data disk or use the File service on your VM and save your data there.
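    For example, pushing a log file into Blob storage from PowerShell might look like this (a sketch only; the account name, key, container, and file path are placeholders, and cmdlet names vary between Azure SDK versions):
    $ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
    New-AzureStorageContainer -Name "logfiles" -Context $ctx
    Set-AzureStorageBlobContent -File "C:\LOGFILES\app.log" -Container "logfiles" -Context $ctx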
    Regards,
    Will

  • Error while migrating repository files from Brio to Interactive reporting

    Hi,
    I am migrating Brio from 8.3.1 to Hyperion Interactive Reporting version 11.1.2.1, referring to http://docs.oracle.com/cd/E12825_01/epm.111/hs_migration.pdf
    I am facing an error when migrating repository files from Brio 8 to the Interactive Reporting version 11 environment.
    Error [[ENOENT]] Accessing path: /c2cnas/Oracle/Hyperion_Home/EPMSystem11R1/products/biplus/opencat
    When I checked my Linux server, I do not see an "opencat" directory under the /c2cnas/Oracle/Hyperion_Home/EPMSystem11R1/products/biplus path. I don't know why this directory or folder was missing after the Hyperion Interactive Reporting installation. Do we need to create this folder manually for newer versions (11.x)?
    Is anyone aware of this error? Will it impact my migration? I am thinking that the migration utility is looking to place .oce files in this directory but could not find it. Is my understanding correct?
    Please help me on this.

    Usually in 11.1.2.1 the folder will be "EPMSystem11R1\products\biplus\data\Open Catalog Extensions".
    I haven't seen an opencat folder.
    Can you check the migration logs? Also check the user's permissions on the folders.
    Thanks,
    KK

  • Where to find Reporting Services log files in SharePoint 2013 integrated mode

    I used to find the Reporting Services log files in the MSRS folder in SharePoint 2010 integrated mode, but since we migrated to SharePoint 2013 I am not able to find the logs. I wanted to check all the executed SQL statements using the logs, as we don't have
    access to SQL Server Profiler. Does anybody know the location of the logs?

    Hi there,
    Not sure you'll see the SQL statements; however, you can find information on the ULS logs here:
    http://technet.microsoft.com/en-us/library/cc627510(v=sql.105).aspx
    http://technet.microsoft.com/en-us/library/ff487871(v=sql.105).aspx
    and here:
    http://technet.microsoft.com/en-us/library/ms156500(v=sql.105).aspx
    You might want this viewer once the ULS log is capturing info.
    http://archive.msdn.microsoft.com/ULSViewer
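    As a starting point, the SharePoint 2013 ULS logs default to the "15 hive" LOGS folder (a sketch; the location can be changed in Central Administration, so verify yours first):
    Get-ChildItem "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS" -Filter *.log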
    cheers,
    Andrew
    Andrew Sears, T4G Limited, http://www.performancepointing.com

  • How to configure Log file generation

    Hi,
    I am in a migration project. Currently the OS is Unix; after the migration it is going to be Windows.
    So we need the log files currently created on Unix to be created on Windows instead.
    Can anyone suggest the relevant settings in SAP for the log files?
    Regards,
    Gijoy

    Hi Gijoy,
    Can you please reformulate your question for better understanding?
    The log location and tracing severity setup mechanism is platform independent.
    After migration there are no necessary steps to be taken; the logs will be created the same way on Windows as on Unix, under your current SAP installation folder (e.g. defaultTrace is on Unix under /usr/sap/.../j2ee/cluster/server<n>/log; on Windows this will be <DRIVE:>\usr\sap\...\j2ee\cluster\server<n>\log).
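    For example, on the Unix side you could follow the most recent default trace like this (a sketch; <SID> and <instance> are placeholders, and the trace file naming varies by release):
    tail -f $(ls -t /usr/sap/<SID>/<instance>/j2ee/cluster/server0/log/defaultTrace* | head -1)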
    I hope this answers your question.
    Best Regards,
    Ervin

  • Location of query log files in OBIEE 11g (version 11.1.1.5)

    Hi,
    I wish to know the location of the query log files in OBIEE 11g (version 11.1.1.5)?

    Hi,
    Log Files in OBIEE 11g
    Login to the URL http://server.domain:7001/em and navigate to:
    Farm_bifoundation_domain -> Business Intelligence -> coreapplication -> Diagnostics -> Log Messages
    You will find the available files:
    Presentation Services Log
    Server Log
    Scheduler Log
    JavaHost Log
    Cluster Controller Log
    Action Services Log
    Security Services Log
    Administrator Services Log
    However, you can also review them directly on the hard disk.
    The log files for OBIEE components are under <OBIEE_HOME>/instances/instance1/diagnostics/logs.
    Specific log files and their locations are listed below:
    Installation log: <OBIEE_HOME>/logs
    nqquery log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    nqserver log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_NQSAdminTool log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_NQSUDMLExec log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    servername_obieerpdmigrateutil log (migration log): <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
    sawlog0 log (Presentation): <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
    jh log (JavaHost): <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_obijh
    webcatupgrade log (Web Catalog upgrade): <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
    nqscheduler log (Agents): <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1
    nqcluster log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1
    ODBC log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIODBCComponent/coreapplication_obips1
    opmn log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    debug log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    logquery log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    service log: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    opmn out: <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
    Upgrade Assistant log: <OBIEE_HOME>/Oracle_BI1/upgrade/logs
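    For example, to follow query activity directly on the server (a sketch; the instance and component directory names may differ in your environment):
    tail -f <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqquery.log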
    Regards
    MuRam

  • Robocopy Log File - Skipped files - Interpreting the Log file

    Hey all,
    I am migrating our main file server, which contains approximately 8TB of data, a few large folders at a time. The folder below is about 1.2TB. Looking at the log file (which is over 330MB) I can see it skipped a large number of files;
    however, I haven't found text in the file specifying what was skipped. Any idea what I should search for?
    I used the following Robocopy command to transfer the data:
    robocopy E:\DATA Z:\DATA /MIR /SEC /W:5 /R:3 /LOG:"Z:\Log\data\log.txt"
    The final log output is:
                    Total    Copied   Skipped  Mismatch    FAILED    Extras
         Dirs :    141093    134629      6464         0         0         0
        Files :   1498053   1310982    160208         0     26863       231
    Bytes : 2024.244 g 1894.768 g  117.468 g         0  12.007 g  505.38 m
    Times :   0:00:00  18:15:41                       0:01:00  -18:-16:-41
        Speed :            30946657 Bytes/sec.
        Speed :            1770.781 MegaBytes/min.
        Ended : Thu Jul 03 04:05:33 2014
    I assume some are files that are in use, but others may be permission issues. Does the log file detail why a file was not copied?
    TIA
    Carl

    Hi.
    Files that are skipped are files that already exist. Files that are open, have permission problems, etc. will be listed under failed. As Noah said, use /V to see which files were skipped. From robocopy /?:
    :: Logging Options :
    /V :: produce Verbose output, showing skipped files.
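    Applied to Carl's original command, that would look like the following (a sketch; the separate verbose log file name is just a suggestion so the original log is kept intact):
    robocopy E:\DATA Z:\DATA /MIR /SEC /W:5 /R:3 /V /LOG:"Z:\Log\data\log_verbose.txt"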
    Oscar Virot

  • BE5k to BE6k Migration tar file fails

    Hi, we are upgrading from a CUCM BE5k to a BE6k (CUCM 9.1.2).
    Our existing server is an HP MCS server, and we purchased a Cisco UCS C220 server. We are following the guide below, which outlines creating the new server and then exporting/importing the TAR files from the old server into the new CUCM. Because this is a new server, I'm bringing it up in parallel and then I will cut over.
    When I run the import, it fails. Attached is the job scheduler failure. The log files have some info; however, how much do I need to manipulate this info, if I can? Has anyone run into this issue with the TAR files failing?
    Thanks,
    Mike

    There is no migration path for CUCM off of a BE5k; you must build the new CUCM cluster from scratch. If you're patient, you can use BAT Import/Export to carry over most of the data; however, it'll require a lot of massaging in Excel, as the columns will differ between the export from your older version and the import on 9.1(2). Note that you can carry CXN mailboxes over using COBRAS, but you must be on CXN 7.x or newer.
    If you have installed Cisco Unified Communications Manager Business Edition 5000 on an MCS-7828 server, and you decide that you need to migrate to separate Cisco Unified Communications Manager and Cisco Unity Connection environments for increased scalability and capacity, you can reuse that MCS-7828 server to run Cisco Unified Communications Manager in a MCS-7825 cluster. Although you can reuse the server, you must reenter your data on the server manually. You must also obtain another server to run Cisco Unity Connection.
    http://www.cisco.com/en/US/docs/voice_ip_comm/cucmbe/install/8_6_1/install/cmins861.html#wp795012

  • How to clean redo log file?

    Hi, guys:
    I need to migrate data from external tables to normal tables, but the connection always freezes halfway through executing the script. The error message is ORA-00257; it looks like the redo log file is full. I tried to delete redo log files with RMAN, but I got this error message:
    delete archivelog until time 'trunc(sysdate)';
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=43 device type=DISK
    specification does not match any archived log in the repository
    Any suggestion would be appreciated.
    Sam

    lxiscas wrote:
    Hi, guys:
    I need to migrate data from external tables to normal tables, but the connection always freezes halfway through executing the script. The error message is ORA-00257; it looks like the redo log file is full. I tried to delete redo log files with RMAN, but I got this error message:
    Sounds like you are confusing (online) redo logs with "archivelogs" (archived redo logs).
    You don't delete redo logs at all. You can delete archivelogs with RMAN, which is what your command is trying to do. Do you ever back up the archivelogs? A proper backup/recovery policy would back up the archivelogs on at least a daily basis (if not more often) and delete them after they are backed up. That will keep the archivelog destination from filling up in all but the most extreme circumstances.
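    For example, a common pattern (a sketch; it assumes a valid backup destination is configured) is to back up the archived logs and delete them in a single step:
    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;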

  • Database Log File Size

    We are in the process of migrating disabled users to a new Exchange 2013 database on secondary storage. I've noticed that the Logs folder is abnormally large (182 GB). I was wondering if there was a way to clean this up?
    We have other Exchange 2013 databases whose Logs folder is much smaller in comparison (~200 MB). How can I go about cleaning these log files?

    Hi,
    Have you tried the above suggestion to do a full backup, and checked the result?
    Is there any update on your issue?
    Best regards,
    Amy Wang
    TechNet Community Support
    Sorry to reply to this so late, but I wanted to provide somewhat of an update. I am running a DPM job on the database now; we'll see if it resolves the issue. When I ran the DPM job originally it had an error, which I believe was because the drive was at capacity.
    I've since expanded the drive and kicked off the DPM job again.

  • Empty/underutilized log files not removed

    I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
    Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
    When running the application, I noticed that the first few dozen log files were removed, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
    I have run the DbSpace utility on the environment and found the following result:
    File       Size (KB)   % Used
    00000033       97656        0
    00000034       97655        0
    00000035       97656        0
    00000036       97656        0
    00000037       97656        0
    00000038       97655        2
    00000039       97656        0
    0000003a       97656        0
    0000003b       97655        0
    0000003c       97655        0
    0000003d       97655        0
    0000003e       97655        0
    0000003f       97656        0
    00000040       97655        0
    00000041       97656        0
    00000042       97656        0
    00000043       97656        0
    00000044       97655        0
    00000045       97655        0
    00000046       97656        0
    This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
    2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
    2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
    2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
    2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
    2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
    2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
    2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
    2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
    Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
    2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
    2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
    2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
    2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
    2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
    2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
    2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
    2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
    2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
    2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
    2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
    2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
    2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
    2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
    I could not see anything fundamentally different between the log messages from when log files were removed and when they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
    Any help would be appreciated (JE version: 3.3.75).

    Hi Bertold,
    My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
    A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and were unable to be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Record locks are held by transactions and cursors.
    If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier; the latter is the number that are still locked and cannot be migrated.
    --mark

  • Scrolling SSL message in PS Unix Process Scheduler PSAESRV log file

    This message is being spammed to the AESRV log file, and it's doing its best to fill up the partition. We recently migrated to new hardware, and it looks like the problem started on day one there. There are no other known issues at the moment. Any idea what's causing this, how to stop it, and/or how to put a muzzle on it?
    Here's the line:
    PSAESRV.15511 (2) \[08/24/09 00:06:06 <USERNAME>@<SERVER> RunAeAsync2\](2) Invalid or No 'CA' entry in SSL Config File
    $ grep "entry in SSL Config File" AESRV_0824.LOG|wc -l
    11799608
    Environment is HR89 8.47.12
    SunOS 5.9

    Where do you upload the root certificates to the DB? I now think this issue may be arising out of some custom encryption that was done a while back and has since progressively gotten worse with higher utilization. Any idea where the root certificate would be found for a custom encryption project?

  • The LOG file \work\dev_jcontrol is not present

    The LOG file \work\dev_jcontrol is not present, even though I have restarted the server:
    stopsap
    startsap <j2ee_instanse>
    Any idea?

    Hi,
    The cluster ID is just a combination of the parameters below.
    In our case, my source system (ABC) was recently refreshed from another system (XYZ), so while installing the target system (DEF) I changed the source system details from ABC to XYZ in the file below and retried the SAPinst screen. The system copy then completed successfully.
    Open the file <installation directory>/jmt/cluster_id_switch.properties and edit the lines:
    src.ci.sid=
    src.ci.instance.number=
    src.ci.instance.name=
    src.ci.host=
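    For illustration only, with hypothetical values (substitute your actual source system details):
    src.ci.sid=XYZ
    src.ci.instance.number=00
    src.ci.instance.name=DVEBMGS00
    src.ci.host=xyzhost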
    If your source system was not refreshed recently, you may try the functional host name or the OS host name, etc., for the above parameters.
    If this does not work, check the details of SAP Note 966752 ("Java system copy problems with the Java Migration Toolkit"), which says almost the same thing, though I found its statements about the box number a bit confusing and contradictory.
    Cheers !!!
    Ashish

  • We have an Exchange 2013 server and the Mailbox Database folder is filling up with .log files.

    We are migrating from Exchange 2010 to Exchange 2013. We have installed Exchange 2013, but it only has a couple of mailboxes on it; all the other mailboxes are still on the Exchange 2010 server.
    I have run a Windows Backup of the Exchange 2013 server, but I am still seeing a ton of log files in the mailbox folder.
    Also, the database file is only about 1.1 GB, but the backup is now 40 GB.
    Is there something that can be done to truncate these logs and make the backup smaller?

    Hi,
    1. Did the full backup complete successfully?
    2. What is the status from the command below? Were the mailbox database headers updated with the latest date and time?
    Get-MailboxDatabase -Status | ft name,*full* -au
    3. Check the application event log for event ID 2046, which should state whether log truncation for the mailbox databases has been initiated.
    4. Before initiating the backup, make sure the Exchange writer is not in an error state:
    vssadmin list writers
    If it is in an error state, please restart the Microsoft Exchange Replication service and check the Exchange writer status again using the above command.
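    For example (a sketch; MSExchangeRepl is the service name as registered on Exchange 2013):
    Restart-Service MSExchangeRepl
    vssadmin list writers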
    Thanks & Regards S.Nithyanandham
