Reshipping the log files to a logical standby

Hi All
I am getting this error on the logical standby: "ORA-01291: missing logfile". The problem was a wrong configuration in the "standby_archive_dest" parameter on the logical standby database, which I have since fixed, and now I need to reship the logfiles from the primary to the logical standby. I am using ASM on both the primary and the logical standby, and I did NOT turn on Flashback Database on either side. Is there any way for me to reship the log files from the primary to the logical standby?
Thanks

user12302159 wrote:
I am getting this error in the logical standby "ORA-01291: missing logfile" [...] Is there any way for me to reship the log files from the primary to the logical standby?

If you have set FAL_CLIENT and FAL_SERVER correctly, then the missing archives should be transferred automatically to the standby database.
http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10726/appconfig.htm#g635923
Regards,
S.K.
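
For reference, a minimal sketch of the gap-resolution setup S.K. describes, run on the logical standby. This is a hedged illustration, not from the thread: the service names PRIM and STBY and the registered file name are placeholders.

-- Point FAL at the primary so missing archives are re-fetched automatically
ALTER SYSTEM SET fal_server='PRIM' SCOPE=BOTH;
ALTER SYSTEM SET fal_client='STBY' SCOPE=BOTH;

-- If an archive is already on disk but unknown to SQL Apply, register it
-- (the file name below is a placeholder):
ALTER DATABASE REGISTER LOGICAL LOGFILE '<archived_log_file_name>';

-- Check which sequences SQL Apply has received and not yet applied:
SELECT thread#, sequence#, applied FROM dba_logstdby_log ORDER BY sequence#;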

Similar Messages

  • Getting the error while transporting the log file

    Hi,
    I have a primary and physical standby setup on my PC. I want to transport the log file by using these commands on the primary database:
    alter system archive log current;
    alter system switch logfile;
    I am receiving an error that it is not able to find log sequence 15.
    Thanks
    vj0011590

    OCI requires a native library.
    Native libraries must be in the shared library path of the application (this is a feature of the OS, not Java).
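
    Returning to the original question, a hedged sketch of forcing and verifying the archive on the primary (standard commands; sequence 15 is the one from the post):

    -- Archive the current online log group (forces a switch and waits for the archive):
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- Or just switch, without waiting for the archiver:
    ALTER SYSTEM SWITCH LOGFILE;
    -- Confirm sequence 15 was archived and where it was written:
    SELECT sequence#, name, archived, applied FROM v$archived_log WHERE sequence# = 15;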

  • Will RMAN delete archive log files on a Standby server?

    Environment:
    Oracle 11.2.0.3 EE on Solaris 10.5
    I am currently NOT using an RMAN repository (coming soon).
    I have a Primary database sending log files to a Standby.
    My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
    Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
    Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
    Couldn't find the answer in the docs.
    Thanks very much!!
    -gary

    Hello again Gary;
    Sorry for the delay.
    Why is what you suggested better?
    No, it's not better, but I prefer to manage the archives myself. This method works, period.
    Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
    No. The policy is important.
    Having the Primary set to:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
    but set to NONE on the Standby
    means the worst thing that can happen is RMAN will bark when you try to delete something. (This is a good thing.)
    How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
    Should be a non-issue: the archive does not move; the REDO is transported and applied. There's SQL to monitor both (transport and apply).
    For Data Guard I would consider getting a copy of
    "Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
    Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
    Also see the Data Guard forum here: Data Guard
    Best Regards
    mseberg
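
    For reference, a minimal sketch of the policy split described above (RMAN syntax as quoted in the post; the 8-day window is from the question, and exact cleanup commands depend on your backup scripts):

    # On the primary: only archives already applied on the standby may be deleted
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    # On the standby: no automatic policy; manage archive deletion explicitly
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    # Manual cleanup consistent with an 8-day recovery window:
    RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-8';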

  • SQL Server 2012 Reorg Index Job Blew up the Log File

    We have a maintenance plan that nightly (1) runs DBCC CHECKDB on all databases, (2) reorganizes indexes on all databases, compacting large objects, and (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually the plan uses a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg-index step grew the log to almost 14,000 MB and then blew up: because the maximum file size was set to 14,000 MB, one of the ALTER INDEX commands failed when it ran out of log space. (The DBCC CHECKDB step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has a 21,000 MB data file, with reserved space at about 10 GB. This is SQL Server 2012 Standard SP2 running on Windows Server 2012 Standard.

    I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
    Do you grow them afterwards, or do you let the application waste time on that during peak hours?
    I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
    I would say this is highly dubious.
    Erland Sommarskog, SQL Server MVP, [email protected]
    Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrowth increment so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
    Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Having pre-allocated space for the log files means there will not be much free space left on the disk in case ANY database needs more space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
    What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
    These are the main reasons I don't recommend people blindly insist on keeping log files as small as possible. I know there are many people who disagree, and I'm aware of their reasons. Maybe we have just had different experiences on this subject. Maybe people just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
    And you are right about the size of the backup; I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit of backing up a database with a small log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
    Restoring the database will also be faster.
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
    is an undocumented behavior and should not be relied upon.
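
    A quick, hedged way to watch the log behavior discussed above on SQL Server 2012 (both are standard DBCC commands):

    -- Percent of each transaction log currently in use, per database:
    DBCC SQLPERF(LOGSPACE);
    -- One row per VLF in the current database's log; a very high row count
    -- suggests fragmentation from many small autogrowths:
    DBCC LOGINFO;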

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2 MB as a CSV file, and with a fresh database it creates almost 20 MB of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%), where I would expect most to be empty or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally the processing I am doing on the primary is looking up the key; if it is there, updating the entry; if not, creating one. I have been using a cursor to do this, using the putCurrent() method for existing updates and put() for new records. I have even tried using Database.delete() and a full put() in place of putCurrent(), but no difference (except it is slower).
    Please help - it is driving me nuts!

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data and leaves 20 MB in place, 3 log files near 100% used. After the second run it should update the records (which it does from the application's point of view), but I now have 40 MB across 5 log files, all near 100% usage.
    I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20 MB of log, for a total of 40 MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r, not in key order; without -r it is slower, but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • History report error: | An Exceptional Error occurred. Application exiting. Check the log file for error 5022

    Hi all
    I've got a error msg when try to generate a report using Cisco history report tool:
    Error | An Exceptional Error occurred. Application exiting.  Check the log file for error 5022
    It only happens when choose report template: ICD_Contact_Service_Queue_Activity_by_CSQ_en_us.
    The user tried the same thing on another PC and it worked fine.
    The error appears only on the user's own PC and only with this report.
    The user is running Windows 7 and does not have Crystal Reports installed.
    I tried reinstalling the software; that didn't work.
    also tried this: (https://cisco-support.hosted.jivesoftware.com/thread/2041254) - doesn't work
    then tried https://supportforums.cisco.com/docs/DOC-6209  - doesn't work
    attached the log file.
    thanks.

    wenqianyu wrote:
    From the log file it looks like you get a login window, and the error message shows up after the username/password are entered. There is an error in the log: "Error happened in comparing UCCX version and HRC version". You may need to do a clean uninstall, download the Historical Reporting client from the server, and install it again on the PC. Does this only happen on one PC, or on every PC with this application?
    Wenqian
    I have completely uninstalled the HRC and downloaded it from the server to install again -- it still doesn't work, with exactly the same error.
    This only happens on this PC; when the user tries the same thing on another PC, it works.
    So I think it is not related to the server or the account.

  • Bursting Program Ends in Error with nothing in the Log file

    Hi All
    I have an RDF that calls a bursting program in the After Report trigger. The problem I'm facing is that the bursting program completes successfully for a set of parameters, but when the program is run again for the same set of parameters it ends in error. I have checked the log file, but there is nothing in it.
    Below is my log file:
    +---------------------------------------------------------------------------+
    XML Publisher: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    XDOBURSTREP module: XML Publisher Report Bursting Program
    +---------------------------------------------------------------------------+
    Current system time is 30-SEP-2013 15:11:15
    +---------------------------------------------------------------------------+
    XML/BI Publisher Version : 5.6.3
    Request ID: 574533
    All Parameters: Dummy for Data Security=N:ReportRequestID=574532:DebugFlag=Y
    Report Req ID: 574532
    Debug Flag: Y
    Updating request description
    Updated description
    Retrieving XML request information
    Node Name:T1DEVEBSAPP1
    Preparing parameters
    null output =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574533.out
    inputfilename =/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Data XML File:/u01/oracle/CRP2/inst/apps/CRP2_t1devebsapp1/logs/appl/conc/out/o574532.out
    Set Bursting parameters..
    Temp. Directory:/tmp
    [093013_031118656][][STATEMENT] Oracle XML Parser version ::: Oracle XML Developers Kit 10.1.3.130 - Production
    [093013_031118657][][STATEMENT] setOAProperties called..
    Bursting propertes.....
    {user-variable:cp:territory=US, user-variable:cp:ReportRequestID=574532, user-variable:cp:language=en, user-variable:cp:responsibility=20678, user-variable.OA_MEDIA=http://t1devebsapp1.travelzoo.com:8001/OA_MEDIA, burstng-source=EBS, user-variable:cp:DebugFlag=Y, user-variable:cp:parent_request_id=574532, user-variable:cp:locale=en-US, user-variable:cp:user=SETUPUSER, user-variable:cp:application_short_name=XDO, user-variable:cp:request_id=574533, user-variable:cp:org_id=81, user-variable:cp:reportdescription=Travelzoo Invoice Print Selected Invoices-Child, user-variable:cp:Dummy for Data Security=N}
    Start bursting process..
    Bursting process complete..
    Generating Bursting Status Report..
    +---------------------------------------------------------------------------+
    Start of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    End of log messages from FND_FILE
    +---------------------------------------------------------------------------+
    +---------------------------------------------------------------------------+
    Executing request completion options...
    Output file size:
    602
    Finished executing request completion options.
    +---------------------------------------------------------------------------+
    Concurrent request completed
    Current system time is 30-SEP-2013 15:11:41
    +---------------------------------------------------------------------------+
    and the output says:
    "Error!! Could not deliver the output for Delivery channel:null "
    Wondering what might be the issue; any help on this is greatly appreciated.
    -Ragul

    Ram,
    Is this a custom or standard concurrent program?
    Was this working properly before? If yes, what changes have been done recently?
    Did you try to relink the concurrent program executable file and see if this helps? Also, you could enable trace/debug and submit the request again and see if more details are collected in the logs -- See (Note: 296559.1 - FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12).
    Regards,
    Hussein

  • Bursting Program Errors Out with nothing in the Log file

    Hi All
    I have an RDF that calls a bursting program in the After Report trigger; this is the same question and bursting log as in the previous thread. The output again says:
    "Error!! Could not deliver the output for Delivery channel:null "
    Wondering what might be the issue; any help on this is greatly appreciated.
    -Ragul

    Can you find any details about the error from the "View Detail" button (the same window where you check the log and output files)?
    I found the Workflow logs; I am not sure what I am looking for, but I am not seeing any errors reported. The event viewer is supposed to send an email, so do you see anything in the logs that could be related?
    Thanks,
    Hussein

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
    I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has reporting capabilities). I checked the log files, and this is what I found:
    The log file stated that there were ongoing connections from HRC to the CCX (I am sure there isn't any active login to HRC).
    || When you tried to log in, the following error was displayed because the maximum number of connections was reached for the server. We can see that a total of 5 connections have been configured. ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
    || Below we can see all 5 connections being used up. ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
    || Once the maximum number of connections was reached, it threw an error. ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX version: 9.0.2.11001-24
    Current CUCM version: 8.6.2.23900-10
    Business impact: Not critical
    Exact error message: All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
    What is the OS version of the PC you are running, and is it a physical or virtual machine that is running the HRC client?
    OS version: Windows 7 Home Premium 64-bit, and it's a physical machine.
    The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
    I wanted to know if there is a way to find the HRC sessions that are active now and terminate one, more, or all of those sessions from the server end?

    We have had this "PRX5" problem with Exchange 2013 since the RTM version.  We recently applied CU3, and it did not correct the problem.  We have seen this problem on every Exchange 2013 we manage.  They are all installations where all roles
    are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth.  None of those "solutions" made any difference whatsoever.  The occurrence of the temporary error PRX5 seems totally random. 
    About 2 out of 20 incoming mail test by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later.  However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
    simply fail.  Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • How to add a date suffix to the log file name

    In Windows, I want to run certain commands and save the output to a logfile every day. How do I add a suffix to the log file name so I can distinguish which log file is for which day?
    e.g. cmd >> logfile.date

    AZ wrote:
    In Windows, I want to run certain commands and save the output to a logfile every day. How do I add a suffix to the log file name so I can distinguish which log file is for which day?
    My best friend's name is "google"; refer to this URL: http://stackoverflow.com/questions/203090/how-to-get-current-datetime-on-windows-command-line-in-a-suitable-format-for-usi
    This is what I did:
    1) Created a dummy file in the C drive.
    2) Copy-pasted the lines below; you can play around more with the format:
    set _my_datetime=%date%_%time%
    set _my_datetime=%_my_datetime: =_%
    set _my_datetime=%_my_datetime::=%
    set _my_datetime=%_my_datetime:/=_%
    set _my_datetime=%_my_datetime:.=_%
    3) Renamed the file from DOS:
    ren some.txt dummy_file_%_my_datetime%.txt
    4) Here is the output:
    C:\>dir
    dummy_file_Mon_09_20_2010_161347_21.txt
    Most of the code I copied from the above URL; you can tweak it a little based on your requirement and format.
    Regards
    Learner

  • Is there a way to automatically collect the log files from a AirPort or TimeCapsule base station?

    Hi there,
    the headline basically says it all: Is there a way to collect the log files from a Time Capsule or AirPort from time to time? They are overwritten quite soon, but I want a complete log of all activity on my access point.
    The AirPort Utility says something about SNMP. Is this the way to go? Some kind of daemon on my Mac that retrieves and saves the logs, like, every two days?
    I want to copy the access logs; I am not interested in Time Machine backup logs.
    Thanks.

    There is a way to do this via Syslog. On the Logging & Statistics panel (within the AirPort Utility), you can point the AirPort's system logs to a Syslog "server."
    This would require a "dedicated" network client to receive the logs.
    Unfortunately, setting up a Syslog server is a bit intensive initially, but is simple to operate and maintain.
    Please check out this Apple Support Communities thread: Directing Syslog message to a file
    ref: Enable an Apple Mac OS machine as a syslog server

  • MDIS failed to generate the Log file!!!

    Hello All,
    Having an issue where MDIS is not generating the log file.
    The scenario is something like this:
    The files are getting archived and the records are not flowing into MDM.
    The Basis team says:
    2014-06-30T14:11:33.339,47083231971072,24,"[MDS=sapdpm1 Repository=REAL_ESTATE ClientSystem=MDM_REAL_ESTATE Port=Building]: Nigerian Building updates part 2 - SLKDDY.txt is empty, the file will be skipped
    But the source file had data; it was not empty (a bit strange!!).
    Also, it is not generating the log to analyze.
    Regards,
    Girish

    Hi Shenoy,
    Let me explain the scenario:
    The user uploads the file through the Portal, and the records reach MDM through FTP. The issue is that when I tried to import the file through the Import Manager it worked, and when I manually pushed the file through FileZilla FTP it also worked.
    But when we upload the file through the Portal, the file lands in the archive and this message is generated:
    2014-06-30T14:11:33.339,47083231971072,24,"[MDS=sapdpm1 Repository=REAL_ESTATE ClientSystem=MDM_REAL_ESTATE Port=Building]: Nigerian Building updates part 2 - SLKDDY.txt is empty, the file will be skipped
    But the file has data.
    Regards,
    Girish

  • In LSMW, logical file name and path while executing the Specify File step

    Hi,
    In LSMW, while executing the Specify File step, the logical file name and path are mandatory fields, but in some other LSMW objects these fields are not mandatory. I want to know whether it is possible to hide the logical file name and path fields in the Specify File step.
    thanks
    Md nisar

    Hi,
    For some transactions, the Logical File and Logical Path are mandatory while executing the Specify File step.
    In this case the converted file will be stored on the application server, according to the specified Logical File and Logical Path.
    Hope this helps you.
    Regards,
    Tirumala Reddy

  • How to create the log file on a remote system using log4j

    Hi,
    How do I create the log file on a remote system using log4j? Please give me sample code or related links. I used the code below to create the log file on a remote system, but it throws the exception below. Is there an authentication parameter for accessing the remote path? Please help.
    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;

    public class Logging {
        Logger log = null;
        FileAppender fileapp = null;

        public Logging(String classname) {
            try {
                log = Logger.getLogger(classname);
                // UNC path to the log file on the remote machine;
                // backslashes must be escaped in Java source
                String path = "\\\\192.168.0.14\\c$\\LOG\\d9\\May_08_2008_log.txt";
                fileapp = new FileAppender(new PatternLayout("%r [%t] %-5p %c %x - %m%n"), path, true);
                log.addAppender(fileapp);
                log.info("Logger initialized");
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
    java.io.FileNotFoundException: \\192.168.0.14\c$\LOG\d9\May_08_2008_log.txt (The network path was not found)
    at java.io.FileOutputStream.openAppend(Native Method)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at java.io.FileOutputStream.<init>(Unknown Source)
    at org.apache.log4j.FileAppender.setFile(FileAppender.java:290)
    at org.apache.log4j.FileAppender.<init>(FileAppender.java:109)
    at annwyn.logger.BioCapLogger.<init>(Logging.java:23)
    at sun.applet.AppletPanel.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Please help.
    Thanks in advance.
    Saravanan.K

    Sorry, the path was missing from the above post.
    path="\\192.168.0.14\c$\LOG\d9\May_08_2008_log.txt";
    please help.
    Saravanan.K

  • Unable to debug the Data Template Error in the Log file

    Hi,
    I am unable to interpret the error message in the log file. Can anybody please explain in detail where the error lies and how to solve it? The log file shows the following message:
    XDO Data Engine ver 1.0
    Resp: 50554
    Org ID : 204
    Request ID: 2865643
    All Parameters: USER_ID=1318:REPORT_TYPE=Report Only:P_SET_OF_BOOKS_ID=1:TRNS_STATUS=Posted:P_APPROVED=Not Approved:PERIOD=Sep-05
    Data Template Code: ILDVAPDN
    Data Template Application Short Name: CLE
    Debug Flag: Y
    {TRNS_STATUS=Posted, REPORT_TYPE=Report Only, PERIOD=Sep-05, USER_ID=1318, P_SET_OF_BOOKS_ID=1, P_APPROVED=Not Approved}
    Calling XDO Data Engine...
    java.lang.NullPointerException
         at oracle.apps.xdo.dataengine.DataTemplateParser.getObjectVlaue(DataTemplateParser.java:1424)
         at oracle.apps.xdo.dataengine.DataTemplateParser.replaceSubstituteVariables(DataTemplateParser.java:1226)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:398)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:281)
         at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:251)
         at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:192)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:222)
         at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:334)
         at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:236)
         at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:272)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:148)
    Start of log messages from FND_FILE
    Start of After parameter Report Trigger Execution..
    Gl Set of Books.....P
    Organization NameVision Operations
    Entering TRNS STATUS POSTED****** 648Posted
    end of the trns status..687 Posted
    currency_code 20USD
    P_PRECISION 272
    precision 332
    GL NAME 40Vision Operations (USA)
    Executing the procedure get format ..
    ExecutED the procedure get format and the Result..
    End of Before Report Execution..
    End of log messages from FND_FILE
    Executing request completion options...
    ------------- 1) PUBLISH -------------
    Beginning post-processing of request 2865643 on node AP615CMR at 28-SEP-2006 07:58:26.
    Post-processing of request 2865643 failed at 28-SEP-2006 07:58:38 with the error message:
    One or more post-processing actions failed. Consult the OPP service log for details.
    Finished executing request completion options.
    Concurrent request completed
    Current system time is 28-SEP-2006 07:58:38
    Thanks & Regards
    Suresh Singh

    Generally the DBAs are aware of the OPP service log. They can tell you the cause of the problem.
    Anyway, how did you resolve the issue?
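
    Since the post-processing failure above is reported by the Output Post Processor, a commonly used query to locate the OPP log file for a given request (standard E-Business Suite FND tables; request 2865643 is the one from the log above):

    SELECT fcpp.concurrent_request_id, fcp.node_name, fcp.logfile_name
    FROM   fnd_conc_pp_actions fcpp,
           fnd_concurrent_processes fcp
    WHERE  fcpp.processor_id = fcp.concurrent_process_id
    AND    fcpp.action_type  = 6
    AND    fcpp.concurrent_request_id = 2865643;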
