Publications - Successful job run - View Log File

Hi everyone,
We have a growing number of Publications (based on Webi documents) that are scheduled, and I want to report on errors and warnings.
When a job fails, I can use our Auditing universe to see the failure reason. When a job succeeds but there are warnings within the log file, I don't know how to access this information other than manually checking each log file, which is not feasible.
For example, this snapshot of the History of a publication makes it look like all three runs were successful:
The jobs did actually run, HOWEVER the one at 10:00 shows a Warning when you dig into the log file:
- There are no schedulable documents in the publication. (FBE60503)
2014-04-03 10:00:17,552 WARN  [PublishingService:HandlerPool-61] BusinessObjects_PublicationAdminLog_Instance_1633706 - [Publication ID # 1633706] - Unable to find any distributable documents in scope batch
Notifications are of no use to me here; they flag a successful job even though a warning exists and no documents were sent.
Do you know if you can access these Publication Admin Log xxxxxxxx files and generate reports on them?
Many thanks
Gill

Hi RC,
The issue is not the reports failing; the issue is reporting on such failures.
Do you know if you can access these Publication Admin Log xxxxxxxx files and generate reports on them?
With thanks
Gill

Similar Messages

  • GlassFish View Log File opens in editor instead of console

    How to reproduce:
    Open the Servers view, and bring up the context menu for a GlassFish server
    Select GlassFish > View Log File
    Actual: Log file is opened in an editor. Every time the file is changed, a popup is shown to ask whether the contents should be reloaded (unusable)
    Expected: Log file is opened in Console view and provides the usual reloading and scrolling behavior
    Eclipse: 4.3.0 (Kepler) (Platform 4.3.0.I20130605-2000)
    GlassFish Tools: 6.0.3.201306271729
    The question is: Can this bug please be fixed?

    I've noticed that search is sometimes quirky in the console, yes. But that always felt like a bug in the platform. Other than searching, the usability of the console is superior: automatic scrolling, Clear Console, Scroll Lock, Show Console when output changes, and being in a view instead of an editor.
    The editor is so bad that some people here refuse to upgrade to the new plugin version, instead installing the Indigo version from old workspaces.
    How about just offering both in the context menu? They could be named something like View Log in Console and View Log File in Editor.

  • Is it possible to create materialized view log file for force refresh

    Is it possible to create a materialized view log for force refresh with a join condition?
    Say, for example:
    CREATE MATERIALIZED VIEW VU1
    REFRESH FORCE
    ON DEMAND
    AS
    SELECT e.employee_id, d.department_id
    FROM emp e, departments d
    WHERE e.department_id = d.department_id;
    How can we create a log file using 2 tables?
    Also, I am copying the M.View result to a new table. Is it possible to have the same values in the new table once the M.View gets refreshed?

    You cannot create a record as a materialized view within the Application Designer.
    But there is a workaround:
    Create the record as a table within the Application Designer. Don't build it.
    Inside your database, create the materialized view with the same name and columns as the record created previously.
    After that, you'll be able to work with that record like any other within the PeopleSoft tools.
    But keep in mind: never build that object, as that will drop your materialized view and create a table instead.
    The same problem exists for partitioned tables, function-based indexes and some other vendor-dependent database objects, and the same workaround is used.
    Nicolas.

  • View log file via third party tool

    Hi All,
    Using Oracle EBS 11.5.10.2
    Database 10g
    How can I show the oracle log file through third party tool like Java.
    PS
    Edited by: PS on May 28, 2012 9:21 AM

    Hi,
    Once a concurrent request is completed, the log and output files are created, and their details can be fetched using the query below:
    SELECT OUTFILE_NAME, LOGFILE_NAME
    FROM fnd_concurrent_requests
    WHERE request_id = <Your Request ID>
    This will tell you the file names along with their complete paths.
    You can copy these files to another location; you then need a third-party tool to read them.
    I hope this helps with viewing log files from other tools.
    Regards,
    Saurabh

  • Creation of materialized view with view log file for fast refresh in 10.1db

    Hi, I have a select statement that includes data from almost 20 tables and takes a long time to complete. I am planning to create a materialized view on this. Would you please suggest the best way of doing it?
    We would like to have a materialized view and a materialized view log to refresh changes from the underlying tables to the MV. Please provide help on this. Thanks in advance.

    It is possible to create a materialised view with up to 20 tables, but you have to understand the restrictions on complex materialised views with regard to fast refresh.
    To help your understanding, refer to Materialized View Concepts and Architecture and the Oracle Database FAQs.

  • How to save all event viewer log files in Windows 7 Professional

    Hello,
    I would like to save all Event Viewer logs from my Windows 7 Professional computer and be able to view them from another computer.  Currently I can only save one log at a time.  Please let me know how I can save all Event Viewer logs
    (Windows Logs, Applications and Service Logs, etc.).
    Thanks,
    Jason

    Hi Jason,
    There is no built-in way to save all the log categories at once.
    I'd recommend you ask in the Official Scripting Guys forum for further help:
    http://social.technet.microsoft.com/Forums/en-US/home?forum=ITCG
    Besides that, this thread could be referred:
    http://social.technet.microsoft.com/Forums/en-US/d66c1bd7-0e61-4839-a5f6-cbe29661dccb/how-to-use-script-saving-log-from-event-viewer-into-csv-file?forum=ITCG
    Karen Hu
    TechNet Community Support
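
For reference, the scripted approach those links describe boils down to enumerating the log channels with `wevtutil el` and exporting each one with `wevtutil epl <channel> <file.evtx>`. A minimal Python sketch that builds the export commands - the function name and output layout are my own illustration, and actually running the commands requires an elevated prompt on Windows:

```python
import os

def export_commands(log_names, out_dir):
    """Build one `wevtutil epl` command per event log channel, exporting
    each channel to an .evtx file in out_dir.  Slashes in channel names
    such as 'Microsoft-Windows-TaskScheduler/Operational' are mapped to
    '-' so they form a valid file name."""
    commands = []
    for name in log_names:
        safe = name.replace("/", "-").replace("\\", "-")
        commands.append(["wevtutil", "epl", name,
                         os.path.join(out_dir, safe + ".evtx")])
    return commands
```

Each command list can then be passed to subprocess.run, with the channel list itself obtained from `wevtutil el`; the resulting .evtx files open in Event Viewer on any other machine.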

  • Crash when trying to view log files

    So I noticed my system time had stopped syncing and wanted to find out why. I went into KSystemLog and tried to view errors.log. Then my system crashed. I then killed all processes with the magic SysRq key and ended up in a tty. I tried to view some logs there too using vi, but nothing appeared and I had to kill all processes again. Then I checked my system monitor when I was back in KDE, tried $ vi /var/log/errors.log, and noticed that vi eats up all the CPU and RAM when doing this. I also tried with nano and it did the same. Luckily I was able to kill them before I ran out of memory.
    The irony of this: I can't find anything on Google under "system log crash", and I can't get into my logs to find out what is going on. The only thing I've changed lately is backgrounding "crond fam & kdm".
    Any ideas, fellas?
    Oh, I'm using arch-64 with KDEmod 4.2.
    Last edited by Mountainjew (2009-03-28 21:22:37)

    Try a virtual console terminal.
    Let's see what is there at all: ls -l /var/log
    You will probably need to become root: su -
    Then use less to review the error log in question.

  • Export of report/query to a file for background job runs

    I have a requirement to automate the export of a report or query to a file, so that when a background job runs, a new file is generated on the drive.
    This cannot be a customized solution for one report; it must be able to adapt to other reports or queries as well.
    Any help will be highly appreciated.
    Thank you,
    Thamina

    Hi,
    SAP has created a standard program, RSTXPDFT4, to convert your SAPscript spools into PDF format.
    Specify the spool number and you will be able to download the SAPscript spool to your local hard disk.
    It looks exactly like what you see during a spool display.
    Please note that it is not restricted to SAPscript spools only. Any report in the spool can be converted using the program RSTXPDFT4.
    Regards,
    Pavan

  • Logical Standby 'CURRENT' log files

    I have an issue with a Logical Standby implementation where although everything in the Grid Control 'Data Guard' page is Normal, when I view Log File Details I have 62 files listed with a status of 'Committed Transactions Applied'.
    The oldest (2489) of these files is over 2 days old and the newest (2549) is 1 day old. The most recent applied log is 2564 (current log is 2565).
    As for the actual APPLIED_SCN in the standby, it's greater than the highest NEXT_CHANGE# of the newest logfile (2549). The READ_SCN is less than the NEXT_CHANGE# of the oldest log (2489) appearing in the list of files - which is why all these log files appear on this list.
    I am confused why the READ_SCN is not advancing. The documentation states that once the NEXT_CHANGE# of a logfile falls below READ_SCN the information in those logs has been applied or 'persistently stored in the database'.
    Is it possible that there is a transaction that spans all these log files? More recent logfiles have dropped off the list and have been applied or 'persistently stored'.
    Basically I'm unsure how to proceed and clean up this list of files and ensure that everything has been applied.
    Regards
    Graeme King

    Thank you, Larry. I have actually already reviewed this document, but we are not getting the error it lists for long-running transactions.
    I wonder if it is related to the RMAN restore we did, where I restored the whole standby database while the standby redo log files obviously were not restored and therefore were 'newer' than the restored database?
    After I restored, I did see lots of trace files with this message:
    ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
    ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
    ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
    ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
    I just stopped and restarted SQL Apply and, sure enough, it cycled through all the log files in the list (from the READ_SCN onwards), but they are still in the list. Also, there is very little activity on this non-production database.
    regards
    Graeme

  • BODI -LOG file

    ************Starting at 2/18/2008 12:43:03 PM************
    LightsOff mode with cmd line: "intRopak.exe amp/amp@kaapps2 ICSFile_02182008.dat"
    Starting in LightsOff mode with instructions to load and process "ICSFile_02182008.dat".
    Loading data from D:\Feeds2\incoming\ICSFile_02182008.dat
    Successfully loaded all the data records (396)
    0 new Divisions added.
    0 new Departments added.
    0 new Locations added.
    1 new Employees added.
    394 OLD Employees Updated.
    1 new records added to EeCalendar.
    394 OLD record Updated in EeCalendar.
    1 records skipped.
    0 Employees not on this feed and also not marked either as "Terminated" or "Disappeared" have been marked as "Disappeared"
    Transactions committed to database kaapps2
    1 New Employees added  1 New records added to EeCalendar table  394 Existing records modified
    No exception for Termination date was found
    Mail sent successfully with Discripency Report for "D:\Feeds2\Ropak\RopakDiscrepencyRpt__ICSFile_02182008.dat080218-1251.csv"
    2/18/2008 12:51:27 PM: Moved Intermediate file D:\Feeds2\incoming\ICSFile_02182008.dat to archive as D:\Feeds2\Archive\ICSFile_02182008_20080218_1251.dat
    Total Data Records Loaded - 396
                      Skipped - 1
                       NewEmp - 1
                   OldUpdated - 394
    Exiting with exitcode as 0
    Sending mail using default ini file
    Please, can anyone help me: after executing the job, how do I prepare a log file based on this model?

    Basically, you need a post script in your job to create the format you want and email it. An example of one that we do can be found at:
    http://www.forumtopics.com/busobj/viewtopic.php?t=105713
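
As a sketch of what such a post-script might extract, here is a minimal Python parser for a log following the model above - the counter labels are taken from that sample and would need adjusting for your real job's output:

```python
import re

# Trailing counters exactly as they appear in the sample log above.
PATTERNS = {
    "loaded":  re.compile(r"Total Data Records Loaded\s*-\s*(\d+)"),
    "skipped": re.compile(r"Skipped\s*-\s*(\d+)"),
    "new_emp": re.compile(r"NewEmp\s*-\s*(\d+)"),
    "updated": re.compile(r"OldUpdated\s*-\s*(\d+)"),
}

def summarize_log(text):
    """Pull the trailing counters out of a job log written in the model above."""
    summary = {}
    for key, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            summary[key] = int(match.group(1))
    return summary
```

The resulting dictionary can then be formatted into whatever report layout you need and handed to the mail step of the job.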

  • Reading log file

    Hi all,
    I want to view a particular log file. Is there any transaction to view log files? Do I need Basis rights for that?

    $Path = "C:\Times.log"
    Remove-Item $Path
    Add-Content $Path '<time="11:10:58.000+000">'
    Add-Content $Path '<time="12:10:58.000+000">'
    Add-Content $Path '<time="13:10:58.000+000">'
    Add-Content $Path '<time="15:13:38.000+000">'
    Add-Content $Path '<time="16:10:58.000+000">'
    Add-Content $Path '<time="17:08:28.000+000">'
    $File = Get-Content $Path
    $StartTime = $Null
    $EndTime = $Null
    $ElapsedTime = $Null
    ForEach ($Line in $File) {
        If ($Line.Contains("time=")) {
            $Position = $Line.IndexOf("time=")
            $TimeStr = $Line.SubString($Position + 6, 8)
            If ($StartTime -EQ $Null) {
                $StartTime = $TimeStr -As [System.TimeSpan]
            } Else {
                $EndTime = $TimeStr -As [System.TimeSpan]
                $ElapsedTime = $EndTime.Subtract($StartTime)
                "StartTime=$StartTime EndTime=$EndTime ElapsedTime=$ElapsedTime"
                $StartTime = $Null
            }
        }
    }
    Gives this output
    StartTime=11:10:58 EndTime=12:10:58 ElapsedTime=01:00:00
    StartTime=13:10:58 EndTime=15:13:38 ElapsedTime=02:02:40
    StartTime=16:10:58 EndTime=17:08:28 ElapsedTime=00:57:30

  • Why multiple  log files are created while using transaction in berkeley db

    We are using the Berkeley DB Java Edition DB base API. We have already read/written a CDR file of 9 lakh rows, with transactions and without transactions, implementing the secondary database concept. The issues we are seeing are as follows:
    With transactions, the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
    Without transactions, the size of the database environment is 588 MB, and only one log file is created, of 10 MB. We want to understand the concrete reason for this difference:
    how are log files created, what does using or not using transactions mean in a DB environment, and what are these files - db.001, db.002, _db.003, _db.004, __db.005 - and the log files like log.0000000001? Please reply soon.

    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    As for the single 10 MB log file in the non-transactional run: there should be no logs created when transactions are not used. That log file has most likely remained there from a previous transactional run.
    Have you reviewed the basic documentation references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide, in particular the sections The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, and Chapter 17. The Logging Subsystem.
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If you had, you would have the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transactions, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
    --Andrei

  • Schedule report (log file)

    hi,
    Could anyone give me an idea: when we schedule BO jobs (reports), is any log file created?
    Regards

    Greetings,
    I am using BOE XI 3.1 SP2 FP2.4 and am trying to diagnose an intermittent issue with email notifications.
    I have attempted to find Adaptive Job Server logs, but cannot.  I see many logs, but none for that server. 
    Here is my Adaptive Job server parameter list...
    -loggingPath "C:/Business Objects/BusinessObjects Enterprise 12.0/Logging/" -objectType CrystalEnterprise.JavaScheduling -maxDesktops 0 -classpath "C:/Business Objects/common/4.0/java/lib//external/certj.jar;C:/Business Objects/common/4.0/java/lib//external/jsafe.jar;C:/Business Objects/common/4.0/java/lib//external/asn1.jar"
    Likewise, I have Auditing enabled on the Adaptive Job Server, and all of the auditing events flagged.
    Am I missing a parameter that would create the missing logs?  Or is the log just not called 'Adaptive Job Server'?
    Please point me in the correct direction... your guidance will be greatly appreciated..
    Regards,
    Dave

  • Change in Oracle Parameters and Log file size

    Hello All,
    We have a scheduled DB Check job, and the log file showed a few errors and warnings in the Oracle parameters that need to be corrected. We have also gone through SAP Note #830576 – Oracle Parameter Configuration to change these parameters accordingly. However, we need a few clarifications on the same:
    1. Can we change these parameters directly in the init<SID>.ora file, or only in the SP file? If yes, can we edit and change it directly, or do we need to change it using BR tools?
    2. We have tried to change a few parameters using the DB26 tcode, but it prompts for maintaining the connection variables in the DBCO tcode. We try to make the change only in the default database, but it still prompts for connection variables.
    Also, we get a checkpoint error. As per Note 309526, can we create a new log file with a size of 100 MB and drop the existing one? Or are there any other considerations we need to follow for the size of the log file and for creating the new log file? Kindly advise on this. Our environment is as follows:
    OS: Windows 2003 Server
    DB: Oracle 10g
    regards,
    Madhu

    Hi,
    Madhu, we can change Oracle parameters at both levels, that is, at the init<SID> level as well as at the SPFILE level.
    1. If you make the changes at the init<SID> level, then you have to generate the SPFILE again, and the database has to be restarted for the parameters to take effect.
        If you make the changes in the SPFILE, then the parameters will take effect depending on whether the parameter type is dynamic or static. You also need to regenerate the PFILE, i.e. init<SID>.ora.
    2. If possible, do not change the Oracle parameters using the tcode. I would say it is better, and much easier, to do it via the database.
    3. Well, it's always good to have a larger redo log size. The one thing to keep in mind is that once you change the size of the redo log, the size of the archive logs also changes, although the number of files will decrease.
    Apart from that, there won't be any issues.
    Regards,
    Suhas

  • FTP log file generation failed in shell script

    Hi ALL,
    I am doing an FTP file transfer in a shell script and am able to FTP the files into the corresponding directory. But when I try to check the FTP status through the log files, it gives a problem. Please check the code below.
    for file in $FILENAME1
    do
    echo "FTP File......$file"
    echo 'FTP the file to AR1 down stream system'
    ret_val=`ftp -n> $file.log <<E
    #ret_val=`ftp -n << !
    open $ar1_server
    user $ar1_uname $ar1_pwd
    hash
    verbose
    cd /var/tmp
    put $file
    bye
    E`
    if [ -f $DATA_OUT/$file.log ]
    then
    grep -i "Transfer complete." $DATA_OUT/$file.log
    if [ $? -eq 0 ]; then
    #mv ${file.log} ${DATA_OUT}/../archive/$file.log.log_`date +"%m%d%y%H%M%S"`
    echo 'Log file archived to archive directory'
    #mv $file ${DATA_OUT}/../archive/$FILENAME1.log_`date +"%m%d%y%H%M%S"`
    echo 'Data file archived to archived directory'
    else
    echo 'FTP process is not successful'
    fi
    else
    echo 'log file generation failed'
    fi
    It's giving a syntax error ("end of file") without giving the exact line number. Please help me with this.
    Regards
    Deb

    Thanks for your reply.
    Actually, I made a mistake in the code. I wrote the following two lines:
    ret_val=`ftp -n> $file.log <<E
    #ret_val=`ftp -n << !
    After the opening backtick, the shell was still parsing the command substitution, so the backtick in the '#' line terminated it early and caused the error. I removed the second line and now it's working fine.
