FinancialUtilService downloadESSJobExecutionDetails method is not uploading ESS job output/log files to UCM.

Hi,
We are invoking the financialUtilService web service through an HTTP proxy client to upload data files, submit ESS jobs, and retrieve the ESS job log files. We can upload the file and submit the ESS job successfully, but downloadESSJobExecutionDetails is not uploading the log/output files to UCM.
          // Download the execution details ("log") for the submitted ESS request.
          List<DocumentDetails> docDetails = financialUtilService.downloadESSJobExecutionDetails(requestId.toString(), "log");
          // List<DocumentDetails> docDetails1 = financialUtilService.downloadExportOutput(requestId.toString());
          System.out.println("Ess Job output:" + docDetails);
          for (DocumentDetails documentDetails : docDetails) {
            System.out.println("Account: " + documentDetails.getDocumentAccount().getValue());
            System.out.println("File Name: " + documentDetails.getFileName().getValue());
            System.out.println("Document Title: " + documentDetails.getDocumentTitle().getValue());
            System.out.println("DocumentName " + documentDetails.getDocumentName().getValue());
            System.out.println("ContentType " + documentDetails.getContentType().getValue());
          }
The following output is returned:
Ess Job status:SUCCEEDED
Ess Job output:[com.oracle.xmlns.apps.financials.commonmodules.shared.financialutilservice.DocumentDetails@5354a]
Account: fin$/payables$/import$
File Name: null
Document Title: Uma Test Import
DocumentName:  84037.zip
ContentType zip

Hey,
We have the same problem. When calling `downloadESSJobExecutionDetails`, a zip file is returned, but it contains only my original upload file, not any logs. However, if I call `downloadESSJobExecutionDetails` on a dependent child subprocess, the log is included.
I also reported this to Oracle Support (SR 3-10267411981); they referred me to a known bug, to be fixed in v11: https://bug.oraclecorp.com/pls/bug/webbug_edit.edit_info_top?rptno=20356187 (not publicly accessible, though).
Is there any workaround available?
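
One workaround we are experimenting with until the fix arrives: pull the log from each dependent child request instead of the parent, since (as noted above) the child calls do return the log file. Below is a minimal sketch against the generated JAX-WS proxy. Two assumptions to flag: financialUtilService has no call I know of for listing child ESS request IDs, so obtaining them is left as a placeholder you must fill in yourself, and the getContent()/DataHandler accessor on DocumentDetails is taken from our generated proxy classes and may be named differently in yours.

    import java.io.File;
    import java.io.FileOutputStream;
    import java.util.List;
    import javax.activation.DataHandler;

    public class EssLogWorkaround {

        private final FinancialUtilService financialUtilService; // generated JAX-WS proxy

        public EssLogWorkaround(FinancialUtilService financialUtilService) {
            this.financialUtilService = financialUtilService;
        }

        // Downloads the "log" attachment of every child ESS request into targetDir.
        // The caller must supply the child request IDs (placeholder: this service does
        // not expose a call for listing them, so collect them however your setup allows).
        public void downloadChildLogs(List<Long> childRequestIds, File targetDir) throws Exception {
            for (Long childId : childRequestIds) {
                List<DocumentDetails> docs =
                        financialUtilService.downloadESSJobExecutionDetails(childId.toString(), "log");
                for (DocumentDetails doc : docs) {
                    // Assumption: getContent() exposes the zipped payload as a DataHandler.
                    DataHandler content = doc.getContent();
                    if (content == null) {
                        continue; // nothing attached for this child request
                    }
                    File out = new File(targetDir, childId + "-log.zip");
                    try (FileOutputStream fos = new FileOutputStream(out)) {
                        content.writeTo(fos);
                    }
                }
            }
        }
    }

It is not pretty, but it at least gets the import logs out until the bug fix ships.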

Similar Messages

  • Not able to add new log file to the 11g database.

    Hi DBAs,
    I am not able to add the log file; I get an error when adding it to the database.
    SQL> alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse;
    alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse
    ERROR at line 1:
    ORA-01505: error in adding log files
    ORA-01577: cannot add log file '/oracle/DEV/db/apps_st/data/log03a.dbf' - file
    already part of database
    SQL> select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER STATUS
    1 /oracle/DEV/db/apps_st/data/log01a.dbf ACTIVE
    1 /oracle/DEV/db/apps_st/data/log01b.dbf ACTIVE
    2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
    2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    Kindly help me to add the new log file to my database.
    Thanks,
    SG

    Hi Sawwan,
    V$LOGMEMBER was mentioned in the document, so I queried the log members as below:
    1)select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER STATUS
    1 /oracle/DEV/db/apps_st/data/log01a.dbf INACTIVE
    1 /oracle/DEV/db/apps_st/data/log01b.dbf INACTIVE
    2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
    2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    2)SQL> select group#,member,status from v$logfile;
    GROUP# MEMBER STATUS
    2 /oracle/DEV/db/apps_st/data/log02a.dbf
    2 /oracle/DEV/db/apps_st/data/log02b.dbf
    1 /oracle/DEV/db/apps_st/data/log01a.dbf
    1 /oracle/DEV/db/apps_st/data/log01b.dbf
    But I am a little bit confused: as per the above query there is no group or log file called group 3 / log03a.dbf, so how can I drop that group and file?
    I also cross-verified in the data top whether the files exist, and they do not, but I still get the same error that the file I am trying to create already exists.
    Can I issue the below query to drop that group, even though I don't think it exists?
    SQL>alter database drop logfile group 3;
    Thanks in advance.
    Regards,
    SG

  • EP log type INFO not written to the portal.log file

    All the log.info (where log is of type PortalRuntime Logger) statements in my code are not written to the portal.log file.
    I have configured the portal_logger to log ALL, but it seems to log only the FATAL and WARNING messages and not the INFO ones. I am on EP6 SP2.
    Thanks
    Sid

    Let me rephrase my question.
    In the portal log configuration (Sys Adm - Monitoring - Logging Console - portal_logger), if you select ALL, should it not log messages of all types (ERROR, WARNING, INFO)? When I select ALL for the portal_logger, the portal.log doesn't display the log messages of type INFO (only the ones of type ERROR or WARNING).

  • SetActionListener method does not print anything in the html file...

    Hi
    I want to add the action listener from my Java code, but the setActionListener method does not have any effect on the HTML. I cannot figure out why. Can someone tell me what I'm doing wrong?
    tnx
    Andras
    import javax.faces.component.html.HtmlCommandLink;
    import javax.faces.component.html.HtmlPanelGrid;
    import javax.faces.context.FacesContext;
    import javax.faces.el.MethodBinding;
    import javax.faces.event.ActionEvent;

    public class Links {
        private HtmlPanelGrid topLinks;

        public HtmlPanelGrid getTopLinks() {
            TopLinks links = new TopLinks(); // unused in this snippet
            Class args[] = {ActionEvent.class};
            if (topLinks == null) {
                topLinks = new HtmlPanelGrid();
            } else {
                topLinks.getChildren().clear();
            }
            // Bind the command link to the listener method #{event.actionevent}.
            MethodBinding mb = FacesContext.getCurrentInstance()
                    .getApplication()
                    .createMethodBinding("#{event.actionevent}", args);
            HtmlCommandLink command = new HtmlCommandLink();
            command.setValue("Link 1");
            command.setActionListener(mb);
            topLinks.getChildren().add(command);
            return topLinks;
        }

        public void setTopLinks(HtmlPanelGrid topLinks) {
            this.topLinks = topLinks;
        }
    }
    generates this html:
    <body>
    <form id="_id0" method="post" action="/Test/event.faces" enctype="application/x-www-form-urlencoded">
    <table>
    <tbody>
    <tr>
    <td><a href="#" onclick="document.forms['_id0']['_id0:_idcl'].value='_id0:_id2'; document.forms['_id0'].submit(); return false;">Link 1</a></td>
    </tr>
    </tbody>
    </table>
    <input type="hidden" name="_id0" value="_id0" />
    <input type="hidden" name="_id0:_idcl" /></form>
    </body>
    As you can see there is no sign of the actionListener... :(

    You may be misunderstanding something.
    Don't worry even if you can't see the actionListener in the HTML.
    There is one on the server side.
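    To make that concrete: the rendered link only submits the form; on postback, JSF resolves the method binding and invokes your listener on the server. Below is a minimal sketch of the backing bean that the expression #{event.actionevent} in the code above would resolve to. The bean name "event" comes from that expression; registering it (for example as a <managed-bean> in faces-config.xml) is assumed and not shown in your post.

    import javax.faces.event.ActionEvent;

    // Backing bean assumed to be registered under the managed-bean name "event".
    public class EventBean {

        // Invoked on postback when the command link is clicked. Nothing about this
        // listener ever appears in the rendered HTML; only the form submit does.
        public void actionevent(ActionEvent e) {
            System.out.println("Link clicked: " + e.getComponent().getId());
        }
    }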

  • Syslogd -u not writing network messages to log file

    I'm trying to get remote logging of the AirPort base station's debug-level log messages to work.
    The AirPort base station is doing its part, according to tcpdump:
    # tcpdump -i en1 port syslog
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on en1, link-type EN10MB (Ethernet), capture size 96 bytes
    11:56:39.332886 IP 10.0.1.1.syslog > 10.0.1.198.syslog: SYSLOG local0.notice, length: 71
    12:01:10.037655 IP 10.0.1.1.syslog > 10.0.1.198.syslog: SYSLOG local0.notice, length: 71
    12:07:57.786033 IP 10.0.1.1.syslog > 10.0.1.198.syslog: SYSLOG local0.debug, length: 100
    etc...
    config looks appropriate in /etc/syslog.conf:
    local0.* /var/log/appfirewall.log
    and the log file exists.
    the syslog launchctl file says:
    <string>/usr/sbin/syslogd</string>
    <string>-u</string>
    <string>-c 7</string>
    and ps confirms that the options are working:
    /usr/sbin/syslogd -u -c 7
    But the log messages I see arriving from the base station aren't being written to the log file, and I don't see a network socket for the UDP receiver in the netstat -a output.
    I suspect that the syslogd configuration may have been moved out of /etc and the AirPort help topics haven't been updated to reflect the new methods.
    Any ideas or pointers?

    I found the answer: there's a section at the end of /System/Library/LaunchDaemons/com.apple.syslogd.plist that needs uncommenting to enable the UDP listener.

  • OS Filesystem Copy of EM Job Output Log?

    Is there a predictable location & filename to view OEM (Database Control: dbconsole) output logs for jobs that run? I have a nightly backup that runs and I want to use a simple korn shell script to do some evaluation on the output log of the RMAN backup.
    Thanks,
    Keith

    I too wanted to look at the output of the jobs. OEM was reporting the job status as successful, but that just meant that it was able to attempt to run my script.
    I put this together to give me more info on failures. Test and modify it to meet your needs, particularly the last line.
    select E.START_TIME,D.JOB_NAME,D.JOB_DESCRIPTION,D.JOB_TYPE,C.STEP_NAME,B.STEP_ID,B.PARENT_STEP_ID,B.STEP_NAME,A.OUTPUT
    from sysman.MGMT_JOB_OUTPUT A, sysman.MGMT_JOB_HISTORY B, sysman.MGMT_JOB_EXECUTION C, sysman.MGMT_JOB D, sysman.MGMT_JOB_SCHEDULE E
    where A.OUTPUT_ID=B.OUTPUT_ID
    and B.JOB_ID=C.JOB_ID
    and C.JOB_ID=D.JOB_ID
    and E.SCHEDULE_ID=D.SCHEDULE_ID
    and rownum<10 and output like '%ORA-00942%'

  • Bug: Identity plate not added to printed output (or file)

    I recently conducted an offsite photo shoot that required proofs to be printed with contact and ordering information, etc. I set up a series of custom 2-up print templates, with the email address as a watermark over each photo, and a 4x6 PNG background image with the contact/ordering information as an identity plate (set to "Render behind image"). This works beautifully in most circumstances; however, about 20% of the time the proofs are printed without the watermark. This is quite frustrating because it wastes ink, paper, and, more importantly, time.
    I tried every combination of settings, formats, preset options, etc., and I cannot find any one particular reason for this error; it seems to happen completely at random. In fact, if a proof prints with a missing background and I press the Print button again, it will print properly the second time (most of the time!).
    I have even tried Print to File, but this method suffers from the same problem, although it seems to happen less frequently. At least with this method, I can see if the background is missing before printing.
    Note: This bug exists in both the latest release version, as well as the initial LR4 beta.
    System information:
    Dell XPS-9000, Intel Core i7 CPU 920 @ 2.67GHz, 9GB RAM, nVidia GeForce GTS 240
    Windows 7 Home Premium SP1 (x64)
    Related discussions:
    http://digital-photography-school.com/forum/post-processing-printing/126446-identity-plate-lightroom.html
    http://www.lightroomforums.net/archive/index.php/t-6979.html?s=0640f42956d2a92efdce01117ad891d7
    http://forums.adobe.com/thread/744616
    http://www.lightroomforums.net/showthread.php?10350-Identity-plate-printing-issue-not-ALWAYS-printing
    PS: Is there a more appropriate place to log bugs?

    I tried drag and drop before.
    Trying again, I made the image smaller than the max height and it works now. (Note: it's been a while since I last tried this, so the machine has had several OS updates and reboots; there is a possibility that something else, like a bad driver, got fixed and resolved this issue.)

  • Log job output to file - information missing? dbcc checkdb

    Hello
    Not sure where to put this question.. feel free to move it if necessary.
    I have a job which runs DBCC CHECKDB WITH PHYSICAL_ONLY on every database on the instance which is read_write. The problem is that I want to get the output of the command to make sure that every database actually runs it. When viewing the normal history by right-clicking the job and selecting "View History", I get cut-off information due to the limited space allowed (1000 chars by default, I think).
    So I tried to log it to a table and view it with msdb.dbo.sp_help_jobsteplog, but the information there does not cover all databases either. Then I tried to log it to a file, but I get the same information there, and not all databases are logged.
    So I start to wonder whether the DBCC CHECKDB job is not being executed on the other databases. If I look in the current SQL Server logs, I only see DBCC CHECKDB WITH PHYSICAL_ONLY executed on the same databases that are listed in my output file.
    What can I do? The instance contains over 400 databases, but only approximately 70 are logged as running DBCC CHECKDB.
    This is my command:
    SET NOCOUNT ON
    EXEC sp_MSforeachdb @command1='
    IF NOT(SELECT DATABASEPROPERTYEX(''?'',''Updateability''))=''READ_ONLY''
    BEGIN
    DBCC CHECKDB (?) WITH PHYSICAL_ONLY
    END'

    There is a known issue with sp_MSforeachdb where, under heavy load, the procedure can actually skip databases without raising any errors. That may be the case in your environment. Aaron Bertrand wrote about the issue and solutions for the problem in the article:
    Making a more reliable and flexible sp_MSforeachdb.
    Ana Mihalj

  • Background job output to file on desktop

    Hi All,
    My requirement is: a query (SQ01) created with an infoset and user group.
    In the query we have a selection option for the file store; when the user gives a path (which can be a shared drive, the application server, or the desktop)
    and runs the query in the background, the file should be saved, which is not happening.
    Currently, in the foreground we can save the file at the required location; the problem is that when I run it in the background I am not able to save the file.
    Logic: get the spool request, convert it to an internal table using an FM, and after that OPEN DATASET .......
    If anyone has come across this situation, please let me know the solution.
    Thank you in advance.
    Regards,
    Madhavi

    Hi,
    Please check the below options:
    To open a file for reading, use the FOR INPUT addition to the OPEN DATASET statement.
    To open a file for writing, use the FOR OUTPUT addition to the OPEN DATASET statement.(If the file does not already exist, it is created automatically.)
    Try using the below logic:
    lv_filename = <input file path>
    open dataset lv_filename for input in text mode encoding default.
        if sy-subrc = 0.
          do.
            read dataset lv_filename into gs_input-wa_string.
            if sy-subrc eq 0.
              append gs_input to gt_input.
            else.
              exit.
            endif.
          enddo.
          close dataset lv_filename.
        endif.
    Please reply if you have further questions.
    Thanks and Regards,
    P.Bharadwaj

  • OWB does not store errors in the log files?

    I've specified the log file on the mapping. When I execute the mapping, I see that it does create the log file I specified. However, there are no error logs in the file itself. Is there a step I'm missing? What do I need to do in order for OWB to write the error codes to the log file?

    The description and instructions for all the different log files and audit records are in this document
    http://otn.oracle.com/products/warehouse/pdf/Cases/case10.pdf
    Nikolai Rochnik

  • RAISEERROR not shown in Agent Job Log File Viewer

    I use RAISERROR for a critical error in a CATCH block. When the job is run, it does fail, but it gives the following message. How do I get my @Note to show in the Log File Viewer? Also, where is the log referenced by 'WITH LOG'? I looked at the SQL Server Agent Log and did not see anything.
    Message
    Executed as user: NT AUTHORITY\SYSTEM. TCP Provider: The specified network name is no longer available. [SQLSTATE 08S01] (Error 64)  Communication link failure [SQLSTATE 08S01] (Error 64).  The step failed.
    DECLARE @Note VARCHAR(500) = 'RAISEERROR due to Critical error'
    RAISERROR (@Note, 20, 127) WITH LOG

    It says Target Local Server. I scripted out the job and proc and ran them on another SQL Server 2008 R2 and I got the expected results. Maybe the test SQL Server 2008 R2 environment I am using has some quirks (for lack of a more technical term)
    @@VERSION on the SQL Server where I get the [SQLSTATE 08S01] (Error 64):
    Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)   Apr  2 2010 15:53:02   Copyright (c) Microsoft Corporation  Standard Edition on Windows NT 5.2 <X86> (Build 3790: Service Pack 2) (Hypervisor)
    @@VERSION on the SQL Server where I get expected results:
    Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)   Apr  2 2010 15:53:02   Copyright (c) Microsoft Corporation  Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)
    Below are the scripted-out job and proc that I have been using to test the RAISERROR.
    -- scripted out job
    USE [msdb]
    GO
    /****** Object: Job [AATEST] Script Date: 12/15/2013 16:15:09 ******/
    IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'AATEST')
    EXEC msdb.dbo.sp_delete_job @job_id=N'2dd36995-fde6-491c-b4e2-85e8bdea6411', @delete_unused_schedule=1
    GO
    USE [msdb]
    GO
    /****** Object: Job [AATEST] Script Date: 12/15/2013 16:15:09 ******/
    BEGIN TRANSACTION
    DECLARE @ReturnCode INT
    SELECT @ReturnCode = 0
    /****** Object: JobCategory [[Uncategorized (Local)]]] Script Date: 12/15/2013 16:15:09 ******/
    IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
    BEGIN
    EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
    END
    DECLARE @jobId BINARY(16)
    EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'AATEST',
    @enabled=1,
    @notify_level_eventlog=0,
    @notify_level_email=0,
    @notify_level_netsend=0,
    @notify_level_page=0,
    @delete_level=0,
    @description=N'No description available.',
    @category_name=N'[Uncategorized (Local)]',
    @owner_login_name=N'FNXXX\eME', @job_id = @jobId OUTPUT
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
    /****** Object: Step [STEP1] Script Date: 12/15/2013 16:15:09 ******/
    EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'STEP1',
    @step_id=1,
    @cmdexec_success_code=0,
    @on_success_action=1,
    @on_success_step_id=0,
    @on_fail_action=2,
    @on_fail_step_id=0,
    @retry_attempts=0,
    @retry_interval=0,
    @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'EXEC aap1test',
    @database_name=N'Store01',
    @flags=0
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
    EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
    EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
    COMMIT TRANSACTION
    GOTO EndSave
    QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
    EndSave:
    GO
    -- scripted out proc
    USE [Store01]
    GO
    /****** Object: StoredProcedure [dbo].[aap1test] Script Date: 12/15/2013 16:17:19 ******/
    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[aap1test]') AND type in (N'P', N'PC'))
    DROP PROCEDURE [dbo].[aap1test]
    GO
    USE [Store01]
    GO
    /****** Object: StoredProcedure [dbo].[aap1test] Script Date: 12/15/2013 16:17:19 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE [dbo].[aap1test]
    AS
    BEGIN
    RAISERROR ('************ i am here', 20, 127) WITH LOG -- bxg
    END
    GO

  • Why did some photos in my Photos library not upload to my iCloud Photo Library? Isn't it supposed to be automatic?

    Why did some photos in my Photos library not upload to my iCloud Photo Library? Isn't it supposed to be automatic?

    Is there something special about the file formats? iCloud Photo Library supports
    JPEG, RAW, PNG, GIF, TIFF, and MP4,
    and no JPEG may be larger than 16GB.
    The photos will not upload if the original image file is missing. Can you edit or export the photos that did not upload?

  • Need help to import Job Output Collection (Repository.loadOutputCollection)

    Hi, all.
    I have a task to transfer all <BQYJobOutput> objects (Hyperion workspace objects) from one environment (9.2) to another (9.3.1) using the Java SDK.
    Please help me solve my problem and load the <BQYJobOutput> back into the workspace.
    I wrote a Java application, and here is what I did:
    1. I exported the <BQYJobOutput> object using the "saveContentsToZipByteArray" method. Now I have a zip archive on the file system with ~32/.../70 files inside per <BQYJobOutput>.
    2. I did "unzip", found proper BQYJob (job), and called:
    obj = getRepository().loadOutputCollection(
    job, EntityName, CategoryPath, arr_s_files, arr_b_primary
    , EntityBrowsable, EntityDesc, arr_s_keywords, pr_autoDelete, pr_expiration
    , pr_customProps, pr_exception, arr_s_exceptText, pr_rating);
    with
    pr_customProps.put("IS_DASHBOARD", "false");
    pr_customProps.put("sys_thin_client_view", "true");
    arr_s_files is an array with all 32 full file paths inside
    arr_b_primary is an array that has "true" only for the "primary" files ("false" for the others)
    As a result I got:
    - All 32 files in the folder/category.
    - A new object with type "Output collection", not "Interactive Reporting Job Output", and empty content inside.
    Also, if you have rich documentation/help on Repository.loadOutputCollection, please post it with details.
    Thank you in advance for any hint!
    What I have as part of the Java SDK:
    loadOutputCollection
    public com.sqribe.rm.JobOutput loadOutputCollection(com.sqribe.rm.Job job,
    java.lang.String name,
    java.lang.String path,
    java.lang.String[] files,
    boolean[] primary,
    boolean browsable,
    java.lang.String desc,
    java.lang.String[] keywords,
    boolean autoDelete,
    java.util.Date expiration,
    java.util.Properties customProps,
    boolean exception,
    java.lang.String[] exceptText,
    com.sqribe.rm.Rating rating)
    throws ReportMartException
    Load a job output collection in the repository. The newly created collection is associated with the specified job.
    Since:
    Version 8.0
    Thank you, Sergey

    There are always various ways to do things.  Any of the following and others will work.
    SCOTT@orcl12c> alter session set nls_date_format = 'dy dd-mon-yyyy hh24:mi:ss';
    Session altered.
    SCOTT@orcl12c> -- 1050 minutes divided by 1440 minutes in the day:
    SCOTT@orcl12c> select trunc(SYSDATE)+(1050/1440) from dual;
    TRUNC(SYSDATE)+(1050/144
    wed 06-nov-2013 17:30:00
    1 row selected.
    SCOTT@orcl12c> -- 17.5 hours divided by 24 hours in the day:
    SCOTT@orcl12c> select trunc(SYSDATE)+(17.5/24) from dual;
    TRUNC(SYSDATE)+(17.5/24)
    wed 06-nov-2013 17:30:00
    1 row selected.
    SCOTT@orcl12c> -- 17 hours + 1 half hour:
    SCOTT@orcl12c> select trunc(sysdate+1)+17/24+1/48 from dual;
    TRUNC(SYSDATE+1)+17/24+1
    thu 07-nov-2013 17:30:00
    1 row selected.

  • Job output in System 9.3 Workspace

    We're learning the ins and outs of the workspace on System 9.3, and have two issues with job output:
    1. Can we configure the system to NOT display the job log file? We don't want end users to see this!
    2. Can we configure the system to NOT display the job output as the type "Interactive Reporting Document (Web Client)"? We want our users to view the output documents in the workspace's HTML viewer, but don't want them to install or use the browser plug-in client.

    Hello,
    If you could provide a little more detail it might help narrow down the issue.
    Do the users see the document within Workspace?
    Do they get an error when trying to access the document? If so, what is the error?
    Are the users provisioned to be able to access WebAnalysis?
    Do the users have the appropriate JRE installed to launch the WebAnalysis applet?

  • Database Log File getting full by Reindex Job

    Hey guys
    I have an issue with one of my databases during the reindex job. Most of the time, the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the reindex job fails and I also get errors from the DB due to log file space. Any suggestions?

    Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: the ALTER INDEX REBUILD would be minimally logged, and for the time period this job is running you lose point-in-time recovery, so plan accordingly. Plus, you need to take a log backup after changing back to FULL recovery.
    I guess Ola's script would suffice; if not, you would have to increase space on the drive where the log file resides. Index rebuilds are fully logged in FULL recovery.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
