Storing the log file on the application/presentation server when running a BDC session in SM35

Hi All,
I have an issue when running a BDC session in SM35.
The actual issue is this:
I need to store the log file that is generated while running a BDC session in SM35 to a path on the application or presentation server.
Whenever we run a single session, the log file for that session needs to be saved on the application/presentation server.
Does anybody have a solution for this issue?
Thanks in advance.
Thanks & Regards,
Rayeez.

Hi,
Have a look at the standard report RSBDC_ANALYSE; from it you can see how the batch input (B.I.) log is read.
You can write a similar program that loads the log into a file instead of displaying it, as in the sketch below.
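A minimal sketch of that idea (not the standard solution, just one possible approach): it assumes the session's log lines have already been collected into an internal table lt_log, for example by reusing the read logic from RSBDC_ANALYSE, and the program name and file paths are placeholders. The OPEN DATASET branch also works in background; GUI_DOWNLOAD only works in dialog.

REPORT zbdc_log_to_file.

DATA: lt_log  TYPE TABLE OF string,                               " log lines of the session
      lv_line TYPE string,
      lv_appl TYPE string VALUE '/usr/sap/trans/tmp/bdc_log.txt', " placeholder application server path
      lv_pres TYPE string VALUE 'C:\temp\bdc_log.txt'.            " placeholder presentation server path

" ... fill lt_log with the session log lines here (e.g. the way RSBDC_ANALYSE reads them) ...

" 1) Application server: OPEN DATASET also works in background jobs
OPEN DATASET lv_appl FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  LOOP AT lt_log INTO lv_line.
    TRANSFER lv_line TO lv_appl.
  ENDLOOP.
  CLOSE DATASET lv_appl.
ENDIF.

" 2) Presentation server: only possible in dialog processing
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = lv_pres
    filetype = 'ASC'
  TABLES
    data_tab = lt_log
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  MESSAGE 'Download to presentation server failed' TYPE 'I'.
ENDIF.

If the program runs as a background job, only the OPEN DATASET part is usable; the file can then be viewed in AL11 or downloaded from the application server with CG3Y.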
Max

Similar Messages

  • Will RMAN delete archive log files on a Standby server?

    Environment:
    Oracle 11.2.0.3 EE on Solaris 10.5
    I am currently NOT using an RMAN repository (coming soon).
    I have a Primary database sending log files to a Standby.
    My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
    Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
    Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
    Couldn't find the answer in the docs.
    Thanks very much!!
    -gary

    Hello again Gary;
    Sorry for the delay.
    Why is what you suggested better?
No, it's not better, but I prefer to manage the archive. This method works, period.
    Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
    No. The policy is important.
    Having the Primary set to :
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    but set to "NONE" on the Standby.
    Means the worst thing that can happen is RMAN will bark when you try to delete something. ( this is a good thing )
    How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
    Should be a non-issue, the archive does not move, the REDO is transported and applied. There's SQL to monitor both ( Transport and Apply )
    For Data Guard I would consider getting a copy of
    "Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
    Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
    Also Data Guard forum here :
    Data Guard
    Best Regards
    mseberg
    Edited by: mseberg on Apr 10, 2012 4:39 PM

  • Downloading .xml/.txt data to the presentation server while running in background

    Hi experts,
    I have a requirement to download .xml/.txt data to the presentation server while running in background.
    When I run the program in foreground using GUI_UPLOAD/GUI_DOWNLOAD it works fine, but it does not work in background.
    I can't use the email option or the option of downloading the data to a database table and fetching it later, as the data can be huge.
    Can anybody help me out with this?
    Thanks
    Anuj jain

    Hi Anuj,
    It isn't possible to download a file to the presentation server in background using GUI_DOWNLOAD.
    You could try creating an external OS command (transaction SM69) to transfer the data to a shared directory.
    You can execute OS commands using function module SXPG_COMMAND_EXECUTE, as in the sketch below.
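    A minimal sketch of that approach, under stated assumptions: the file is first written to the application server with OPEN DATASET (which works in background), and the SM69 command ZCOPY_TO_SHARE and its parameters below are hypothetical and would have to be defined in SM69 first.

    " Assumes the file has already been written to the application server with OPEN DATASET;
    " this step only copies it to the shared directory via the external command.
    DATA lt_protocol TYPE STANDARD TABLE OF btcxpm.   " output lines of the external command

    CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
      EXPORTING
        commandname           = 'ZCOPY_TO_SHARE'      " hypothetical SM69 command, e.g. wrapping cp
        additional_parameters = '/usr/sap/trans/tmp/out.txt /shared/out.txt'  " placeholder paths
      TABLES
        exec_protocol         = lt_protocol
      EXCEPTIONS
        no_permission         = 1
        command_not_found     = 2
        program_start_error   = 3
        OTHERS                = 4.

    IF sy-subrc <> 0.
      MESSAGE 'External command failed' TYPE 'E'.
    ENDIF.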
    Alessandro

  • 404 Error while running the application after deploying to WebLogic Server

    Hi All,
    Created an ADF application and Deployed this application to Weblogic server.
    I am getting the below error after deploying to the WebLogic server and running the application.
    I am able to run this application well in JDeveloper using the IntegratedWebLogicServer.
    The Application is successfully deployed to the Web Logic server.
    While creating the domain, I have extended the Oracle JRF classes.
    Error
    Error 404--Not Found
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.4.5 404 Not Found
    The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
    If the server does not wish to make this information available to the client, the status code 403 (Forbidden) can be used instead. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address.
    JDeveloper Version : 11.1.1.3.0
    Weblogic Version : 10.3.3.0
    Thanks
    Satish

    On the Deployments page, look for the Testing tab; you will see a test link to check your deployment.
    Sometimes, for reasons I don't know, it doesn't work with the machine name. Replace it with the IP address and give it a try; if it's on your local machine you can also try 127.0.0.1 (or localhost).

  • Stopped server while running full synchronization of SQL MA

    Hi Everyone,
    I am currently facing an issue on the sync server: the run shows "stopped-server" while running a Full Synchronization of the SQL MA. It does not happen regularly; if it runs 10 times in a week, it shows the error about 3 times and runs fine the other 7 times.
    What could be the reason this is occurring?
    Your response will be highly appreciated
    Thanks,
    Aman

    Hi Nosh,
    My first run profile is FI & FS; then I run FS, which is where I am facing this "stopped server" issue. The same thing runs absolutely fine with ILM, but in FIM it shows this error, and the error is not permanent:
    it fails two times, and then the third time it runs perfectly.
    Please suggest
    Thanks,
    Aman

  • How is the access.log file configured in WebLogic Server 10.0?

    1.) I am using BEA Weblogic 10.0 and my access.log is not getting updated.
    2.) I also need information on how this WebLogic server forms chunks (e.g. access00011.log, access00012.log), because I have software called AWStats which merges all these chunks into one single access.log file under its subdirectory.
    3.) I also need information as to how and where the user can specify/format his own fields to be displayed in the access.log.
    FYI, I have 2 servers and I checked under Logging -> HTTP -> Advanced; in both servers the options and configuration are the same, but on one of them access.log is updating fine while on the other it is not.
    Kindly let me know if you have any leads on this issue!
    Thanks,
    Varun

    Hi Ravish,
    Firstly thanks for the reply.
    1.) -----
    What you can do is set the buffer-size-kb parameter value to "0" in config.xml so that it starts logging as soon as the server comes up, rather than waiting for the default buffer size (8 KB) to be reached.
    Something like below:
    <web-server-log>
    <buffer-size-kb>0</buffer-size-kb>
    </web-server-log>
    For more details check the below link:
    Search for: CR302493
    http://download.oracle.com/docs/cd/E11035_01/wls100/issues/known_resolved.html
    --- for this issue I had browsed through the forum before posting, but in my config file I have something like the following instead of <buffer-size-kb>0</buffer-size-kb>:
    <web-server>
    <web-server-log>
    <number-of-files-limited>false</number-of-files-limited>
    <log-file-format>extended</log-file-format>
    </web-server-log>
    </web-server>
    So how do I go about debugging this now?
    2.) -------
    If you do not want rotation of access.log then you can just disable it from the below console path just by putting Rotation type as None
    Server -> <YOUR_SERVER_NAME> -> Logging (tab) -> HTTP (sub-tab) -> Rotation type: None
    ---- for this, in both my servers I have the following settings:
    Rotation type--> By Size
    Rotation File size 5000
    Begin rotation time 00:00
    rotation interval 24
    files to retain 7
    and Log file rotation directory is left blank (to get created in same directory)
    and also Rotate log file on startup is unchecked.
    So, what do you suggest?
    3.) ------
    I also need information as to how and where the user can specify/format his own fields that get displayed in the access.log.
    ---- regarding this, on my main server the access.log is getting updated, and after about 4.8 MB it creates 5 MB chunks. So, for example, if the entire log is 15 MB, then access.log stops updating at 4.98 MB and accesslog.out0001 and accesslog.out0002 are created with 5 MB each, but the latest entries are stored in accesslog.out0002. I hope I didn't complicate this :)
    Regards,
    Varun

  • Steps to move Data and Log file for clustered SQL Server

    Hi guys,
    We have an Active/Passive SQL Server 2008 R2 cluster environment.
    I am looking for the steps to move the data and log files of the user databases and system databases for a SQL Server clustered instance.
    Currently the data and log files reside on the same drive for both the user and system databases.
    Thanks
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Try the below link
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/468de435-3432-45c2-a50b-23519cd2686e/moving-the-system-databases-in-a-sql-cluster?forum=sqldisasterrecovery
    -Prashanth

  • '-1' bytes in log file - iPlanet Web Proxy Server 3.6

    I'm running iPlanet Web Proxy Server 3.6 and getting strange results in the log file when using the extended format: where the number of bytes should be (c1, the content-length sent to the client by the proxy), I regularly get a '-1' instead of the number of bytes.
    Can anyone tell me where this is coming from and how to stop it?

    Someone in the Web Proxy Server forum might. I guess you accidentally posted in the Web Server forum. However, if your question is time- or business-critical, you should probably contact Sun directly: http://www.sun.com/support

  • Log File Issue In SQL server 2005 standard Edition

    We have a database of size 375 GB. The data file has 80 GB of free space within it. When trying to rebuild the indexes we had 450 GB of free space on the disk where the log file resides. The rebuild index activity failed due to the space issue; we added more space and got the job done successfully.
    The log file grew up to 611 GB to complete the index rebuild.
    Version: SQL Server 2005 Standard Edition. Is there a way to estimate the space required for an index rebuild in this version?
    I am aware we normally allocate 1.5 times the size of the data file, but in this case that was totally wrong.
    Any suggestions with examples would be appreciated.
    Raghu

    OK, there's a few things here.
    Can you outline for everybody the recovery model you are using, the frequency with which you take full, differential and transaction log backups.
    Are you selectively rebuilding your indexes or are you rebuilding everything?
    How often are you doing this? Do you need to?
    There are some great resources on automated index maintenance, check out
    this post by Kendra Little.
    Depending on your recovery point objectives I would expect a production database to be in the full recovery mode and as part of this you need to be taking regular log backups otherwise your log file will just continue to grow. By taking a log backup it will
    clear out information from inactive VLF's and therefore allow SQL Server to write back to those VLF's rather than having to grow the log file. This is a simplified version of events, there are caveats.
    A VLF will be marked as active if it still has an open transaction in it or there is a HA option that still requires that data to be available as that data has not been copied to another node yet.
    Most customers that I see take transaction log backups every 15 - 30 minutes, but this really does depend upon how much data your company can afford to lose. That's another discussion for another day.
    Make sure that you take a transaction log backup prior to your job that does your index rebuilds (hopefully a smart job not a sledge hammer job).
    As mentioned previously swapping to bulk logged can help to reduce the size of the amount of information logged during index rebuilds. If you do this make sure to swap back into the full recovery model straight after and perform a full backup. There are
    problems with the ability to do point in time restores whilst in the bulk logged recovery model, so you need to reduce the amount of time you use it.
    Really you also need to look at how your indexes are created does the design of them lead to them being fragmented on a regular basis? Are they being used? Are there better indexes out there that can help performance?
    Hopefully that should put you on the right track.
    If you find this helpful, please mark the post as helpful,
    If you think this solves the problem, please propose or mark it as an answer.
    Please provide details on your SQL Server environment such as version and edition, also DDL statements for tables when posting T-SQL issues
    Richard Douglas
    My Blog: Http://SQL.RichardDouglas.co.uk
    Twitter: @SQLRich

  • Read the c2 log file of the sql server using java

    Hi All,
    I want to read the C2 audit log file of SQL Server using core Java. How is this possible? If anybody knows about this, please share some sample code to help me.
    I have also been searching the net but am not getting any results about this, so please help me with this task.
    awaited person


  • Log file shrinking in SQL server

    Hi,
    I have a log file with initial size of 80 GB in C drive.
    Now we are having a space issue on the C drive, so I tried to shrink the log file, but it will not reduce below its initial size.
    Is this the default behavior, or did I miss something while shrinking?
    I used the DBCC SHRINKFILE option to shrink it.
    How can I change the initial size of the log file?
    If it has been set to 80 GB, is that why I am not able to free space on the C drive?
    Thanks,
    Vinodh Selvaraj

    Hello,
    Please first check the log reuse wait state of the databases; you may have to run an additional log backup before you can shrink the log file:
    select name, log_reuse_wait_desc
    from sys.databases
    order by name
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Uploading a file using a php script while running application with LCDS

    Hi! I'm developing an application under Flex 2 / Java, running on an LCDS / JRun server.
    I'm trying to add upload capabilities and I'm using a PHP script for the upload part.
    First I just tried to put the script in the app directory. That doesn't work.
    After that I set up an Apache server hosting a small web site with my script. It tells me that my file was successfully uploaded, but I can't find the file. The Apache log gives me no error.
    Can someone help me?

    Originally, I had problems w/ the file being placed in
    C:/whatever.ext b/c I wasn't using relative paths.
    This is the code I use:
    $MAXIMUM_FILESIZE = 1024 * 1024 * 2; // 2MB
    $newFileLoc = "./wherever/file.jpg";
    if ($_FILES['Filedata']['size'] <= $MAXIMUM_FILESIZE) {
        // move the upload out of PHP's temp area into a local temporary folder
        move_uploaded_file($_FILES['Filedata']['tmp_name'],
            "./temporary/".$_FILES['Filedata']['name']);
        // then rename it to its final location and make it readable
        rename("./temporary/".$_FILES['Filedata']['name'], $newFileLoc);
        chmod($newFileLoc, 0777);
    }
    Modified from this article by Adobe:
    http://livedocs.adobe.com/flex/201/html/wwhelp/wwhimpl/common/html/wwhelp.htm?context=Live Docs_Book_Parts&file=17_Networking_and_communications_173_6.html

  • How to access files from 2nd hard drive while running Mountain Lion

    Not sure the best way to word this; if there is a forum thread with an answer already, I apologize. I recently upgraded to an SSD split into two partitions, one with Windows 8 and the other half running Mountain Lion. I replaced the SuperDrive with the old hard drive that has Snow Leopard and all my documents. The old SATA HD appears and the file structure is visible, but I cannot access any files; I receive an access denied message. Any help would be appreciated. A secondary, less important issue: due to limited space on the SSD, I would like to move the user data to the old HD, so it would boot ML from the SSD but pull and save all downloads, documents, music, etc. onto the old SATA HD.

    When you get the drive permissions worked out, and the home directory duped from the SSD to the other HDD, there is a way to tell the OS where the new home directory resides.
    System Preferences > Users & Groups
    Unlock the preference pane. Now right click on your User name ( Admin). The pop-up will say Advanced Options ...
    Home Directory: Choose...  and pick the new HDD home directory location.
    Lock the panel down again.
    Done.

  • TP return code - Different between SE01 and AIX log file

    Hello all,
    For the last 2-3 days our transports (done manually at AIX level) on a test system have been giving a return code 8 in the log file on the Unix server, while when we check SE01 the transport log shows return code 0 and everything is imported successfully.
    The script on unix is as follows:
    tp addtobuffer $NAME ST2 pf=TP_DOMAIN_SM2.PFL
    tp import $NAME client=999 ST2 pf=<Profile_name> > $LOG
    I have searched in SAP Net and found nothing.
    Any idea?
    Rgds,
    Loukas

    This is also the log of the transport request (file <request>.<SID>), where the difference in the return code is apparent:
    1 ETP199X######################################
    1 ETP156 GENERATION OF REPORTS AND DYNPROS
    1 ETP101 transport order     : "SM2K941036"
    1 ETP102 system              : "ST2"
    1 ETP108 tp path             : "tp"
    1 ETP109 version and release : "340.16.37" "640"
    1 ETP198
    1 EPU141 Generation of programs and screens for transport request "SM2K941036"
    4 EPU142      on application server "sapbrp03"
    1 EPU143 Only generates programs with LOAD versions
    1 EPU145 Start on "26.06.08" at "08:22:46"
    1 EPU144 -
    2 EPU146XGeneration of the transported programs
    2 EPU144 -
    3 EPU165 Program "ZSCO_INTERFACE_MERCUR" successfully generated
    4 EPU153 Database COMMIT executed
    2 EPU144 -
    2 EPU147XGeneration of the users of the transported Includes
    2 EPU144 -
    2 EPU144 -
    1 EPU149 Ended on "26.06.08" at "08:22:47"
    1 EPU150 No. of programs  /Min/Avg/Max (sec): "1"   "1" "1" "1"
    1 EPU144 -
    4 EPU153 Database COMMIT executed
    1 ETP156 GENERATION OF REPORTS AND DYNPROS
    1 ETP110 end date and time   : "20080626082247"
    1 ETP111 exit code           : "0"
    1 ETP199 ######################################

  • How can I change the log file path instead of storing the log in server.log?

    Hi,
    I have created a domain and modified the "log-file" attribute in the "virtual-server" element to point to the new log file path.
    But when I start up my domain, it still says: "Log redirected to DOMAIN_LOCATION/logs/server.log".
    Why?
    Why doesn't it log to the file I specify? How can I change it?
    Thanks.
    Ken

    I have changed the logging option to my specific path in the admin console as you said, and I have also changed the logging attribute in the domain too.
    But there is still logging info going to domains/<dom_name>/logs/server.log instead of the path and file I specified.
    Is it possibly related to a Linux user role setting? The Sun ONE AS is installed and configured by the root user, but the domain is created for another user, hence I want to forward all logging info to that user's home path.
