Transport Set Debugging Logs

Hello,
According to Metalink Note 2166087.1, debugging can be enabled for transport sets by editing utldbmgr.pks, setting DEBUG_MODE := true, and then recompiling. I did this, but I don't see any additional information in the logs shown in the Export/Import Transport Set portlet. Are there other logs somewhere else? The note doesn't specify where the debug logs are written. Does debugging actually work for anyone?
I need debugging because I get a Precheck Warning for the application and cannot import the application, but I don't know why.
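(A sketch of the recompile step, assuming the standard approach of re-running the edited spec in SQL*Plus as the portal schema owner - the note gives no more detail:)
-- after editing utldbmgr.pks and setting DEBUG_MODE := true
SQL> @utldbmgr.pks
SQL> show errors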
Thanks.

Hi Snigdha,
I checked that too, but there I only found a directory for an old month, not for the current month: /usr/sap/SID/JC00/j2ee/temp/pcd/transport/reports/Import/2011-02-02
Is that because I got "Access denied" for most of the objects while importing? The status of the remaining objects was "OK".
Thanks,
Saleem

Similar Messages

  • Importing a Transport Set

    We get the following error message (the log is clean up to that point) when we import a transport set to our new machine.
    It is a clean export from the target and I have gone through Oracle's troubleshooting documents.
    Importing Items...
    [07-01-05 11:44:49][Error: (WWU-74256)] context = (Fatal Error)WWUTL_API_IMPORT_PAGEGROUP.MergeTableForInsert user = PORTAL An unknown fatal error has occurred during the import of transport set. See the debug logs for details.
    Context: (Fatal Error)WWUTL_API_IMPORT_PAGEGROUP.MergeTableForInsert trigger: wwsbr_that_briud_trg.select wwv_subtypeattributes: No data found wwnls_api.add_string: wwnls_api.add_string: wwnls_api.add_string: wwnls_api.add_string: wwnls_api.add_string: wwnls_api.add_string: wwpth_api_private.get_page_and_page_group_id: Context
    Any ideas?

    Hi Lax,
    I have never seen this. Have you tried to get help from the Oracle Support Services via the Metalink website (http://metalink.oracle.com)?
    It seems to me that there is an ORA-1403 somewhere, due to a query run by the Portal export/import. Oracle Support may be able to help you further, and they may even try to reproduce your problem.
    It would also be useful to mention the Oracle Database version of both the source and target Portal databases.
    I hope it helps...
    Cheers,
    Pedro.
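    (A quick way to capture those versions is the standard banner query, run on both the source and target databases:)
        select banner from v$version;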

  • How to set DEBUG severity for File adapter in J2SE AE

    Hi,
    Can anyone help me in setting the severity level to DEBUG for file adapter?
    I have tried to set it in the logging.properties file as given below:
    formatter[Trace]         = TraceFormatter
    formatter[Trace].pattern = %24d [%t] %-6s %l - %m
    log[Trace]           = FileLog
    log[Trace].pattern   = swb_%g.log
    log[Trace].limit     = 2000000
    log[Trace].cnt       = 5
    log[Trace].formatter = formatter[Trace]
    com.sap.aii.messaging.adapter.severity = DEBUG
    # logging components
    #com.sap.aii.axis.logs = +log[Trace]
    #com.sap.aii.axis.severity = DEBUG
    #org.apache.axis.logs = +log[Trace]
    #org.apache.axis.severity = DEBUG
    Please correct me if I have done anything wrong, and suggest how to set the DEBUG level as soon as possible.
    Best Regards,
    Soorya
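    (Following the pattern of the commented-out axis entries above, the adapter package presumably also needs to be associated with the trace log, not just given a severity - a sketch, not verified against the J2SE Adapter Engine documentation:)
    com.sap.aii.messaging.adapter.logs     = +log[Trace]
    com.sap.aii.messaging.adapter.severity = DEBUG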

    Hi Sejoon,
    when we changed the host of the XI server in one of our projects, one of the things we had to change was the exchange profile entries:
    http://server:port/exchangeProfile/index.html
    to see how your adapter engine is configured
    http://server:port/sld
    SLD -> Content Maintenance
    then
    Subset: landscape description
    Class: XI Adapter Framework
    but check the exchange profile entries first
    Regards,
    michal

  • Error/Warning Messages in Agent Debug Log Files

    I have noticed the following error message in the Agent Debug Log Files and was wondering if anybody could shed any light on their meaning or what might be causing them.
    Warning 1855:####### ServiceEngine: Service::getPolicyResult():No passwd value in session response.
    Warning 1855:####### PolicyAgent: Access Manager Cookie not found.
    Error 1855:####### AM_SSO_SERVICE: SSOTokenService::getSessionInfo(): Error 18 for sso token ID <SSO_TOKEN_ID>

    hi
    Looks like the message numbers you are processing in the function module CRM_MESSAGE_COLLECT are only for 'errors'.
    Please go to Transaction: CRMC_MSGS
    Look at the following Application Areas
    CRM_ORDER
    CRM_ORDERADM_H
    CRM_ORDERADM_I
    CRM_ORDER_MISC
    CRM_ORDER_OUTPUT
    Select one of the above and double-click 'Single Messages' on the left side of the screen. Select the relevant message number from the list and check the 'S' flag. Press F1 (help) on this box and you will see the following:
    "Only error messages are saved in the processing log. If a message definitely should be saved - contrary to this rule - then the indicator needs to be set.
    You need to make sure that a message marked like this can be deleted again - by calling up the function module CRM_MESSAGES_DELETE in the appropriate place in the program."
    When you do these steps, please note the warning 'Do not make any changes (SAP data)'. I have not tested or used this myself, so please check whether it impacts other functions; do the necessary R&D before you proceed further. Also explore tables CRMC_MESSAGES_S and CRMC_MESSAGES and see if this helps.
    Reward points if this helps.
    Regards
    Manohar

  • How to collect debug logs for custom module only in EBS by AppsLog.write?

    Hi,
    Customer is trying to retrieve logs for their custom module (XXKCZ) and
    write them to /log/kcz.log using the AppsLog.write() method (refer to the sample code below).
    Customer has set these profile values at the application level and removed the values
    at the site level. The file /log/kcz.log is created but does not have any lines in it.
    Is there something wrong with the coding of AppsLog.write?
    Application Name "xxkcz"
     FND: Debug Log Enabled "Y"
     FND: Debug Log Module "xxkcz%"
     FND: Debug Log Level "UNEXPECTED"
     FND: Debug Log Filename "/log/kcz.log"
     ---- [Sample Code starts] ----
     package jp.kaikei.oracle.apps.xxkcz.util.server;

     import oracle.apps.fnd.common.AppsContext;
     import oracle.apps.fnd.common.AppsLog;
     import oracle.apps.fnd.common.Log;

     public class CommonLog {
         private static final String CLASS_NAME =
             "jp.kaikei.oracle.apps.xxkcz.util.server.CommonLog";

         public void initLog() {
             AppsContext appsCtx = new AppsContext();
             AppsLog appsLog = appsCtx.getAppsLog();
             String appShortName = "xxkcz";
             String message = "Message started";
             String moduleName = appShortName + "." + CLASS_NAME + ".initLog";
             // write at UNEXPECTED severity so it passes the FND: Debug Log Level filter
             appsLog.write(moduleName, message, Log.UNEXPECTED);
         }
     }
     ---- [Sample Code ends] ----
    Any advice?
    Thanks.

    Maybe some initialization is missing.
    I have created my own writeFile method; if you want, you can try this:
    // Method to write the debug messages into a file
    // (requires java.io.BufferedWriter, java.io.FileWriter, java.io.IOException and java.util.Date)
    public void writeFile(String debugMsg, String userName)
    {
        String logDir = (String) this.getOADBTransaction().getProfile("XXPO_OA_PDT_PDT_DEBUG_DIR");
        BufferedWriter out;
        //DateFormat dateFormat = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss");
        Date date = new Date();
        // if (logDir == null)
        //     logDir = "/ebsgld/ebsgldcomn/temp/";
        String fileName = logDir + userName + "-pdt_log.log";
        //createDiag(XXPO_APP_SHORT_NAME, "[AM]fileName " + fileName);
        try
        {
            out = new BufferedWriter(new FileWriter(fileName, true));
            out.write("\n\n" + debugMsg);
            out.close();
        }
        catch (IOException e)
        {
            //System.out.println("exception in Writing the log file " + e.toString());
            //createDiag(XXPO_APP_SHORT_NAME, "[AM] Exception in Writing the log file " + e.toString());
        }
        // Only the start and end markers of the flow go to a second file
        if (debugMsg.indexOf("##########") != -1)
        {
            String impFileName = logDir + userName + "-pdt_imp_log.log";
            // System.out.println("File Name is " + fileName);
            try
            {
                out = new BufferedWriter(new FileWriter(impFileName, true));
                out.write("\n\n" + debugMsg);
                out.close();
            }
            catch (IOException e)
            {
                createDiag(XXPO_APP_SHORT_NAME, "[AM] exception in Writing the log imp file " + e.toString());
            }
        }
    }
    With regards,
    Kali.
    OSSI.

  • Problem in extracting from the import transport set

    I am exporting all of the major page groups and providers from my source box to the target machine. Both are using the same database version, app server, and Portal version (904).
    For one of the biggest providers, I created transport sets on the export system and ran the NT script to create a dump file. I moved the dump file to the import system and imported its transport set from this file. Now I am trying to extract the imported transport set.
    Here I am facing problems one after the other:
    1. The extraction from the imported transport set is taking too much time. More than 36 hours have already passed and it has not finished.
    2. It does seem to populate, though, as after several hours I see the log file with lines showing messages like 'importing component ... so and so ...'. However, the whole process is not finishing or getting any faster. There is no other process running on this target box.
    3. Surprisingly, the log files update the start time of the script every few hours.
    Can somebody please help? This is a very time-sensitive request.
    Thanks.

    Hi,
    If I remember correctly, this error is due to field selections in R/3: whatever fields you have given on the selection tab in RSA3 in ECC, give the same selections while scheduling the InfoPackage; then you will be able to see the records in the PSA for a full load.
    If the problem still persists after doing this, get back to me.
    Regards,
    shiva

  • How to generate a debug log from AR forms

    Product: FIN_AR
    Date written: 2006-05-24
    How to generate a debug log from AR forms
    ============================
    PURPOSE
    Enable a debug log to be generated and checked when a problem occurs in one of the main AR forms.
    Explanation
    Main AR forms that provide debugging:
    Transactions workbench (ARXTWMAI)
    Receipts workbench (ARXRWMAI)
    Collections (ARXCWMAI)
    Note that the ARXCUDCI form is not covered.
    Debug setup steps
    Caution:
    If you are on 11.5.9 or later,
    the <AR: Enable Debug Message Output> profile must be enabled at the user level.
    On 11.5.10 with ARP_STANDARD (ARPLSTDB.pls) version 115.36, proceed as follows.
    To turn on the debugging:
    Set the following profiles:
    FND: Debug Log Enabled = YES
    FND: Debug Log Filename = NULL
    FND: Debug Log Level = STATEMENT (most detailed log)
    FND: Debug Log Module = % or ar% or even ar.arp_auto_rule%
    If the 11.5.10 maintenance pack (Patch 3140000) has been applied,
    <FND: Debug Log Enabled> is replaced by <FND: Log Enabled>, and
    <FND: Debug Log Level> by <FND: Log Level>.
    1. Go to the form where the problem occurs and start reproducing it, stopping just before the point where the problem appears.
    2. From the menu, click Help > Diagnostics > Examine.
    3. In the Block LOV, select PARAMETER.
    4. In the Field box, type AR_DEBUG_FLAG directly.
    5. In the Value field, enter the following:
    FS <path> <file>
    Keep in mind that there must be a blank between FS, the path, and the file name.
    a)
    FS
    FS creates a file, displaying Forms and Server debug messages.
    b)
    <path>
    This is where the AR Debug Log file will be written to.
    To find out where the log file must be saved, type the following in
    SQL*Plus:
    select value from v$parameter
    where upper(name) = 'UTL_FILE_DIR';
    c) Decide what to call the debug log file. For example, debug2.dbg
    If the file specified already exists, the new debug messages will be
    appended to it. In general, it is recommended to define a new debug file for each problem reported.
    d)
    Full example:
    Block: PARAMETER
    Field: AR_DEBUG_FLAG
    Value: FS /sqlcom/log debug2.dbg
    After setting this up, reproduce the error.
    Check the generated debug log.
    Example
    N/A
    Reference Documents
    Note 152164.1 -- How To Create A Debug Log File In An AR Form

    Nope

  • DNS debug logging to determine if 2008R2 DNS server is still being queried?

    We are removing multiple DNS servers. We have removed those servers from the DNS settings in all DHCP scopes, and ran a script across all Windows servers to make sure they are not being used. There still might be Linux, telecom, or workstation hosts set manually that we do not know about, so I was going to use debug logging just to look for queries that are coming in and try to determine where they came from. I have debug logging set to log packets for Incoming, UDP, Queries/Transfers, and Request, but I am having trouble figuring out this log. Here is an example of what I am seeing. Does anyone have a reference for this? I am having trouble finding one.
    Is this saying that a computer named POS188 made a query for a host (A) record? Or is it saying that a query came in from 128.1.60.221 for a host (A) record named POS188? I am just looking for a reference to help explain this. The good news is that the only IP address that shows up (128.1.60.221) is the IP of the DNS server, but the only thing I could find for reference was this:
    http://technet.microsoft.com/en-us/library/cc776361(WS.10).aspx
    9/16/2014 12:05:23 AM 1144 PACKET  000000000624CBD0 UDP Rcv 128.1.60.221    8952   Q [0001   D   NOERROR] A      (8)POS188(11)contoso(3)com(0)
    Thanks,
    Dan Heim

    Hi Dan,
    The example you provide means that the DNS server received a packet sent by 128.1.60.221, and the packet is a query for a host (A) record named POS188.contoso.com.
    The detailed meaning of each field of the message is shown below; the same layout is used in dns.log.
    Message logging key (for packets - other items use a subset of these fields):
    Field  1: Date
    Field  2: Time
    Field  3: Thread ID
    Field  4: Context
    Field  5: Internal packet identifier
    Field  6: UDP/TCP indicator
    Field  7: Send/Receive indicator
    Field  8: Remote IP
    Field  9: Xid (hex)
    Field 10: Query/Response     (R = Response, blank = Query)
    Field 11: Opcode             (Q = Standard Query, N = Notify, U = Update, ? = Unknown)
    Field 12: [ Flags (hex)
    Field 13: Flags (char codes) (A = Authoritative Answer, T = Truncated Response, D = Recursion Desired, R = Recursion Available)
    Field 14: ResponseCode ]
    Field 15: Question Type
    Field 16: Question Name
    The corresponding example:
    9/16/2014 12:05:23 AM 1144 PACKET  000000000624CBD0 UDP Rcv 128.1.60.221    8952   Q [0001   D   NOERROR] A      (8)POS188(11)contoso(3)com(0)
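    If you need to sweep a large dns.log for remaining clients, a small parser along these lines can pull the remote IP and query name out of each packet line (a minimal sketch in Java, written against the field layout above; the class name and pattern are illustrative only, and you would extend the pattern for Snd lines or other record types):
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class DnsDebugLogLine {
        // Matches the direction, remote IP, question type and encoded question name
        // of one packet line, following the field key above.
        private static final Pattern PACKET = Pattern.compile(
                "(UDP|TCP)\\s+(Snd|Rcv)\\s+(\\S+)\\s+\\S+\\s+.*\\]\\s+(\\S+)\\s+(\\S+)");

        public static void main(String[] args) {
            String line = "9/16/2014 12:05:23 AM 1144 PACKET  000000000624CBD0 UDP Rcv "
                    + "128.1.60.221    8952   Q [0001   D   NOERROR] A      (8)POS188(11)contoso(3)com(0)";
            Matcher m = PACKET.matcher(line);
            if (m.find()) {
                String remoteIp = m.group(3);   // the host that sent the query to this server
                String qType    = m.group(4);   // A, AAAA, PTR, ...
                // (8)POS188(11)contoso(3)com(0)  ->  POS188.contoso.com
                String qName = m.group(5).replaceAll("\\(\\d+\\)", ".")
                                         .replaceAll("^\\.|\\.$", "");
                System.out.println(remoteIp + " asked for " + qType + " " + qName);
            }
        }
    }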
    Best Regards,
    Tina

  • Unable to set max log file size to unlimited

    Hi all,
    Hoping someone can give me an explanation of an oddity I've had. I have a series of fairly large databases. I wanted to make the database log files 8GB in size, with 8GB growth increments and an unlimited maximum file size. So I wrote the script below. It seems to have worked, but the database max size doesn't show as unlimited; it shows as 2,097,152 MB and cannot be set to unlimited using a script or in SSMS by clicking on the unlimited radio button.
    2TB is effectively unlimited anyway, but why show that rather than actually setting it to unlimited?
    USE [master]
    GO
    --- Note: this only works for SIMPLE RECOVERY MODE. For FULL / BULK RECOVERY modes you need to back up the transaction log instead of a CHECKPOINT.
    DECLARE @debug varchar(1)
    SET @debug = 'Y'
    DECLARE @database varchar(255)
    DECLARE @logicalname varchar(255)
    DECLARE @command varchar(8000)

    DECLARE database_cursor CURSOR LOCAL FAST_FORWARD FOR
        select DB_NAME(database_id) as DatabaseName,
               name                 as LogicalName
        from   master.sys.master_files
        where  file_id = 2
          AND  type_desc = 'LOG'
          AND  physical_name LIKE '%_log.ldf'
          AND  DB_NAME(database_id) NOT IN ('master','model','msdb','tempdb')

    OPEN database_cursor
    FETCH NEXT FROM database_cursor INTO @database, @logicalname

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- truncate the tail of the log so the resize below starts from a small file
        SET @command = '
            USE [' + @database + ']
            CHECKPOINT
            DBCC SHRINKFILE(''' + @logicalname + ''', TRUNCATEONLY)'
        IF (@debug = 'Y')
        BEGIN
            PRINT @command
        END
        exec (@command)

        -- resize the log file and set growth increment and maximum size
        SET @command = '
            USE master
            ALTER DATABASE [' + @database + ']
            MODIFY FILE
            ( NAME = ''' + @logicalname + '''
            , SIZE = 8000MB )
            ALTER DATABASE [' + @database + ']
            MODIFY FILE
            ( NAME = ''' + @logicalname + '''
            , MAXSIZE = UNLIMITED
            , FILEGROWTH = 8000MB )'
        IF (@debug = 'Y')
        BEGIN
            PRINT @command
        END
        exec (@command)

        FETCH NEXT FROM database_cursor INTO @database, @logicalname
    END

    CLOSE database_cursor
    DEALLOCATE database_cursor

    Hi,
    According to http://technet.microsoft.com/en-us/library/ms143432.aspx (Maximum Capacity Specifications):
    File size (data): 16 terabytes
    File size (log):  2 terabytes
    So a log file is capped at 2 terabytes, i.e. 2,097,152 MB, which is exactly the value SSMS shows when you ask for "unlimited".
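    (If you want to confirm what was actually stored, you can check the catalog; a small query, not from the original thread - max_size is reported in 8 KB pages, and a log file set to "unlimited" generally stores the 2 TB cap of 268435456 pages rather than -1:)
        select DB_NAME(database_id) as database_name,
               name                 as logical_name,
               max_size             -- in 8 KB pages; 268435456 pages = 2 TB
        from   sys.master_files
        where  type_desc = 'LOG';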
    Thanks, Andrew
    My blog...

  • Debugging Logs -endeca

    Hi all,
    We have noticed that while searching for certain items, our local instance of Endeca goes down.
    We want to debug this both at our code end and at the Endeca end.
    We can use the Endeca reference app to check the results, but if we want to find out what the root cause of the issue is, is there a debug log location on the Endeca side?
    The dgraph log just prints the query fired and no other exception.
    After this, when we do a baseline update, the error below comes up. We want to narrow down what is causing the EAC to come down.
    Any pointers?
    [11.18.12 13:16:40] SEVERE: Caught an exception while trying to remove application.
    Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
    com.endeca.soleng.eac.toolkit.application.Application removeDefinition - Caught exception while defining application 'XXXSearch'.
    Caused by org.apache.axis.AxisFault
    org.apache.axis.AxisFault makeFault - ; nested exception is:
    java.net.SocketTimeoutException: Read timed out
    Caused by java.net.SocketTimeoutException
    java.net.SocketInputStream socketRead0 - Read timed out
    Setting EAC provisioning and performing initial setup...
    Regards,
    Kartheek

    Hi,
    you might want to take a look at your process.0.log (Endeca\PlatformServices\Workspace\Logs\...)
    and if you want to have more logs from your Dgraph, put the "-v" arg in your Dgraph defaults in your AppConfig (where you indicate the number of threads)
    Excerpt from the Endeca documentation:
    Tomcat/Catalina logs (e.g. catalina.out, catalina.[date].log, tomcat_stdout.log) -
    These logs are useful when errors occur while loading the EAC or Workbench. For example,
    if the context configuration for EAC specifies the wrong path for the EAC WAR file, an error
    will occur when starting the Endeca HTTP Service and will be logged in these log files.
    Alternatively, a "clean" startup of the Endeca HTTP Service will result in no exceptions and a
    successful output of a message like "Server startup in [n] ms."
    Main log (e.g. main.0.log) - Most EAC logging goes into this log file. For example,
    exceptions thrown when invoking a shell or component are logged here. I tried to invoke a
    utility on a non-existent host and found an exception logged in this file: "The host 'foo' in
    application 'bar' does not exist.“
    Invocation log (e.g. invoke.0.log) - This file contains logs associated with EAC Web
    service invocations. For example, this file will record the exact XML content of web service
    requests and responses.
    Logs can also be written into the sub-directories: Shell, Archive, Copy.
    Messages previously logged to emgr.log in v5.1.x are now logged to webstudio.log in v6.x.
    hope that helps
    regards
    Saleh

  • [SOLVED]Couldn't open file for 'Log debug file /var/log/tor/debug.log'

    Hello,
    I'm trying to run a tor relay on my arch linux box. Trying to launch the tor daemon, here's the log via
    $ systemctl status tor.service
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.877 [notice] Tor v0.2.4.21 (git-505962724c05445f) running on Linux with Libevent 2.0.21-stable and OpenSSL 1.0.1g.
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.877 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.877 [notice] Read configuration file "/etc/tor/torrc".
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.909 [notice] Opening Socks listener on 127.0.0.1:9050
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.909 [notice] Opening OR listener on 0.0.0.0:9798
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.000 [warn] Couldn't open file for 'Log debug file /var/log/tor/debug.log': Permission denied
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.000 [notice] Closing partially-constructed Socks listener on 127.0.0.1:9050
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.000 [notice] Closing partially-constructed OR listener on 0.0.0.0:9798
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.000 [warn] Failed to parse/validate config: Failed to init Log options. See logs for details.
    May 20 11:53:10 arch tor[21726]: May 20 11:53:10.000 [err] Reading config failed--see warnings above.
    May 20 11:53:10 arch systemd[1]: tor.service: main process exited, code=exited, status=255/n/a
    May 20 11:53:10 arch systemd[1]: Unit tor.service entered failed state.
    Why the tor daemon cannot write into /var/log/tor/debug.log ?
    Here's my /etc/group
    root:x:0:root
    bin:x:1:root,bin,daemon
    daemon:x:2:root,bin,daemon
    sys:x:3:root,bin
    adm:x:4:root,daemon,nue
    tty:x:5:
    disk:x:6:root
    lp:x:7:daemon
    mem:x:8:
    kmem:x:9:
    wheel:x:10:root,nue
    ftp:x:11:
    mail:x:12:
    uucp:x:14:
    log:x:19:root
    utmp:x:20:
    locate:x:21:
    rfkill:x:24:
    smmsp:x:25:
    http:x:33:
    games:x:50:
    lock:x:54:
    uuidd:x:68:
    dbus:x:81:
    network:x:90:
    video:x:91:
    audio:x:92:
    optical:x:93:
    floppy:x:94:
    storage:x:95:
    scanner:x:96:
    power:x:98:
    nobody:x:99:
    users:x:100:
    systemd-journal:x:190:
    nue:x:1000:
    avahi:x:84:
    lxdm:x:121:
    polkitd:x:102:
    git:x:999:
    transmission:x:169:
    vboxusers:x:108:
    tor:x:43:
    mysql:x:89:
    Last edited by giuscri (2014-05-20 12:18:56)

    SidK wrote: You must have modified your torrc to print to that log file. systemd starts the service as the tor user (see /usr/lib/systemd/system/tor.service). So if you want to log to a file, the tor user must have write access to it. By default, however, tor is set to log to the journal, which doesn't require any special permissions.
    Yes, I did edit the torrc file since I wanted the log to be stored in that file. Indeed:
    ## Logs go to stdout at level "notice" unless redirected by something
    ## else, like one of the below lines. You can have as many Log lines as
    ## you want.
    ## We advise using "notice" in most cases, since anything more verbose
    ## may provide sensitive information to an attacker who obtains the logs.
    ## Send all messages of level 'notice' or higher to /var/log/tor/notices.log
    #Log notice file /var/log/tor/notices.log
    ## Send every possible message to /var/log/tor/debug.log
    Log debug file /var/log/tor/debug.log
    ## Use the system log instead of Tor's logfiles
    Log notice syslog
    ## To send all messages to stderr:
    #Log debug stderr
    I missed the file systemd uses to choose who the process owner is.
    Of course, I could edit /usr/lib/systemd/system/tor.service so that root becomes the process owner; or I could add the user I use every day to the root group, then change the permissions of /var/log/tor/debug.log so that it is also writable by the folks in the root group.
    Yet both of these seem a bit unsafe ...
    What is the best choice, to you guys?
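    (For reference, a third option that avoids touching root ownership at all - a minimal sketch, assuming the daemon really does run as the tor user named in tor.service and the unit applies no extra filesystem sandboxing - is to give that user its own log directory:)
    # hand the log directory to the tor user instead of loosening root's permissions
    mkdir -p /var/log/tor
    chown tor:tor /var/log/tor
    chmod 0750 /var/log/tor
    systemctl restart tor.service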
    Thanks,

  • What is the extension for debug log file name?

    Hi,
    1) Please let me know what the extension for the debug log file is...
    is it .dbg or .log?
    In one of the SRs, the service engineer asked to put .log as the extension; is this correct?
    Here is what he said:
    a) Enable the following Profiles at user level:
    OM: Debug Level = 5
    INV: Debug Trace = YES
    INV: Debug Level = 11
    INV: Debug File = [directory value from above query]/logfilename.log
    (make sure that you have write permission for this file and directory)
    WSH: Debug Enabled - Yes
    WSH: Debug Level - Statement
    2) do the end-user (for whom trace is enabled) need to have read and write permissions on the directory and the log file?
    Thanks
    Raju
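    (The "above query" the engineer refers to is not quoted here; presumably it is the standard lookup of writable directories, the same query shown in the AR note earlier in this thread:)
        select value from v$parameter
        where upper(name) = 'UTL_FILE_DIR';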

    user652672 wrote:
    1) Please let me know what is the extension for debug log file....
    is it .dbg or .log ?
    INV: Debug File = [directory value from above query]/logfilename.log
    (make sure that you have write permission for this file and directory)
    It is whatever you set in the value of the "INV: Debug File" profile option (according to the above value, it will be logfilename.log).
    2) Do the end users (for whom trace is enabled) need to have read and write permissions on the directory and the log file?
    No, just make sure the directory is writable by the oracle and applmgr users.

  • Debug logging for mod_ossl

    Hi,
    I tried to set the log level for mod_ossl to debug, but the actual "logging depth" of mod_ossl messages seems to only go as far as the "trace" level. That is:
    even when the log level is set to debug, the actual log only shows "trace" (or higher level) messages; there are no "debug" messages.
    Has anyone else come across anything similar? Is that a
    bug, a feature or just my mistake?
    ias version: 9ias903,
    settings:
    Directive:
    <VirtualHost blahblah... >
    SSLLogLevel debug
    </VirtualHost>
    Will be grateful for any kind of input....
    Thanks,
    Mateja

    John
    I'm not aware of any debugging options. Watching with Ethereal might reveal
    something. But I've always found connecting via the path to the domain to be
    more reliable than connecting via the domain object. So I routinely read the
    "NGW: Location" attribute of the domain object, and attempt to connect
    directly via the path if connection via the domain object fails.
    John
    "John Pozdro" <[email protected]> wrote in message
    news:P9SOi.10734$[email protected]. .
    > Our client app is seeing ConnectByDN fail under some circumstances,
    > despite always using the same DN and tree as inputs. I suspect
    > something rotten in our connection management, but need some more
    > data to debug this further. Is there some debug logging that I
    > can enable on client or server that would shed some light on what's
    > going on with the failed connection?
    >
    > We're running the client as a Windows service in a NetWare 6.5/
    > eDir 8.8/GroupWise 7 environment.
    >
    > Thanks in advance,
    > John
    >
    >

  • Error while displaying Transport Sets

    Dear All
    I just imported a portal page group using the import utility.
    Now, after this, when I click on the transport set name in Portal Builder, it displays:
    Error: Transport Set mycompany does not contain any objects. (WWU-52926)
    Can you guys please tell me where the mistake is?
    Moreover, I am looking for a sample portal page group (script file and export dump file) that I want to deploy, so if anyone can send me one, I will really appreciate it.
    Neeraj

    Hi Yogesh,
    I haven't installed any patch on my client machine.
    Regards,
    Dilip Kumbhar

  • How to set the Logging in SAP Web AS through Netweaver 7.1 ?

    Hi,
    Can you please help us with the steps to set the log level and log files using SAP NetWeaver 7.1?
    Thanks

    What kind of logs are you trying to set? If these are work process logs, you can set them in SM50, for example.
    Regards,
    Tiago
