Tmp file locations and Log file locations


I have been having a real headache too trying to get WebLogic to put all its
log files and temporary files in directories that I specify. It seems that
WebLogic has a mind of its own as files get created all over the place.
Trying to configure these really basic settings has proved extremely
awkward. Why is it such a nightmare to do?
"Scott Jones" <[email protected]> wrote in message
news:3af0179d$[email protected]..
OK, I changed the relative path for the log files.
1. I am still getting app-startip.log and app0000.tlog in the root
directory and not in the ./logs directory. Any other settings?
2. I still do not know how to redirect the tmp_ejbdomain.port directory.
Any suggestions?
Scott
"Sanjeev Chopra" <[email protected]> wrote in message
news:3aef0a42$[email protected]..
"Scott Jones" <[email protected]> wrote in message
news:3aef05be$[email protected]..
I have a domain configured and running with two applications. WLS 6 is
placing the following logs for each application at the same dir level as
the config dir. It is also creating a tmp_ejb directory at the same level.
1. How do I tell WLS 6 to place log files in a diff directory?

In Admin Console: modify the property Server -> Configuration -> Logging ->
FileName
In config.xml: the 'FileName' attr can be set to an absolute path OR a path
relative to Server.RootDirectory
<Server EnabledForDomainLog="true" ListenAddress="localhost"
ListenPort="7701" Name="managed"
StdoutDebugEnabled="true" ThreadPoolSize="15">
<Log FileCount="10" FileMinSize="50" FileName="managed.log"
Name="managed" NumberOfFilesLimited="true"
RotationType="bySize"/>
</Server>
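For example, a sketch of the relative-path variant (the path resolves against Server.RootDirectory, so the server would write into its ./logs subdirectory, assuming that directory exists):

```xml
<Server EnabledForDomainLog="true" ListenAddress="localhost"
        ListenPort="7701" Name="managed"
        StdoutDebugEnabled="true" ThreadPoolSize="15">
    <Log FileCount="10" FileMinSize="50" FileName="logs/managed.log"
         Name="managed" NumberOfFilesLimited="true"
         RotationType="bySize"/>
</Server>
```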
2. How do I tell WLS 6 to place tmp_ejb directories in a diff
directory
Thanks,
Scott

Similar Messages

  • Shell Script to grep Job File name and Log File name from crontab -l

    Hello,
I am new to shell scripting. I need to write a shell script where I can grep the name of the file, i.e. the .sh file, and the log file from crontab -l.
    #51 18 * * * /home/oracle/refresh/refresh_ug634.sh > /home/oracle/refresh/refresh_ug634.sh.log 2>&1
    #40 17 * * * /home/oracle/refresh/refresh_ux634.sh > /home/oracle/refresh/refresh_ux634.sh.log 2>&1
In crontab -l there are many jobs; I need to grep the job name, like 'refresh_ug634.sh', and the corresponding log name, like 'refresh_ug634.sh.log'.
I am thinking of making a universal script that can grep the job name, log name and hostname for one server.
Then, suppose I modify the refresh_ug634.sh script to call that universal script and echo those values when the script gets executed.
    Please can anyone help.
    All i need to do is have footer in all the scripts running in crontab in one server.
    job file name
    log file name
    hostname
    Please suggest if any better solution. Thanks.

    957704 wrote:
    I need help how to grep that information from crontab -l
    Please can you provide some insight how to grep that shell script name from list of crontab -l jobs
crontab -l > cron.log -- exporting the contents to a file
cat cron.log | grep something -- need some commands to grep that info

You are missing the point. This forum is for discussion of SQL and PL/SQL questions. What does your question have to do with SQL or PL/SQL?
    It's like you just walked into a hardware store and asked where they keep the fresh produce.
I will point out one thing about your question. You are assuming every entry in the crontab has exactly the same format. Consider this crontab:
    #=========================================================================
    # NOTE:  If this is on a clustered environment, all changes to this crontab
    #         must be replicated on all other nodes of the cluster!
    # minute        (0 thru 59)
    # hour          (0 thru 23)
    # day-of-month  (1 thru 31)
    # month         (1 thru 12)
    # weekday       (0 thru 6, sunday thru saturday)
    # command
    #=========================================================================
    00 01 1-2 * 1,3,5,7 /u01/scripts/myscript01  5 orcl  dev
    00 04 * * * /u01/scripts/myscript02 hr 365 >/u01/logs/myscript2.lis
00 6 * * * /u01/scripts/myscript03  >/u01/logs/myscript3.lis

The variations are endless.
When you get to an appropriate forum (this one is not it) it will be helpful to explain your business requirement, not just your proposed technical solution.
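Forum mismatch aside, the grep/awk part of the question is simple for entries shaped like the two shown above (`script > logfile 2>&1`). A minimal sketch, assuming that fixed shape; entries without the `>` redirection will simply not match:

```shell
#!/bin/sh
# Print job script, log file, and hostname for each active crontab entry
# that redirects output with '>' (assumes the 'job.sh > job.sh.log 2>&1'
# shape shown above; other crontab layouts will not match).
parse_cron() {
  awk -v host="$(hostname)" '
    !/^#/ && / > / {
      for (i = 1; i <= NF; i++)
        if ($i == ">")
          printf "job=%s log=%s host=%s\n", $(i - 1), $(i + 1), host
    }'
}

# Typical use:
#   crontab -l | parse_cron
```

For the "footer in all the scripts" idea, each script could call this once and append the output to its own log.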

  • WSUS catalogue size, location and log file

    Hi All,
Quick question: where is the catalogue stored on the WSUS server? I am trying to find the size of the latest update synchronization that the clients use, and the total size of the catalogue; is this possible? Our clients caused an issue pulling a 35MB
catalogue file from the server; does this seem the correct size for a SCCM/WSUS delta update?
I had a look at the WSUS-related logs (change and software distribution) but couldn't find the right info. It was a long day, so any help is appreciated, as I could simply have missed it in those log files; a pointer in the right direction would be great.
Thanks for your help.
    many thanks

    Hi,
Have you installed SCCM with WSUS? If yes, to get better help, please post your question in the SCCM forum.
    https://social.technet.microsoft.com/Forums/systemcenter/en-US/home
    Best Regards.
    Steven Lee Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Problem with File Handler and log files

I'm developing a polymorphic structure for file handling, especially for the log file of a server-based app... It's working fine except for one thing. The log file that goes into the File Handler comes as a parameter to the class; the problem is that when it writes the file, though it DOES know where it should go, it doesn't do it, and it writes the message into some other log file belonging to another process...
Does someone know how to avoid or fix this? Any ideas or tips would be great!!

    Immediately below the Tabs saying "Files" and Assets" is a small box 
    with arrow on the right to show the drop down list.        In the box 
    on the right there's an icon of two networked computers.  Then it 
    says, "ftp://Hill farm Web Site"  which is the name of my website.     
    If I click on the arrows to pull up the drop-down box,  I get four 
    options divided by a line.   Above the line the options are Computer, 
    HD and ftp://Hill farm Web Site.  Below the line it says manage sites.
    Below this is list of files that make up my website in a directory 
    structure.   The header for the first column reads, "Local Files",  
    which appears to be untrue, because the top line in the directory 
    structure below reads,  "ftp://Hill farm Web Site".
    Does this help?
    regards
    David

  • Log file sync vs log file parallel write probably not bug 2669566

    This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
    Version : 9.2.0.8
    Platform : Solaris
    Application : Oracle Apps
    The number of commits per second ranges between 10 and 30.
    When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
    Below just 2 samples where the ratio is even about 20.
    "snap_time"     " log file parallel write avg"     "log file sync avg"     "ratio
    11/05/2008 10:38:26      8,142     156,343     19.20
    11/05/2008 10:08:23     8,434     201,915     23.94
    So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
    First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
    And I think that it proves that I am NOT hitting this bug.
    Below is a sample of the output for the log writer.
    -- End of snap 3
    HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
    DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
    DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
    DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
    DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
    DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
    DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
    When adding the DELTA/SEC (which is in micro seconds) for the wait events it always roughly adds up to a million micro seconds.
    In the example above 781036 + 210432 = 991468 micro seconds.
    This is the case for all the snaps taken by snapper.
    So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
    So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
    Any clues?

    Yes that is true!
    But that is the way I calculate the average wait time = total wait time / total waits
So the average wait time for the event 'log file sync' per wait should be near the wait time for the 'log file parallel write' event.
    I use the query below:
    select snap_id
    , snap_time
    , event
    , time_waited_micro
    , (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
    , total_waits
    , (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
    , trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
    from (
    select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
    lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
    lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
    lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
    lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
    row_number() over (partition by event order by sn.snap_id) r
    from perfstat.stats$system_event se, perfstat.stats$snapshot sn
    where se.SNAP_ID = sn.SNAP_ID
    and se.EVENT = 'log file sync'
order by snap_id, event
)
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc;

  • Process Flow ignores name and location for Control- and Log-Files

    Hi!
    Our OWB Version is 10.1.0.3.0 - DB Version 9.2.0.7.0 - OWF Version 2.6.2
    Clients and server are running on Windows. Database contains target schemas as well as OWB Design and Runtime, plus OWF repositories. The source files to load reside on the same server as the database.
    I have for example a SQL*Loader Mapping MAP_TEXT which loads one flat file "text.dat" into a table stg_text.
The mapping MAP_TEXT is well configured and runs perfectly, i.e. the control file "text.ctl" is generated to location LOC_CTL, the flat file "text.dat" is read from another location LOC_DATA, the bad file "text.bad" is written to LOC_BAD, and the log file "text.log" is placed into LOC_LOG. All locations are registered in the runtime repository.
When I integrate this mapping into a Workflow Process PF_TEXT, only LOC_DATA and LOC_BAD are used. After deploying PF_TEXT, I executed it and found out that the control and log files are placed into the directory <OWB_HOME>\owb\temp and get generic names <Mapping Name>.ctl and <Mapping Name>.log (in this case MAP_TEXT.ctl and MAP_TEXT.log).
How can I influence OWB to execute the Process Flow using the locations configured for the mapping it contains?
Does anyone have any helpful ideas?
    Thx,
    Johann.

    I didn't expect to be the only one to encounter this misbehaviour of OWB.
Meanwhile I found out what the problem is and had to recognize that it is like it is!
There is no solution for it until the Paris release.
Bug no. 3099551 at Oracle MetaLink addresses this issue.
    Regards,
    Johann Lodina.

  • Change the Data and Log file locations in livecache

    Hi
We have installed liveCache on a Unix system under the /sapdb mount directory, where the installer has created the sapdata and sapdblog directories. But the Unix team has already created two mount points as follows:
/sapdb/LC1/lvcdata and /sapdb/LC1/lvclog.
While installing liveCache we had selected these locations for creating the DATA and LOG volumes. Now they are asking to move the DATA and LOG volumes created in the sapdata and saplog directories to these mount points. How do we move the data and log files and keep the database consistent? Is there any procedure to move the files to the mount-point directories and change liveCache's pointers to these locations?
    regards
    bala

    Hi Lars
Thanks for the link. I will try it and let you know.
But this is liveCache (even though it uses MaxDB), a database which was created by
sapinst. Moreover, is there anything to be adjusted in SCM, as well as
any modification to be done at the DB level?
    regards
    bala

  • 2005 database and log file locations

    Is there a SQL query to list where exactly the database and log files reside for all databases on an instance (sql server 2005)?

You can query the information from the DMV sys.master_files (Transact-SQL):
    SELECT *
    FROM [sys].[master_files]
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
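For instance, a query along these lines lists each database's data and log file paths (sys.master_files stores size in 8 KB pages, hence the conversion):

```sql
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                    -- ROWS = data file, LOG = log file
       name AS logical_name,
       physical_name,                -- full path on disk
       size * 8 / 1024 AS size_mb    -- size is stored in 8 KB pages
FROM sys.master_files
ORDER BY database_name, type_desc;
```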

  • Location of Log file

    Hi,
Could anyone help me to find the location of the log file in Crystal Reports Server XI? I need the information on Crystal Reports execution time, parameters used, etc. Anything related to Crystal Reports is fine with me.
    Thanks in advance.
    -Vijay Kanth
    91- 9036431531

    Here are instructions to enable the trace and where it can be found.
    1199303 - How to trace Crystal Report Print Engine errors in BusinessObjects Enterprise XI and Crystal Enterprise    
    Version   1     Validity: 11/27/2007 - active   
    Language   English 
    Symptom
    How do I trace Crystal Report Print Engine errors in BusinessObjects Enterprise XI and Crystal Enterprise?
    Cause
    Various error messages appear and poor behavior occurs when viewing Crystal Reports on demand or scheduling reports within Enterprise XI and Crystal Enterprise. Advanced logging techniques can reveal the causes of these issues.
    Resolution
    To help troubleshoot issues, add -crpetrace 7 and -trace to the command line parameters for the Crystal Reports Job Server or the Crystal Reports Page Server.
    The Crystal Reports Job Server handles scheduled reports, and the Crystal Reports Page Server processes on-demand reports.
    Here are the steps:
    Click Start > Programs > BusinessObjects11 > BusinessObjects Enterprise > Central Configuration Manager.
    Right-click the server requiring the advanced logging. Click Stop.
    Right-click the server. Click Properties.
    Go to the end of the Command field.
    Press the spacebar once. Type "-crpetrace 7" and "-trace".
    Click OK.
Right-click the server. Click Start.
    Advanced logging is now enabled.
    The default logging folder path is: <installation directory>:\Program Files\Business Objects\BusinessObjects Enterprise 11\Logging\.
    The CRPE log files will be named similar to the following:
    pageserver_20071108_193611_5008_crpe_bkgrnd.log
    pageserver_20071108_193611_5008_crpe_Diagnostics.log
    pageserver_20071108_193611_5008_crpe_functions.log
    ====================
    NOTE:
    These parameters provide extensive logging. Every call to the Crystal Report Print Engine will be logged. If a support case is still required, it will be helpful to attach the results of this trace to facilitate the diagnosis and expedite a solution.
    ====================
    Keywords
    crpe crpetrace pageserver jobserver enterprise , 463766
    Header Data
    Released on  11/27/2007 06:55:21 by Commodore Tom Concannon (I817183) 
    Release Status  Released to Customer 
    Component  BOJ-BIP 
    Responsible  Commodore Tom Concannon ( I817183 ) 
    Processor  Commodore Tom Concannon ( I817183 ) 
    Category  How To 
    Product
    Product Product Version
    Crystal Enterprise CRYSTAL ENTERPRISE 10
    CRYSTAL ENTERPRISE 9
    Crystal Reports Server CRYSTAL REPORTS SERVER XI
    CRYSTAL REPORTS SERVER XI R2
    Crystal Reports Server, OEM edition CR SERVER EMBED XI R2
    SAP BusinessObjects Enterprise BOBJ ENTERPRISE XI
    BOBJ ENTERPRISE XI R2
    Other Properties
    Business Objects Article ID  463766
    Business Objects ProductFamilyMajorVersion  BusinessObjects Enterprise XI
    Crystal Enterprise 10
    Crystal Enterprise 9
    Crystal Reports Server XI
    Business Objects ProductName  BusinessObjects Enterprise
    Crystal Enterprise
    Crystal Reports Server
    Business Objects ProductMajorVersion  BusinessObjects Enterprise XI
    Crystal Enterprise 10
    Crystal Enterprise 9
    Business Objects BuildVersion  10.0.0.0
    10.2.0.0
    11.0.0.0
    11.0.0.x
    11.0.1.x
    11.0.2.x
    11.1.0.0
    11.2.0.0
    11.3.0.0
    11.5.0.0
    Business Objects SupportQueue  Architecture
    Business Objects ProductLanguage  English

  • Change location of log file from pkg, due to logrotate problem?

    Hello I am now the maintainer of the bacula packages in aur. http://aur.archlinux.org/packages.php?ID=4510
The package ships with a logrotate file which points to /usr/var/bacula/working/log as the log file. However, logrotate gives:
    error: bacula:14 olddir /var/log/archive and log file /usr/var/bacula/working/log are on different devices
    error: found error in file bacula, skipping
    invalid password file entry
    delete line ''? No
    invalid password file entry
    delete line ''? No
    pwck: no changes
The solution to this is to change the log path to /var/log/bacula. My question is whether I should leave the log path as is and have the user change it, or should I patch both the config file and the syslog file to use /var/log/bacula?
    Thanks,
    ~pyther
    Last edited by pyther (2009-01-27 22:41:06)
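For reference, a minimal logrotate stanza for the /var/log/bacula approach might look like this (a sketch; the rotation frequency and count are assumptions, and the key point is that olddir and the log file then live on the same filesystem, which avoids the "different devices" error):

```
/var/log/bacula/log {
    weekly
    rotate 4
    missingok
    olddir /var/log/archive
}
```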

It was talking about a trace that will be placed under your background dump destination, as you would get using
alter database backup controlfile to trace;
Check your background dump destination, where your alert log is.

  • Steps to move Data and Log file for clustered SQL Server

Hi guys,
We have an active/passive SQL 2008 R2 cluster environment.
I am looking for steps to move the data and log files of the user databases and system databases for a SQL Server clustered instance.
Currently the data and log files reside on the same drive for the user and system databases.
    Thanks
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Try the below link
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/468de435-3432-45c2-a50b-23519cd2686e/moving-the-system-databases-in-a-sql-cluster?forum=sqldisasterrecovery
    -Prashanth
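For the user databases, the general pattern is the one below (a sketch with hypothetical database and logical file names; system databases such as master need the extra startup-parameter steps described in the linked article):

```sql
-- Take the database offline, repoint its files, move them, bring it back.
ALTER DATABASE MyUserDB SET OFFLINE;
ALTER DATABASE MyUserDB MODIFY FILE
    (NAME = MyUserDB_Data, FILENAME = 'E:\SQLData\MyUserDB.mdf');
ALTER DATABASE MyUserDB MODIFY FILE
    (NAME = MyUserDB_Log, FILENAME = 'F:\SQLLogs\MyUserDB_log.ldf');
-- Copy the .mdf/.ldf files to the new drives at this point.
ALTER DATABASE MyUserDB SET ONLINE;
```

In a cluster, the target drives must already be cluster disk resources in the SQL Server resource group, or the instance will not be able to see them after failover.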

  • System Center 2012 R2 install: SQL server Data file and log file

This might be a dumb question, but I can't find the answer anywhere.
I'm installing a new instance of System Center 2012 R2 on a new server, and I'm stuck on the SQL Server data file section. Every time I put in a path, it says that the path does not exist. Am I supposed to be creating some sort of SQL Server
data file and log file before this installation? I didn't get this prompt when installing System Center 2012 SP1 or when I upgraded from System Center 2012 SP1 to System Center 2012 R2.
My SQL is on a different server.
Thank you in advance.

    Have you reviewed the setup.log?
On a side note, why would you put the database file on the same drive as the OS? That defeats the whole purpose of having a remote SQL Server. Why use a remote SQL Server in the first place?
    Jason | http://blog.configmgrftw.com

  • Sql server data file and log file

Hello experts,
What is the best way to place data files and log files in a two-node cluster environment? I have an active/passive cluster with Windows Server 2008 R2 Enterprise and SQL Server 2008 R2. I am new to the environment and I noticed that all system and user databases,
including their data and log files, are stored on one drive. Just curious, what is the best practice in this kind of scenario? Thank you as always for your help.

Make sure you have a valid/tested backup strategy for both system and user databases.
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • How to design SQL server data file and log file growth

How do I design SQL DB data file and log file growth (SQL Server 2012)?
If my data file has a size of 10 GB and my log file has a size of 5 GB,
what should the autogrowth size be in MB (not in %)? Based on what should we determine the ideal file autogrowth size?

It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an ongoing basis to see if that's
the appropriate amount.
    One thing you should do is enable instant file initialization by granting the service account Perform Volume Maintenance tasks in group policy. This will allow the data files to grow quickly when required, details here:
    https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
SELECT [DatabaseName],
       [FileName],
       [SPID],
       [Duration],
       [StartTime],
       [EndTime],
       CASE [EventClass]
            WHEN 92 THEN 'Data'
            WHEN 93 THEN 'Log'
       END AS [FileType]
FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
WHERE EventClass IN (92, 93)
    hope that helps
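Following the sizes suggested above, the fixed-size autogrowth settings could be applied like this (a sketch with hypothetical database and logical file names):

```sql
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 1024MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log, FILEGROWTH = 512MB);
```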

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format data and log files with 64k cluster size for sql server 2012?
Does this best practice still apply to SQL Server 2012 & 2014?

    Yes.  The extent size of SQL Server data files, and the max log block size have not changed with the new versions, so the guidance should remain the same.
    Microsoft SQL Server Storage Engine PM
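On Windows you can verify an existing volume's allocation unit size, or format a dedicated data/log volume with a 64 KB unit, roughly like this (a sketch; D: is a hypothetical drive letter, and format is destructive, so only run it on a new, empty volume):

```
rem Check "Bytes Per Cluster" on an existing volume:
fsutil fsinfo ntfsinfo D:

rem Format a new, empty data/log volume with a 64 KB allocation unit:
format D: /FS:NTFS /A:64K /Q
```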

Maybe you are looking for

  • Control file restore without backup

    I installed oracle 10g Backed it up with rman ... backup database Now I lost the control file With out auto backup of control file ..how can i restore my control file and get it in sync with my database file

  • Business area and profit centre field to make mandatory in all transactions

    Dear Team, My client wants to make the above mentioned fields mandatory while entering any business transactions. viz. MM,FI / SD transactions. He wants to capture the details Business area wise as well as Profit Centre wise. Please let me know wheth

  • ARQ: User details fields mappings problem in Access Request

    Dear All, My "User Search Data Sources" are: HR system and LDAP (in this order) and "User Details Data Sources" are: HR system, LDAP, GRC Production system and ERP Development system (in this order) I could search for the users in HR and LDAP systems

  • Heads Up

    This is for anyone that uses Internet Sharing via System Preferences. If you have this enabled then you will not be able to use Audio, Video or Screen Sharing via iChat! Simply turn Internet Sharing off when you want to use these features! pv

  • 2 way sync - but calendar only one way?

    Hi! I've just performed the first 2 way sync between my Outlook 2007 and my Z10. Surprisingly my contacts have not doubled up - which is good. But regarding the calendar the sync worked from my Outlook to my Z10 but the items on my Z10 were not added