Basic question on log and temp files

Hi,
I need to create a new database in this way:
- 2 GB for the data file
- 1 GB for log and temp files
This is the command I'm issuing to create the db:
create tablespace xyz datafile '<path>\xyz.dbf' size 2000M reuse;
How can I specify the size of the log and temp files?
Thank you
Nicola
PS: I'm on a Windows platform.

user575754 wrote:
"- 1 GB for log and temp files"
What's the link between the log file size and the tempfile?
"This is the command I'm issuing to create the db:
create tablespace xyz datafile '<path>\xyz.dbf' size 2000M reuse;"
This command does not actually create a database, but a tablespace. If you ran it, that means your db already exists.
So, not sure I follow you on this thread.
Nicolas.
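
For what it's worth, if the goal is to size the online redo logs and the temp space of the (already existing) database, statements along these lines would do it. This is only a sketch: the group numbers, file names, and sizes are illustrative, and '<path>' stands in for a real directory as in the original post.

    -- Redo log size is set per log group, not per database;
    -- two 500M groups give 1 GB of redo here
    alter database add logfile group 4 ('<path>\redo04.log') size 500M;
    alter database add logfile group 5 ('<path>\redo05.log') size 500M;
    -- Temp space is sized on the tempfile of a temporary tablespace
    create temporary tablespace temp2 tempfile '<path>\temp2.dbf' size 1000M;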

Similar Messages

  • Delete log and temp files

    Hi, I want to delete the files in the appltmp and logs folders, because together they are using 4 GB on my HDD.
    Is it possible to delete the files in those folders?
    Thanks

    My comment wasn't directed at you, 909592, but at the OP :)
    This question comes up quite a bit here; I just thought it was funny that the question was whether it was "possible", not whether it "should" be done :)
    Here is a thread that covers it pretty well:
    Can i delete files under $APPLTMP folder in EBS R12.0.4?
    Cheers.

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance, with dual-write overheads that are not necessary?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
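
    For reference, the multiplexing described above boils down to statements like these (a sketch only; the group number, size, and control file paths are illustrative):

        -- One redo member in each diskgroup per log group
        alter database add logfile group 4 ('+DATA', '+ONLINE') size 512M;
        -- After copying the control file into +ONLINE (e.g. with RMAN),
        -- reference both copies; the parameter takes effect on the next restart
        alter system set control_files = '+DATA/orcl/control01.ctl',
          '+ONLINE/orcl/control02.ctl' scope=spfile;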

  • APIs for accessing the ESS log and output files from UCM

    Hi,
    As I understand, the output and log files of an ESS job will be uploaded to UCM.
    We have a requirement wherein we are building a simple error-handling framework, which gets triggered after a series of ESS jobs are run. In the error-handling framework, based on some processing logic, we need to e-mail the error or log files to the users. I am not able to find any information related to retrieving the log and output files for an ESS job from UCM. Any inputs on this will be appreciated.
    Thanks for your help,
    Thiru

    For accessing content in UCM you can use the RIDC APIs or UCM Web Services. Note that UCM also provides features for workflow etc. that could potentially be used for notifications; refer to the developer guide for details. For UCM-related questions you may want to use the UCM forum instead.
    Jani Rautiainen
    Fusion Applications Developer Relations
    https://blogs.oracle.com/fadevrel/

  • Time machine backups cache and temp files?

    I just bought a new external USB disk and am using it as the Time Machine backup volume. All Time Machine configurations remain at their defaults, so it will back up my entire current HD. While watching it estimate the time the first backup would take, a question jumped to mind:
    Will Time Machine back up all files "honestly"? Will it make a copy of every file, including the Internet cache and temp files created while I'm browsing websites? I watch YouTube a lot, and the temp .flv files can be large (though I don't know where they are stored on Leopard). If these files get backed up as well, I think it would be a great waste of disk space.
    Can anyone confirm whether those caches and temp files will be copied or not? If they will, how can I prevent it? Thank you.

    If you wish to exclude anything from the backup, simply drag the item into the Exclude list. For example, you can save considerable space by excluding the /System/ folder. Cache files would be found in the /Library/ and /Home/Library/ folders. Each folder contains a folder named Caches. There is no specific folder visible to the user in which temporary items are stored, because the operating system will usually delete temporary files on its own. This may not be the case for a specific program, however. Browsers may maintain their own reserved cache and temporary files, and manage them on their own.
    I do not think, however, that TM backs up caches or temporary files. I find no such items in my TM backup - only the folders. I do not back up my /System/ folder.

  • Basic question about storage and safety in iMovie '11

    A very basic question about storage and safety:
    I want to keep a backup of my raw footage on my external hard drive before I begin working on the movie in iMovie. I want to do this:
    1. Upload the digital files from my camcorder to my Desktop *(as opposed to iMovie)*
    2. Duplicate that footage/clips.
    3. Put the duplicate clips/footage into a folder labeled with the name of that footage (like "Fun At the Dentist" or "Jimmy Learns How to Yodel").
    4. Drag the folder with the footage/clips into my external hard drive into a pre-existing folder titled "Backups/duplicates of all of my raw footage/clips." This big granddaddy folder will house all of the child folders of different movies.
    5. Then, open up iMovie '11 and import the raw footage/clips from my Desktop rather than from my camcorder.
    6. Then I want to make a duplicate of my finished movie and put it in my external hard drive in a "Finished Movies" folder.
    I know that the original raw unedited footage will always be in iMovie '11, but I want the original to also be immediately accessible on my external hard drive.
    QUESTION:
    Is this viable? Is it wise? (I know it adds an extra, unnecessary step, but aside from that.)
    *Do you have any precautionary advice?* Should I do something in my iMovie '11 preferences? What?
    In earlier years, with iMovie '04 or '06 (I cannot recall which), I made many novice errors and ended up losing the audio of my finished movie. I also lost footage.
    This time around, with iMovie '11, I don't want to make such novice, ignorant errors.
    Thanks so much for any comments to this question.
    -L

    Yes I'm sure it will work great for you.
    The iFrame format is something Apple has come up with. The reason for its existence is unknown to me, so I can only speculate. But it seems to me that Apple "invented" this format in order to have devices such as the iPod/iPad/iPhone create clips that are editable on consumer hardware, meaning the already mentioned devices but also standard Mac computers, without the need for format conversion.
    iMovie converts most input formats during import, which takes a lot of time, and this need for conversion often comes as a surprise to people new to home video editing.
    iFrame has a resolution of 960x540, which is a long way from the common standards of 1920x1080 and 1280x720. If your end target is YouTube, however, this may not be too bad. But if you intend to go with YouTube HD, you may find iFrame footage looks wrong, since it is effectively upscaled to a higher resolution.
    Technically, iFrame uses the H.264 algorithm, a smaller frame size (960x540), and a rather low compression scheme. This results in large files, but the plus side is that the files are ready for editing without any conversion; iMovie edits them natively.

  • How to get Log and Output File Names for a concurrent request

    Hi,
    I am submitting a concurrent request from OAF with the following code in the AM:
    try {
        OADBTransaction tx = getOADBTransaction();
        Connection conn = tx.getJdbcConnection();
        ConcurrentRequest cr = new ConcurrentRequest(conn);
        Vector parameters = new Vector();
        parameters.addElement("10");
        nRequestID = cr.submitRequest("CIE", "DTFEMP", "", "", false, parameters);
        tx.commit();
    } catch (RequestSubmissionException e) {
        // handle/log the failed submission here
    }
    How do I get a handle to the log and output files for the above concurrent request?
    One more thing: is there a way to evaluate environment variables, as in the expressions below, once I have the request ID?
    logfile = $APPLCSF/$APPLOUT/"l"+requestID+".log"
    and
    outputfile = $APPLCSF/$APPLOUT/"o"+requestID+".out"
    Is there a way I can get the values of $APPLCSF and $APPLOUT from the OS?
    Thanks
    Tom

    You can query the Fnd_Concurrent_Requests table using Request_ID, which has the log & out file directory details.
    Hth
    Srini
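
    If it helps, the lookup Srini describes is a simple query (here :request_id is a bind variable). LOGFILE_NAME and OUTFILE_NAME hold the full paths, which also sidesteps the need to resolve $APPLCSF/$APPLOUT yourself:

        select logfile_name, outfile_name
          from fnd_concurrent_requests
         where request_id = :request_id;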

  • Timestamp in Java log and trace files.

    Hi SAP'ies
    Running PI 7.11 on AIX 6.3, we face an issue with the content of log and trace files from Java.
    E.g. the file DefaultTrace_00.0.trc is timestamped 18-05-10 11:16:15 (the same as the OS/AIX time).
    Looking inside the file, the last statement is timestamped 2010 05 18 09:16:15.
    How can we ensure that the content of these Java files is timestamped with the OS time?
    Looking at ABAP files like dev_w0, the timestamp of the file and of its content are equal.
    Best regards,
    Teddy Løv Andersen

    Hello all,
    The best way to convert the default trace time is to visit this site:
    http://www.csgnetwork.com/epochtime.html
    Enter the value there. For example, if you have the following in the default trace:
    #1.#00265510DE7300120000000F000022030004C77AD7309DF5#1345230317066#com.sap.portal.fpn.rdl
    1345230317066 is the timestamp; enter it on the above site to get the time:
    Fri Aug 17 2012 21:05:17 GMT+0200
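    If you would rather not depend on a website, the same epoch-milliseconds conversion can be done in plain SQL (a sketch, assuming an Oracle database is at hand; the literal is the timestamp from the trace line above):

        select timestamp '1970-01-01 00:00:00 +00:00'
               + numtodsinterval(1345230317066 / 1000, 'SECOND') as trace_time
          from dual;
        -- 2012-08-17 19:05:17 UTC, i.e. Fri Aug 17 2012 21:05:17 GMT+0200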
    Regards

  • Block size in tt for writing data to transaction log and checkpoint files

    Hello,
    What block size does TimesTen use when it writes data to the transaction log and checkpoint files? Does it use some fixed block size for filesystem writes?

    Although in theory logging can write 2 KB blocks, in almost all circumstances it will write 4 KB or larger, so yes, a filesystem with a 4 KB block size is fine for both checkpointing and logging.
    Chris

  • Useful logs and trace files

    Hello experts, for our NetWeaver AS administration, I am in charge of periodically checking logs and trace files. I would like to know which are the most useful log and trace files and what information each one holds. I am familiar with "DefaultTrace.trc", and as of today it is the only one I have used, but I believe I should also be looking at other logs and trace files.
    Any suggestions?

    Hi Pedro,
    If you are talking about a Java-only system, the default trace is the best log/trace to look at. There are other log files, like the application log, but maybe the best way to check your logs is using NWA (NetWeaver Administrator) at the following URL on your Java system:
    http://<hostname>:<port>/nwa
    From there you need to go to Monitoring -> Logs and Traces and then Predefined View/SAP logs.
    My other recommendation is to change the severity level to ERROR for all your Java components within the Visual Administrator -> Server Node -> Services -> Log Configurator -> Locations; otherwise it is possible that you see a lot of garbage in the default traces. You can still change the severity level per component, on demand, to investigate any possible problem.
    The work directory is very important, and you can also check the file "dev_serverX", which will give you information about any out-of-memory conditions and garbage collection activity, provided you have these values set for the server node using the Config Tool:
    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    You can find more information on here:
    http://help.sap.com/saphelp_nw70/helpdata/en/ac/e9d8a51c732e42bd0e7de54b9ff4e2/content.htm
    Hopefully this helps you; let me know if you need more information,
    Zareh

  • Win 2008 / WL 10.3.3: stdout appearing in .log and .out files

    Recently I noticed a ballooning [ServerName].out file in the logs directory. In the WebLogic management console I do have it configured to redirect stdout and stderr to WebLogic logging (the .log file). Both the .log and .out files contain the same stdout/stderr information. I would like to eliminate the .out file if possible (since WL only rotates the .log), but cannot find where it is configured. The managed servers are NOT Windows services (no -log option).
    I did not find any logging parameters in JAVA_OPTIONS or parameters in the startManagedSvc.cmd file.
    Is this something needing to be corrected at the application level (log4j)?

    opie wrote:
    Recently I noticed a ballooning [ServerName].out file in the logs directory. In the WebLogic management console I do have it configured to redirect stdout and stderr to WebLogic logging (the .log file). Both the .log and .out files contain the same stdout/stderr information. I would like to eliminate the .out file if possible (since WL only rotates the .log), but cannot find where it is configured. The managed servers are NOT Windows services (no -log option).
    I did not find any logging parameters in JAVA_OPTIONS or parameters in the startManagedSvc.cmd file.
    Is this something needing to be corrected at the application level (log4j)?
    Depends on what you are actually seeing in those files. Are you outputting log4j messages to a log file AND the console?
    Here is a snippet of a log4j configuration file that denotes writing to the console:
        <appender name="ConsoleAppender" class="org.apache.log4j.ConsoleAppender">
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{yyyy-MM-dd hh:mm:ss} %-5p [%t] - %C{1}.%M -> %m%n" />
            </layout>
        </appender>
        <root>
            <level value="ALL" />
            <appender-ref ref="ConsoleAppender" />
        </root>
    If an appender like this is present in your configuration, removing the ConsoleAppender reference from the root logger should stop log4j from writing to stdout, and with it the duplicate content in the .out file.

  • Location of Redo log and control files?

    Dear all,
    I am checking the location of the redo log and control files, but found that the redo log files (like log02a.dbf ...) are in the same directory as the data files. However, I couldn't find any control files in the data file directories.
    Where could the control files be located?
    Amy

    select name
      from v$controlfile;
    or
    show parameter control_files
    Khurram

  • External portal capturing internal portal URL in Log and trace file

    Hi,
    We are facing an issue in portal: we have two portals, one for internal (intranet) and one for external (internet) users.
    Once users have logged in to the application and try to get information via mylink from the external portal link (internet), they should not get any information about the internal portal.
    But in the log and trace file we can see the external portal link capturing the internal portal URL.
    We need to find out from where the system is capturing the internal portal URL.
    Thanks.

    The tkproffed trace file is in seconds.
    "set timing" is in hh:mi:ss.uu format. So 00:00:01.01 is 1.01 seconds.
    You have to remember that most of these measurements are rounded. While your trace file says it contains one second of trace data, you know it's more.
    One excellent resource for trace files is "Optimizing Oracle Performance" by Cary Millsap & Jeff Holt. (http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X ) I thought I knew trace files before, but this book brings your knowledge to a whole new level.
    There is also an excellent WP by Cary Millsap ( http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap ) that gives you some insight.

  • How to configure logs and trace files

    Hello people,
    We have just implemented ESS-MSS; we have around 25,000 people using this service, and every 2 days the log and trace files on my server fill up and the portal goes down.
    Please suggest how to solve this problem. How can I reduce the trace and log files? Is there any configuration or setting for this? Please suggest and explain how it can be done.
    Biren

    Hi,
    You can control which messages get logged depending on the severity.
    This can be configured using the Log Configurator; check this guide on how you can set severity for different locations:
    Netweaver Portal Log Configuration & Viewing (Part 1)
    Regards,
    Praveen Gudapati

  • How to overwrite the log and bad files of an external table in Oracle 10g

    Hi,
    I am using an external table in Oracle 10g. Whenever I run a select query on the external table, Oracle internally creates a log file in the specified directory, but this log file keeps growing. How can I overwrite the log file (replacing the old with the new)? I need to overwrite the log and bad files of an external table.
    kindly give the solutions.
    By
    Siva

    I don't believe that is possible with the LOGFILE clause, but it may be with the BADFILE clause. Here is an excerpt from the documentation :
    The LOGFILE clause names the file that contains messages generated by the external tables utility while it was accessing data in the datafile. If a log file already exists by the same name, the access driver reopens that log file and appends new log information to the end. This is different from bad files and discard files, which overwrite any existing file. NOLOGFILE is used to prevent creation of a log file.
    If you specify LOGFILE, you must specify a filename or you will receive an error.
    If neither LOGFILE nor NOLOGFILE is specified, the default is to create a log file. The name of the file will be the table name followed by _%p and it will have an extension of .log.
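
    Given that behavior, one way to keep the directory clean is to suppress the log file entirely and let the bad file overwrite itself. A sketch (the directory object, table, and file names here are made up):

        -- NOLOGFILE: no log file is created at all
        -- BADFILE: bad files overwrite any existing file of the same name
        create table emp_ext (
          empno number,
          ename varchar2(30)
        )
        organization external (
          type oracle_loader
          default directory ext_dir
          access parameters (
            records delimited by newline
            nologfile
            badfile ext_dir:'emp_ext.bad'
            fields terminated by ','
          )
          location ('emp.csv')
        );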
