Where to keep the log file and the mdf/ndf

Hi,
While going through Kalen Delaney's book "Inside SQL Server" I came across the statement "A general recommendation is to put log files onto their own physical disk." Here I am a bit confused: do we have to keep the log file on a separate drive of the same hard disk, or on a separate hard disk altogether? My DB server has only one hard disk, with three logical partitions, so if I keep my log file and MDF on different drives, will I get any performance benefit?
Thanks in advance for your reply.
Regards, Vikas Pathak

@sateesh: but as per the link "http://msdn.microsoft.com/en-us/library/bb402876.aspx"
This rule checks whether data and log files are placed on separate logical drives. Placing both data and log files on the same device can cause contention for that device and result in poor performance. Placing the files on separate drives allows the I/O activity
to occur at the same time for both the data and log files.
This is probably because it is the best they can do. Checking that the logical drives are actually two different physical disks is a little out of scope for what you can do from within SQL Server.
It is also worth pointing out that the rule specifically refers to spinning disks; SSDs do not have this problem.
Erland Sommarskog, SQL Server MVP, [email protected]
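
For reference, one way to see which logical drive each data and log file actually lives on is to query sys.master_files. The sketch below is a minimal JDBC example; the connection string is a placeholder, the Microsoft JDBC driver is assumed, and the same SELECT can just as well be run directly in Management Studio.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Lists every data (ROWS) and log (LOG) file on the instance with its physical path,
    // so you can see at a glance whether data and log files share a drive.
    public class FilePlacementCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string -- adjust server, credentials and encryption settings.
            String url = "jdbc:sqlserver://localhost:1433;databaseName=master;user=sa;password=...";
            String sql = "SELECT DB_NAME(database_id) AS db, type_desc, physical_name"
                       + " FROM sys.master_files ORDER BY db, type_desc";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%-20s %-6s %s%n",
                            rs.getString("db"), rs.getString("type_desc"), rs.getString("physical_name"));
                }
            }
        }
    }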

Similar Messages

  • Am using Safari 6.0.5, suddenly my Yahoo mail account won't keep me logged in and the 'keep me logged in' button has disappeared. It is still there on Chrome so am assuming it is a Safari issue. It just happened suddenly. Have reset Safari.

    Am using Safari 6.0.5, suddenly my Yahoo mail account won't keep me logged in and the 'keep me logged in' button has disappeared. It is still there on Chrome so am assuming it is a Safari issue. It just happened suddenly. Have reset Safari, cache and also changed my Yahoo password but problem is still there.

    I still don't understand what problem you are experiencing. If you get to a page where Yahoo is asking JUST for your password, and there is no box for username (it may have your username displayed, but not in an entry box), then you ARE "being kept signed in". You can verify this by, instead of entering your password, going to the www.yahoo.com home page, where it will say something like "Hi, Amber". This means you ARE signed in.
    When you get that page asking for just your password, you are NOT being asked to sign in again. You are simply being asked to verify/enter your password an Nth time. If you get that page right after entering your username and password, try clicking the "back" (previous page) button. It might just go to your inbox.
    If this is the case, your problem has nothing to do with being kept signed in or not; the problem is that for some (unknown to me) reason, Yahoo is asking you to re-enter your password, kind of like when your Mac asks for your Mac password when changing system settings. Yahoo does this as a safety feature when you go to account settings, and at apparently arbitrary other times. I wish I understood just why I get it every time after signing in.
    It could be because I have Java turned off. It could be because I don't accept third-party cookies. Maybe a cache file. Or a hundred other things... I don't know. It could simply be that Yahoo is screwing Safari users. I have glanced at the HTML, but to really understand it would take me days. There are over 25 places where username and password appear in the HTML "source" code of the Yahoo login page, and even understanding that may not tell me. In the past, I have seen code essentially saying "if IE x.3.2" do this, if Chrome do that, if Mozilla do something else, otherwise make it next to impossible...."
    My problem is not only that I get asked for my password a second time, but also (and only very recently) that I do not get a prompt to save my username and password. I have gone to the Yahoo help pages to try to figure it out, and I feel like I just run in circles. If one wants to send a question in, they have to answer half a dozen questions disclosing private information. And I pay Yahoo $20/year. Grrr.
    If, when you are trying to access your Yahoo mail, you are being asked to enter both your username and password, but you didn't sign out of Yahoo, then indeed you are not being "kept signed in".

  • I converted aif files to mp3 files in iTunes so I could put them on a flash drive. Now when I transfer an album to the flash drive I get both the aif files and the mp3 files in the flash drive. How do I get iTunes to copy only the smaller file?

    I converted aif files to mp3 files in iTunes so I could put them on a flash drive. Now when I transfer an album to the flash drive I get both the aif files and the mp3 files on the flash drive. How do I get iTunes to copy only the smaller file?

    My suggestion to use the smart playlist was more along the lines of making one list with rules to show everything in your library as long as it was kind "mpeg" (= mp3).
    Smart playlists cannot be edited manually, only by changing the rules.  You can use them as the basis for another smart playlist though.
    There is added overhead to using smart playlists, so don't go overboard.

  • Hi, since I put some mp4 video or DVD files on my external hard drives, Final Cut Pro X won't recognize my hard drives. I moved all the video files and the problem still exists. Does somebody have a solution?

    Did the back and forth between PC and Mac file systems screw up my files somehow?
    Yes. Unfortunately the move has almost certainly caused the loss of the resource data that would have been (invisibly) attached to those files.
    By adding a specific .mov extension after the fact you will have helped the Mac correctly identify the file type / associations, but, significantly, you will also have physically changed the filename, e.g. from "clip" to "clip.mov". This means that when you come to reconnect the media in FCP, FCP will be looking for files that no longer exist at the given path; when it specifically looks for our example clip called "clip", of course it won't find it.
    Try this as a test... first remove the ".mov" extension from one or more of the clips, select those clips and press Cmd-Opt-I to open an Info window for the selection. You'll see an "Open with:" popup, and in that popup you should choose QuickTime Player (if it's a Unix Executable it will initially list Terminal as the appropriate app; you have to choose Other..., then in the open dialog choose Enable > All Applications, and then navigate to and select the QuickTime Player app as the appropriate app). After you've done that, restart FCP and see if you can now reconnect to those clips.

  • I am not able to open *.log files from the Firefox browser, and I need to open the file inside the frame

    If I open the log file from the Firefox browser it does not open, and throws this error:
    "The address wasn't understood
    Firefox doesn't know how to open this address, because one of the following protocols (e) isn't associated with any program or is not allowed in this context."

    What type of log file?
    What is the file name?
    What is the file path or URI?

  • I keep getting old settings when I try to install Firefox again. I have deleted the program files and the profile, but the program is finding the old bookmarks and settings somewhere. How do I delete this information to make a clean installation?

    I have a corrupted bookmark file and want to totally delete any information on the bookmarks and settings. I have deleted any files which reference Mozilla and Firefox from my hard drive and have gone into the Registry and deleted all references to Firefox. But every time I reinstall Firefox the program finds this information. Is it in the sync folder and how do I clear out that folder?

    http://support.mozilla.com/en-US/kb/Backing+up+your+information

  • I've picked up a trojan. Neither Norton nor SuperAntiSpyware can find the source. When I delete the "ad" files and the index.dat file in user/appdata/roaming/microsoft/windows/cookies and cookies/low, they come back the next time I start Firefox

    The "ad" file names look like they're randomly generated. They look like n6sxx8t5.dat and 038prmz7.txt. I get one for every pop-up ad that is generated. That pop-up ad is generally related to the type of web page i'm on when it triggers. it only occurs once per session. what i don't know is if it is also trying to capture any data i'm entering.
    like i said, i've deleted the text and index.dat files, but they keep coming back.

    Do a malware check with some malware scanning programs.
    You need to scan with all programs because each program detects different malware.
    Make sure that you update each program to get the latest version of their databases before doing a scan.
    * http://www.malwarebytes.org/mbam.php - Malwarebytes' Anti-Malware
    * http://www.superantispyware.com/ - SuperAntispyware
    * http://www.microsoft.com/windows/products/winfamily/defender/default.mspx - Windows Defender: Home Page
    * http://www.safer-networking.org/en/index.html - Spybot Search & Destroy
    * http://www.lavasoft.com/products/ad_aware_free.php - Ad-Aware Free
    See also:
    * "Spyware on Windows": http://kb.mozillazine.org/Popups_not_blocked
    * https://support.mozilla.com/kb/Searches+are+redirected+to+another+site
    If using the above listed scanners does not fix it, or if you are blocked from installing those scanners, then ask for advice on one of the forums that specialize in malware removal mentioned in the Popups_not_blocked article.

  • How to see the log file on the Reports Application server?

    Hi,
    I am new to Reports, so please correct me if I am wrong or in the wrong forum. We have a Java-based front-end application which calls Oracle Reports to display requested reports. We are using Oracle Reports (9i/10g). I need to work on a report. It works fine if I provide parameters on the report itself, but if I run it from the application it does not run properly. I want to see the parameters the application is sending to this report; is there any way that I can see some log file or something similar on the report server?
    thanks,

    Thank you.
    When you send the process to the report server you
    can trace the report with the following parameters:
    RWCLIENT.EXE SERVER=my_server
    TRACEFILE=c:\my_trace.trc
    TRACEMODE=TRACE_REPLACE
    TRACEOPTS=TRACE_ALL

  • XMLForm Service message keeps showing in the log file

    Hello,
    The message below keeps showing in the log file when the form is opened in Workspace ES. It repeats something like a hundred times in the log file when the form is launching.
    It just happens to some forms but not to all forms. I tried to see if there is anything strange in the form, but could not find anything wrong in it, nor any explanation of this error on the Adobe site or the Internet. Note that I am still using LiveCycle ES 8.2.1 with SP3.
    =============
    [10/18/11 13:46:22:644 CDT] 0000003e XMLFormServic W com.adobe.service.ProcessResource$ManagerImpl log ALC-XTG-102-001: [1712] Bad value: 'designer__defaultHyphenation.para.hyphenation' of the 'use' attribute of 'hyphenation' element ''. Default will be used instead.
    ============
    Can anyone please advise.
    Thanks in advance,
    Han

    Yes,
    Upgrade it to SP1 with the hot-fix. SUN has a very big bug in the 4.16 SP1 software. You have to apply that hot-fix, otherwise you will hit a big bug in the userpassword attribute and some other security issues.

  • SQL Server 2012 Reorg Index Job Blew up the Log File

    We have a maintenance plan that nightly (1) runs DBCC CHECKDB on all databases, (2) reorgs indexes on all databases, compacting large objects, (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually the plan uses
    a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up: because the maximum file size was set to 14,000 MB, one of the ALTER INDEX commands failed
    when it ran out of log space. (The DBCC CHECKDB step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has
    a 21,000 MB data file, with reserved space at about 10 GB. This is SQL Server 2012 Standard SP2 running on Windows Server 2012 Standard.

    I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
    Do you grow them afterwards, or do you let the application waste time on that during peak hours?
    I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
    I would say this is highly dubious.
    Erland Sommarskog, SQL Server MVP, [email protected]
    Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow
    interval so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
    Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Having pre-allocated space for the log files means there will not be much free space left on the disk in case ANY database needs more
    space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
    What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
    These are the main reasons I don't recommend that people blindly condemn keeping log files small when possible. I know there are many people who disagree and I'm aware of their reasons. Maybe we just had different experiences with this subject. Maybe people
    just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
    And you are right about the size of the backup, I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit from backing up a database with a small
    log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
    Restoring the database will also be faster.
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
    is an undocumented behavior and should not be relied upon.
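    Whichever side of the shrink debate you take, it helps to measure how much log space the maintenance window actually consumes. The sketch below is a minimal JDBC example (placeholder connection string, Microsoft JDBC driver assumed) that runs DBCC SQLPERF(LOGSPACE) and prints the log size and percent used for every database; running it before and after the index maintenance gives a picture of real log consumption.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Prints current log size and percent used for every database on the instance.
        public class LogSpaceReport {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string -- adjust server, credentials and encryption settings.
                String url = "jdbc:sqlserver://localhost:1433;databaseName=master;user=sa;password=...";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    // DBCC SQLPERF(LOGSPACE) returns one row per database:
                    // Database Name | Log Size (MB) | Log Space Used (%) | Status
                    st.execute("DBCC SQLPERF(LOGSPACE)");
                    try (ResultSet rs = st.getResultSet()) {
                        while (rs.next()) {
                            System.out.printf("%-30s %10.1f MB  %5.1f %% used%n",
                                    rs.getString(1), rs.getDouble(2), rs.getDouble(3));
                        }
                    }
                }
            }
        }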

  • Unable to debug the Data Template Error in the Log file

    Hi,
    I am unable to debug the log file error message. Can anybody please explain to me in detail where the error lies and how to solve it? The log file shows the following message.
    XDO Data Engine ver 1.0
    Resp: 50554
    Org ID : 204
    Request ID: 2865643
    All Parameters: USER_ID=1318:REPORT_TYPE=Report Only:P_SET_OF_BOOKS_ID=1:TRNS_STATUS=Posted:P_APPROVED=Not Approved:PERIOD=Sep-05
    Data Template Code: ILDVAPDN
    Data Template Application Short Name: CLE
    Debug Flag: Y
    {TRNS_STATUS=Posted, REPORT_TYPE=Report Only, PERIOD=Sep-05, USER_ID=1318, P_SET_OF_BOOKS_ID=1, P_APPROVED=Not Approved}
    Calling XDO Data Engine...
    java.lang.NullPointerException
         at oracle.apps.xdo.dataengine.DataTemplateParser.getObjectVlaue(DataTemplateParser.java:1424)
         at oracle.apps.xdo.dataengine.DataTemplateParser.replaceSubstituteVariables(DataTemplateParser.java:1226)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:398)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:281)
         at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:251)
         at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:192)
         at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:222)
         at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:334)
         at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:236)
         at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:272)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:148)
    Start of log messages from FND_FILE
    Start of After parameter Report Trigger Execution..
    Gl Set of Books.....P
    Organization NameVision Operations
    Entering TRNS STATUS POSTED****** 648Posted
    end of the trns status..687 Posted
    currency_code 20USD
    P_PRECISION 272
    precision 332
    GL NAME 40Vision Operations (USA)
    Executing the procedure get format ..
    ExecutED the procedure get format and the Result..
    End of Before Report Execution..
    End of log messages from FND_FILE
    Executing request completion options...
    ------------- 1) PUBLISH -------------
    Beginning post-processing of request 2865643 on node AP615CMR at 28-SEP-2006 07:58:26.
    Post-processing of request 2865643 failed at 28-SEP-2006 07:58:38 with the error message:
    One or more post-processing actions failed. Consult the OPP service log for details.
    Finished executing request completion options.
    Concurrent request completed
    Current system time is 28-SEP-2006 07:58:38
    Thanks & Regards
    Suresh Singh

    Generally the DBAs are aware of the OPP service log. They can tell you the cause of the problem.
    Anyway, how did you resolve the issue?

  • Log file in the deploy service

    Hi ,
    I am trying to deploy an EAR using the deploy tool. I am getting errors while deploying, and I want to see the complete log for the errors.
    The deployer_log file displays some errors and ....20 more 
    " For detailed information see the log file of the Deploy Service"
    I want to see the detailed information for the log.
    Where can I find this log?
    Thanks in advance
    Raju

    Hi Raju,
    You can find the deploy service logs together with all other J2EE Engine logs in the defaultTrace.trc files in /usr/sap/<SID>/<INSTANCE>/j2ee/cluster/server<N>/log.
    Best regards,
    Vladimir

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2 MB as a CSV file, and with a fresh database it creates almost 20 MB of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run, the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally the processing I am doing on the primary is looking up the key, if it is there updating the entry, if not creating one. I have been using a cursor to do this, and using the putCurrent() method for existing updates, and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
    Please help - it is driving me nuts!
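    For reference, the cursor-based upsert described above looks roughly like the sketch below (the names are hypothetical; the actual application code is not shown in this thread). Each putCurrent() appends a new version of the record to the end of the log and makes the old version obsolete, which is why the reported utilization should fall on repeat runs.

        import com.sleepycat.bind.tuple.StringBinding;
        import com.sleepycat.je.Cursor;
        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.DatabaseException;
        import com.sleepycat.je.LockMode;
        import com.sleepycat.je.OperationStatus;

        // Update-or-insert with a cursor: replace the data in place when the key exists,
        // otherwise store a new record.
        public class UpsertExample {
            static void upsert(Database db, Cursor cursor, String keyString, byte[] newValue)
                    throws DatabaseException {
                DatabaseEntry key = new DatabaseEntry();
                DatabaseEntry existing = new DatabaseEntry();
                StringBinding.stringToEntry(keyString, key);
                DatabaseEntry value = new DatabaseEntry(newValue);
                if (cursor.getSearchKey(key, existing, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    cursor.putCurrent(value);   // existing record: overwrite data at the cursor position
                } else {
                    db.put(null, key, value);   // new record; null txn here, pass your txn if transactional
                }
            }
        }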

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    1) Ran DbDump with and without -r. I am expecting the
    data to stay consistent. So, after the first run it
    creates the data, and leaves 20mb in place, 3 log
    files near 100% used. After the second run it should
    update the records (which it does from the
    applications point of view) but I now have 40mb
    across 5 log files all near 100% usage.
    I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect though, is that the utilization reported by DbSpace should fall to closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (though not in key order; without -r it is slower, but dumps in key order). Keys and data should show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
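    As a concrete illustration, here is a minimal sketch of the batch-clean pattern discussed here: the application calls cleanLog() in a loop and then forces a checkpoint so the cleaned files can be deleted. The environment path is a placeholder, and the forceCleanFiles setting is left commented out because its value (a list of specific log file numbers) depends on your environment.

        import java.io.File;
        import com.sleepycat.je.CheckpointConfig;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        // Explicit batch log cleaning: call cleanLog() until it reports nothing left to
        // clean, then force a checkpoint so the cleaned .jdb files can be deleted.
        public class BatchClean {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig cfg = new EnvironmentConfig();
                // To bypass the utilization trigger for particular files while testing,
                // you could set the property discussed above, e.g.:
                // cfg.setConfigParam("je.cleaner.forceCleanFiles", "<log file numbers>");
                Environment env = new Environment(new File("/path/to/envhome"), cfg); // placeholder path
                try {
                    int totalCleaned = 0;
                    int cleaned;
                    while ((cleaned = env.cleanLog()) > 0) {   // number of files cleaned in this pass
                        totalCleaned += cleaned;
                    }
                    if (totalCleaned > 0) {
                        CheckpointConfig force = new CheckpointConfig();
                        force.setForce(true);
                        env.checkpoint(force);                 // lets the cleaned files be deleted
                    }
                    System.out.println("Log files cleaned: " + totalCleaned);
                } finally {
                    env.close();
                }
            }
        }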
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • Reshipping the log files to logical standby

    Hi All
    I am getting this error in the logical standby: "ORA-01291: missing logfile". The problem was that I had some wrong configuration in the "standby_archive_dest" parameter in the logical database, but I fixed it, and now I need to reship the logfiles from the primary to the logical standby. I am using ASM on the primary and the logical standby as well, and I did NOT turn on flashback on either side. Is there any way for me to reship the log files from the primary to the logical standby?
    Thanks

    user12302159 wrote:
    Hi All
    I am getting this error in the logical standby: "ORA-01291: missing logfile". The problem was that I had some wrong configuration in the "standby_archive_dest" parameter in the logical database, but I fixed it, and now I need to reship the logfiles from the primary to the logical standby. I am using ASM on the primary and the logical standby as well, and I did NOT turn on flashback on either side. Is there any way for me to reship the log files from the primary to the logical standby?
    Thanks
    If you have set FAL_CLIENT and FAL_SERVER correctly, then the missing archives should be transferred automatically to the standby database.
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10726/appconfig.htm#g635923
    Regards,
    S.K.

  • What are the log files created while running a mapping?

    Hi Sasi, I have a doubt: who actually generates the "Log Events"? Is it the Log Manager or the Application Services?

    Hi Anitha,
    The Integration Service will generate two logs when the mapping runs:
    1) Session log -- has the details of the task, session errors and load statistics.
    2) Workflow log -- has the details of the workflow processing and workflow errors.
    The workflow log is generated when the workflow starts, and the session log is generated once the session is initiated. For more detail please refer to the Informatica help docs. Normally the services generate their own logs, e.g. the Integration Service (IS) and the Repository Service (RS) will log their activity.
    The process below happens when the workflow is initiated [copied from the Informatica help docs]:
    1. The Integration Service writes binary log files on the node. It sends information about the sessions and workflows to the Log Manager.
    2. The Log Manager stores information about workflow and session logs in the domain configuration database. The domain configuration database stores information such as the path to the log file location, the node that contains the log, and the Integration Service that created the log.
    3. When you view a session or workflow in the Log Events window, the Log Manager retrieves the information from the domain configuration database to determine the location of the session or workflow logs.
    4. The Log Manager dispatches a Log Agent to retrieve the log events on each node to display in the Log Events window.
    Thanks,
    Sasiramesh
