Datafiles in swapping mode - for performance

Hi there,
One of the senior DBAs told me that it is better to keep the datafiles in "swapping" mode, which means the following.
Suppose we need to create 4 tablespaces, 2 for data and 2 for indexes, and we have two drives, E and F. In this case, he said, performance will be increased if we prepare:
E drive
Datafile_Tablespace_A (datafile TS no. 1)
Index_Tablespace_D (index TS for datafile no.2)
F drive
Index_Tablespace_B (index TS for datafile no.1)
Datafile_Tablespace_C (datafile TS no. 2)
According to him, Oracle works better in this swapped layout. Is that true? I was under the impression that index and data tablespaces should be built on separate drives.
Even though my question is general, for reference: the OS we are using is Windows 2003 Server, the partition is RAID-5, and the Oracle version is 10.2.0.1.
If anybody can clarify, I would be obliged.
Thanks

I'm going to default to one of Billy's responses:
{message:id=4060608}
> Irrelevant as that does not change any of the storage fundamentals in Oracle. The database does not know or care what you use as a storage system.. why should it? It is the kernel and disk/file system drivers' job to deal with the actual storage hardware. From a database perspective, it wants the ability to read() and write() - in other words, use the standard I/O interface provided by the kernel.
> I/O performance must not be a factor. If it is, then your storage layer is incorrectly designed and implemented. Striping (RAID 0), for example, must be dealt with at the storage layer and not at the application layer. Tablespaces and datafiles in Oracle make extremely poor tools for implementing striping of any sort. It does not make sense to attempt I/O balancing such as striping at the tablespace and datafile level in Oracle.
> So why then use separate tablespaces? You may need different tablespaces to implement different block sizes for performance.. but this is an exception to the rule. And you do not address actual storage performance here, but rather how Oracle should manage the smallest unit of data in the tablespace.
> So besides this exception, what other reasons? Could be you want to physically separate one logical database (Oracle schema) from another. Could be that you want to implement transportable tablespaces.
> All these requirements are quite explicit in that more than one tablespace is needed. If there is no such requirement, why then consider using multiple tablespaces? It only increases the complexity of space management.
> Consider using different tablespaces for indexes and table data. In a year's time, you may find that the index tablespace has been oversized and the data tablespace undersized. You now have too much space on the one hand, too little on the other, and no easy way to "move" the freespace to where it is needed.
> It is far easier to deal with a single tablespace - as it allows far more flexibility in how you use it for data and index objects, than attempting some kind of split.
> So I will look for a sound and unambiguous technical requirement that very clearly says "multiple tablespaces needed". If not, I will not beat myself over the head trying to find reasons for implementing multiple tablespaces.
There are also many other threads on this forum about separating data and indexes; try searching for them.
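To make the space-imbalance point above concrete, here is a minimal sketch that compares allocated versus free space per tablespace, using only the standard DBA_DATA_FILES and DBA_FREE_SPACE dictionary views. An oversized index tablespace shows up as a large FREE_MB next to a data tablespace with almost none:

  -- Minimal sketch: spot an oversized tablespace next to an undersized one.
  -- Uses only standard dictionary views; nothing thread-specific.
  SELECT df.tablespace_name,
         ROUND(SUM(df.bytes) / 1048576)              AS allocated_mb,
         ROUND(NVL(MAX(fs.free_bytes), 0) / 1048576) AS free_mb
  FROM   dba_data_files df
         LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
                    FROM   dba_free_space
                    GROUP  BY tablespace_name) fs
           ON fs.tablespace_name = df.tablespace_name
  GROUP  BY df.tablespace_name
  ORDER  BY df.tablespace_name;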

Similar Messages

  • ASM on SAN datafile size best practice for performance?

    Is there a 'best practice' for datafile size for performance?
    In our current production, we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of midpoint so the data can be striped across multiple datafiles for better performance?

    We will be using Redhat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.
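    For reference: ASM already stripes extents across all disks in a disk group, so the number of datafiles does not add striping by itself. One alternative sometimes discussed in these threads is a bigfile tablespace (available from 10g), so the file count stops being a tuning knob at all. A minimal sketch, with a made-up disk group and tablespace name:

    -- Minimal sketch: one large autoextending file instead of many 25GB
    -- smallfiles; ASM handles the striping either way. Names are made up.
    CREATE BIGFILE TABLESPACE big_data
      DATAFILE '+DATA' SIZE 100G
      AUTOEXTEND ON NEXT 10G MAXSIZE 8T;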

  • SPAM in "perform adjustment mode" for a week

    Hello
    I have left SPAM in "perform adjustment mode" for a week or so. I wonder, is there anything that needs to be checked now before running it further?
    (There are just HR localisation packages, but still: where can I see if anything has to be done, i.e. if some modification went through in this week?)
    Thank you in advance
    Jan

    Hi Jan,
    Execute SPAU and adjust the objects which are applicable.
    These adjustments need to be done before you actually release the system to end users.
    Refer to this SCN document for SPAU adjustments:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/709abcf3-7d77-2c10-8d86-b9d2cb01cf18?overridelayout=t…
    Hope this helps.
    Regards,
    Deepak Kori

  • For Performance Mode

    Hi all,
    I have a problem with performance mode. My computer details are as below:
    CPU: 2.6GHz, HT enabled
    RAM: 2x256MB Kingston KVR400X64C3/256, 3-4-4 CL3
    HDD: 40GB Seagate Barracuda ATA IV
    WinFast FX5200 128MB graphics card
    Creative 5.1 digital sound card
    350W power supply (I don't know the full details, but I think the power is working fine)
    Other things are, I think, not important.
    I am running performance mode at "Slow" and the computer is stable;
    in SiSoft Sandra the memory benchmark is 4220/4230.
    I tried Turbo mode and the memory benchmark was 4700/4725,
    but the computer was unstable. I also tried Ultra-Turbo mode, but then the computer does not work at all. I don't have enough information about DDR RAM timings and such things. What can I do about Ultra-Turbo mode? I want to run Ultra-Turbo and also be stable.
    I don't know what to do. I think this Ultra-Turbo mode is only for the best RAM, like HyperX. Can anybody help me with this matter?
    Thanks & best regards

    To hit the highest overclock with your components, I believe you want to do the following:
    Leave your ram settings at their SPD (Default)
    Have DOT turned OFF.
    Set your RAM voltage to at least 2.7.
    Set your performance mode to either turbo or ultra turbo.
    Lock your PCI/AGP Buses at 33/66 (or 34/67)
    Thereafter, incrementally increase your FSB. Once the rig becomes unstable, back off until it becomes stable again and leave your settings there.
    I suspect that either your RAM or your PSU will be the limiting factor.
    By the way—Riddick rules all.
    Edit: Mods-- I do not mean to be presumptuous, but shouldn't the filter work only on whole words, rather than internal fragments? I.e. R-i-d-d-i-c-k (without the hyphens) was displayed as Rid****. I know the filters are necessary, &c, but that's a proper noun for the love of god. Sorry, I have a pet peeve about such things.

  • Where do changed data values go if the DB is in BACKUP mode for LONG PERIODS

    Where does Oracle write if I put the database in BEGIN BACKUP mode for LONG PERIODS? Let's say I issued an "ALTER DATABASE BEGIN BACKUP" command in a busy database and forgot about it for a long time.
    I understand that when the DB is in BEGIN BACKUP mode, "the database copies whole changed data blocks into the redo stream" (page 503 of the 11.1 backup and recovery guide). But the redo stream is limited by the number of online redo logs. After some time the redo logs also won't be sufficient for the changed data after a begin backup is issued. I understand that there are archived redo logs.
    Let's say there are 2 redo log groups in this database, and 10 archive log files were generated since the ALTER DATABASE BEGIN BACKUP was issued.
    When I finally issue the "ALTER DATABASE END BACKUP" command, will Oracle sync the datafiles with the changed data blocks by reading these 10 archived log files? What happens if I delete these archived redo logs from the archive log destination?
    page 504 of 598 in the backup and recovery guide
    Caution : If you fail to take the tablespace out of backup mode,
    then Oracle Database continues to write copies of data blocks in
    this tablespace to the online redo logs, causing performance
    problems. Also, you receive an ORA-01149 error if you try to shut
    down the database with the tablespaces still in backup mode.
    it just says "performance problems", nothing more than that.*
    Any answers ? I am sure this question would have popped to some of you senior DBA people out there.

    user13076519 wrote:
    > Where does Oracle write if I put the database in BEGIN BACKUP mode for long periods? Let's say I issued an "ALTER DATABASE BEGIN BACKUP" command in a busy database and forgot about it for a long time.
    It writes just like it always does, plus it puts some extra into the redo log the [url http://oraclenz.com/2008/07/11/logging-or-nologging-that-is-the-question-part-ii/]first time a block is changed.
    > I understand that when the DB is in BEGIN BACKUP mode, "the database copies whole changed data blocks into the redo stream" (page 503 of the 11.1 backup and recovery guide). But the redo stream is limited by the number of online redo logs. After some time the redo logs also won't be sufficient for the changed data after a begin backup is issued. I understand that there are archived redo logs.
    This appears to be a typo (incompleteness, really) in the backup and recovery guide.
    The redo stream is not limited by the number of redo logs, only by the volume of data. When a log fills up, it gets archived. If all the logs fill up before the first one has finished archiving, the database will stall until the next redo log becomes available. The only limit to archiving is disk space (and bandwidth, if that is an issue, which it can be in some configurations).
    > Let's say there are 2 redo log groups in this database, and 10 archive log files were generated since the ALTER DATABASE BEGIN BACKUP was issued. When I finally issue the "ALTER DATABASE END BACKUP" command, will Oracle sync the datafiles with the changed data blocks by reading these 10 archived log files? What happens if I delete these archived redo logs from the archive log destination?
    Archived logs are archived; Oracle only reads them in recovery. You do not understand archive logs; read the concepts manual.
    > page 504 of 598 in the backup and recovery guide:
    > Caution: If you fail to take the tablespace out of backup mode, then Oracle Database continues to write copies of data blocks in this tablespace to the online redo logs, causing performance problems. Also, you receive an ORA-01149 error if you try to shut down the database with the tablespaces still in backup mode.
    > It just says "performance problems", nothing more than that.
    Because it is overgeneralizing.
    > Any answers? I am sure this question will have popped up for some of you senior DBA people out there.
    Oh, you want to send me a gift for showing where in Oracle it's documented? See [url http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:271815712711]here for something over a decade old.
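    As a footnote to the caution quoted above, a minimal sketch of how to find datafiles left in backup mode and take them all out at once, using only the standard V$BACKUP and V$DATAFILE views and the same END BACKUP statement the thread mentions:

    -- Minimal sketch: which datafiles are still in backup mode?
    -- STATUS = 'ACTIVE' in V$BACKUP means the file is in backup mode.
    SELECT b.file#, d.name, b.status, b.time
    FROM   v$backup b
           JOIN v$datafile d ON d.file# = b.file#
    WHERE  b.status = 'ACTIVE';

    -- Take every datafile out of backup mode in one statement:
    ALTER DATABASE END BACKUP;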

  • PSE 7 Editor not responding. Stuck in thinking mode for eternity...

    Hoping for some help with this issue. Pleeeezee
    I start in Organizer, choose an image to edit, click on the quick fix (I shoot in Raw), do some simple changes, then if I need more fixes I click on Open Image. The Editor window pops open and then the thinking mode for eternity is what happens next. Sometimes the image does finally appear, but it can take several minutes. It has become increasingly worse lately, to where images do not appear at all.
    I have 3GB of memory and maxed out the memory usage in the performance preferences. I just uninstalled PSE and re-installed it, and still have the same issue.
    Not sure what else I can try. If anyone has ideas or has dealt with this same issue before, please share.
    Thanks!!
    -Martin

    Please provide more specific info about your computer - processor (single or dual), OS (XP, Vista or 7), 32-bit, video card, etc.
    You realize RAW files require a lot of power to process.  Using a memory stick (ReadyBoost) may help.
    Personally, I gave up trying to process RAW in PSE and switched to Lightroom 2.  I only use PSE for editing work beyond the scope of LR2, like panoramas, merged exposures, etc.
    Dean

  • The permissions granted to user "Domain\user" are insufficient for performing this operation (rsAccessDenied)

    Hello All, 
    I believe this is a very frequently asked question in SSRS, maybe the most famous one. Many times I have solved it for others.
    But today, I spent one afternoon on this issue, unresolved.
    My environment: SSRS 2008 R2, DB in the local default instance (SQL 2008 R2)
    My windows account and one of my service accounts(launching my SSRS) are both in local admin group. 
    After configuring SSRS, on the local server, I open IE (run as administrator, using my domain service account) to access "http://localhost/reports". It pops this error:
    The permissions granted to user 'Domain\myServiceAccount' are insufficient for performing this operation. (rsAccessDenied)
    Going back to my desktop, I open IE using my Windows account to access "http://servername/reports" and see the same error, saying my Windows account doesn't have sufficient permission on that server.
    On the server side, I use SSMS to connect to the local report service and try to check who is in "system administrator" in the SSRS instance; it pops up the error below:
    The permissions granted to user '' are insufficient for performing this operation. (rsAccessDenied) (Reporting Services SOAP Proxy Source)
    If I use IE to reach "http://localhost/reportserver" (the Web Service page), both my Windows account and service account work; it doesn't complain at all.
    I have checked everything I know and am still seeing this error. Note that my Windows account and my service account are both in the local admin group.
    Anyone can share some thoughts on this?
    Derek

    Figured out finally.
    In rsreportserver.config, we had put in our custom security-control code, as below.
    <Security>
        <Extension Name="Windows" Type="Microsoft.ReportingServices.Authorization.WindowsAuthorization, Microsoft.ReportingServices.Authorization"/>
        <!--<Extension Name="Windows" Type="XXX.ReportingServices.Authorization.Authorization, XXX.ReportingServices.Authorization"/>-->
    </Security>
    When I flipped it back to the native Windows extension, it worked.
    Thanks for all your replies.
    Derek

  • "The permissions granted to user 'domain\username' are insufficient for performing this operation. (rsAccessDenied)

    HI,
    I am working on SharePoint 2013 and using the Report Viewer webpart (imported from the RSWebpart.cab file from SQL Server 2008 R2) for showing SSRS reports. I have added the Report Viewer webpart to a page and done all the configuration related to it, like setting the Report Manager URL and Report Path in the webpart properties. But when I browse that page it gives the error below:
    The permissions granted to user 'domain\username' are insufficient for performing this operation. (rsAccessDenied)
    But when I run IE as 'Run as Administrator' and open the same page containing the Report Viewer webpart, I am able to view the report on the page and the error goes away.
    I am not sure what is happening here, what the reason for such unpredictable behaviour could be, or what the workaround might be. Not every user can open IE in 'Run as Administrator' mode, so what would a possible solution be?
    Thanks in advance for the help!

    Solved.  In IE I went to the RS Home page, selected Detail View, put a check in front of every folder, went to Folder Settings and then added my domain user as a Browser in New Role Assignment. Reports work fine now.
    André

  • Adobe Reader XI, AppData Redirection & Protected Mode = Slow Performance

    I'd really appreciate any useful information that might allow the use of protected mode, but with reasonably good performance in the following scenario:
    Environment:
    - Windows 7 desktops with Adobe Reader XI
    - AppData is redirected to users' home folders
    - Desktops (5 - 10 in most cases) across a ~10 Mbps MPLS link from the server hosting the home folders and the PDF docs being accessed.
    - Protected Mode Enabled (as recommended)
    Symptom:
    - It takes an inordinate amount of time (20 - 30 seconds) to open a tiny (100KB) file from a remote file server.  By comparison, an Excel spreadsheet of similar size opens in 2 - 3 seconds.
    Temporary workarounds (either one results in SUBSTANTIALLY better, and acceptable, performance):
    - Disable Protected Mode (for obvious reasons, we'd like to have this feature enabled)
    - Disable Appdata redirection (for many other reasons, Appdata redirection makes sense for our environment)
    Other notes:
    A network trace shows a large amount of SMB/CIFS traffic (5000 - 6000 packets) to/from the AppData folder when appdata is redirected to the users' home folders.
    One thought I had was whether or not there is a means to direct an app to use AppData\Local instead of Appdata\Roaming, or if there is a means to direct Adobe reader to use an entirely different (and local to the OS) folder for whatever it needs appdata for when in protected Mode.
    Any suggestions that don't include the workarounds already described are welcome.  Thanks!

    Protected Mode sounds like a good idea, but it causes so many different problems that even some Adobe support staff regularly recommend to disable it.  I am sure that I myself have recommended that a couple of hundred times here in the Reader forum.  Why have a security option that disables normal functions like opening documents or printing them...?

  • Different transaction codes useful for Performance Monitoring

    Hi Experts,
    Please can you guide me on this question: what are the different transaction codes useful for performance monitoring, i.e. workload statistics and database statistics? What kind of statistics does each of these codes provide?
    Many thanks,
    Mithun

    Hi Mithun
    For performance issues you need to look at things in many ways, that is:
    Workload analysis
    ST03N: Statistics Records Locally
    ST03G: Statistics Records Globally
    STAD: Individual Statistics Records
    STATTRACE: Individual Statistics Records Trace
    ST07: User Distribution
    Buffers and memory
    ST02: Buffers, memory and swaps monitoring
    ST10: Table Access
    OS monitoring
    OS04: Local monitoring
    OS07: Remote monitoring
    OS01: LAN check
    Database side
    ST04: Performance overview
    DB01: Exclusive locks
    DB02: Tables/Indexes
    Background jobs monitor
    SM37
    Other tcodes
    ST22: ABAP Dumps
    SM12: Lock Entries
    SM56: Number Range Buffers
    SU56: User Buffer
    All of the above transactions need to be monitored for performance.
    Regards
    Bandla

  • Tracking mode for RMAN

    Hi,
    I want to use block change tracking for RMAN.
    I have created my tracking file to store modified blocks.
    First I run an incremental level 0 backup, and afterwards I create a table.
    The second backup is an incremental level 1, and all my datafiles are backed up.
    I don't understand why.
    Thanks

    How many datafiles do you have?
    If you create a table:
    - Oracle writes into the dictionary (SYSTEM tablespace): related datafile is updated
    - Oracle writes into the tablespace used by the table: related datafile is updated
    - Oracle writes some undo information for the dictionary update in the UNDO tablespace: related datafile is also updated.
    And you may have some internal background transactions that could also update the SYSTEM and UNDO tablespaces.
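    For reference, a minimal sketch of the block change tracking setup being discussed, plus a query that shows how much each datafile actually contributed to each backup; the tracking file path is made up:

    -- Minimal sketch: enable block change tracking (file path is made up)
    -- and verify that it is active.
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/oracle/bct/change_tracking.f';

    SELECT status, filename FROM v$block_change_tracking;

    -- After the level 0 and level 1 runs, compare blocks read vs. blocks
    -- backed up per datafile. A level 1 that lists every datafile but
    -- reads only a few blocks from most of them matches the SYSTEM/UNDO
    -- behaviour described above.
    SELECT file#, incremental_level, used_change_tracking,
           datafile_blocks, blocks_read, blocks
    FROM   v$backup_datafile
    ORDER  BY completion_time;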

  • HTTP Receiver for Performance Testing

    Dear Experts,
    I need to send the asynchronous XI responses back to the HTTP client for performance testing. Any suggestions on how this can be achieved?
    Thanks,

    Hi,
    >>> I'm not sure if HTTP receiver adapter can be used for this to send it back to the HTTP test client?
    No, it cannot. The HTTP test client can work in sync or async mode, and async does not have any responses.
    >>> But my requirement is to test the performance of 2 asynchronous web services. We send the SOAP request via HTTP
    But this is simple: send the HTTP call to SOAP, and from SOAP you send it to the final receiver, right?
    At the end of receiver processing, create a call to XI again and send it via HTTP to wherever you want,
    or to a file if you want to test receiver processing only.
    I don't see any issue here...
    Regards,
    Michal Krawczyk

  • Partitioning for Performance

    Hi All,
    Currently we have a STAR Schema with ORDER_FACT and ORDER_HEADER_DIM , ORDER_LINE_DIM, STORE_DIM, TIME_DIM and PRODUCT_DIM.
    We are planning to partition ORDER_FACT for performance improvements in both reporting and loading. We have around 100 million rows in ORDER_FACT. Daily we insert around 1 million rows and update around 2 million rows.
    We are trying to come up with some good strategies, and we have a few questions.
    1) Our ORDER_FACT does not have any date columns except INSERT_DATE and LAST_UPDATE_DATE, which are more like timestamp columns. ORDER_DATE would be the appropriate one, but we do not store it in the fact. We have ORDER_DATE_KEY, which is a surrogate key into TIME_DIM.
    Can a range partition (monthly) still be performed? (I guess we need an ORDER_DATE column in our fact.)
    If somebody has handled this situation in some other way , any guidance will be helpful.
    2) Question below is assuming - we have a partitioned ORDER_FACT on ORDER_DATE.
    Currently we are doing a merge (Update/Insert) on ORDER_FACT. We have a incremental load (only newly inserted or updated rows from source) are processed.
    Update/Insert is slow.
    Can we use PEL (Partition Enabled loading ) and avoid merge (Update/Insert) ?
    PEL is fine for new rows, since it replaces an empty partition in the target with a loaded partition from the source. How do we handle updating and inserting rows in a partition which has existing rows?
    Any help on these would be helpful.
    Thanks,
    Samurai.

    Speaking from our experience, at some point you need to build your fact rows so you need an insert/update prior to PEL anyway, and you would need your partitions closely matched to your refresh frequency for it really to be effective.
    So what we have done is focus on the "E" part of ETL.
    Our remote source database is mirrored on our side via Streams. This mirrors into a local copy that we can run various reports/ processes/ queries against without impacting production.
    We also perform a custom apply that populates a second local copy of the tables, but these ones are partitioned daily and are used for our ETL. So, at the end of the day we have a partitioned set of data that contains only the current status of rows that have changed over the day. Now, of course, this is problematic for ETL because you need to have all of the associated information with those changes in order to do your ETL.
    (Simple example: data in a customer's address record changes. Your ETL query undoubtedly joins the customer record and the address record to build your customer dimension row. But Streams only propagates the changed address record, so you wouldn't have the customer record in that daily partition for your join.)
    So, we have a process that runs after the Streams apply is finished that walks the dependency tree and populates all dependent data into the current daily partition, so, at the end of our prep process, we have a partitioned set of data that holds a complete set of source tables where anything has changed across any dependencies.
    This gives us a small, efficient daily data set to run our ETL queries against.
    The final piece of the puzzle is that we access this segment via synonyms, and the synonyms are pointed at this day's partition. We have a control structure that manages the list of partitions and repoints the synonyms prior to running the ETL. The partition loading and the ETL synonym pointing are completely decoupled so, for example, if we ever needed to suspend our ETL to get a code fix in place we can let the partition loading move ahead for a day or two and then play catchup loading the partitions in sequence and be confident that we have each end-of-day picture there to use.
    By running our ETL against only the changed data, we achieve huge efficiencies in query performance. And by managing the ETL partitions, we don't incur the space costs of a second full copy of the source, as we prune out the partitions once we are satisfied with the load at the end of a month (with full backups of course, in case there is ever a huge problem to go back and correct).
    Now for facts, of course, we expect these to be insert only. Facts shouldn't change. For dimensions we use set based fail over to row based (target only), with a couple specified to be Row Based Target Only as they are simply too large to ever complete in Set Based Mode.
    Yes, this is a bit of a convoluted process - exacerbated by our need to have a full local copy for some reporting needs and the partitioned change copy for the datamart ETL, but at the end of the day it all works and works well with properly designed control mechanisms.
    Cheers,
    Mike
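    On question 1 in this thread: a monthly range partition does need a real date (or a cleanly range-ordered key) in the fact itself. A minimal sketch under that assumption; ORDER_FACT, ORDER_DATE and ORDER_DATE_KEY come from the thread, the remaining columns and names are placeholders:

    -- Minimal sketch: add ORDER_DATE to the fact and range-partition by
    -- month. Only the names mentioned in the thread are real.
    CREATE TABLE order_fact (
      order_date_key  NUMBER       NOT NULL,  -- surrogate key into TIME_DIM
      order_date      DATE         NOT NULL,  -- added to drive partitioning
      order_amount    NUMBER(12,2)             -- placeholder measure
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p_2011_01 VALUES LESS THAN (DATE '2011-02-01'),
      PARTITION p_2011_02 VALUES LESS THAN (DATE '2011-03-01'),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );

    -- For question 2: PEL boils down to a partition exchange. Load a
    -- staging table shaped like one partition, then swap it in
    -- (the staging table name is made up).
    ALTER TABLE order_fact
      EXCHANGE PARTITION p_2011_02 WITH TABLE order_fact_stage
      INCLUDING INDEXES WITHOUT VALIDATION;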

  • Request: AHCI/IDE controller mod for MSI EX610 BIOS v.3.09

    Hoping to eliminate my Windows 7 installation problem mentioned in https://forum-en.msi.com/index.php?topic=141925, I would like to request an AHCI/IDE and SATA controller (are they the same???) mod for the MSI EX610 (the latest BIOS version is v.3.09), like the one that was previously done for the EX600:
    Quote from: Svet on 28-May-09, 21:04:10
    Purpose of the MSI EX600 Notebook 5.09 BIOS Mod:
    * Rebuild option to control Sata controller mode per user request: http://forum-de.msi.com/index.php?page=Thread&threadID=90392
    Would it be too much to ask to have the Fn/LCtrl buttons swapped at the same time? (Would it be possible to build this on the previous mod you've already done to swap the Fn/LCtrl keys of the EX610 at https://forum-en.msi.com/index.php?topic=123070.0?)
    Quote from: Svet on 31-December-08, 02:18:09
    Purpose of the EX610 3.09 BIOS Mod:
    * Reverse/swap the notebook keys functions of the "Fn" key with Left "Ctrl" key.
     {E.g "Fn" key will act as "Ctrl" key, and "LCtrl" will act as "Fn" key}
    Thanks for all your help and support.
    Donated as requested.
    Best wishes

    Quote from: Bas on 02-October-11, 03:44:06
    AHCI/IDE is not an issue in this, Windows7 supports both.
    The shutdown must have some other reason.
    Vista was working beautifully on this machine. It has to be some driver/BIOS issue. Could someone however send me the modified BIOS, just to make sure? I would be very thankful.
    Also, maybe I need to preload some drivers for Windows 7 to be able to finish smoothly? If so, where can I get the chipset drivers for Win7/Vista?
    I still assume that as the original poster disappeared after he got the modified BIOS, his problem was solved with that.
    Would be very thankful for any help.

  • Request: BIOS/EC mod for EX300

    Hi there,
    I have the 1.06 Bios/EC versions installed on my EX300.  I am interested in finding a mod which will allow swapping the Fn/Ctrl buttons.
    Thanks,
    dtan

    Quote from: roelm12001 on 12-March-11, 20:21:07
    sir please send me the BIOS/EC mod for EX300. thanks in advance!
    Hello,
    The request can be processed upon donation: https://forum-en.msi.com/index.php?topic=134259.0
    Once done, the request will be e-mailed to you [up to 24 hours max]
    Best Wishes,
    Svet
