1 large LUN or many smaller LUNs

Hi,
I'm running Oracle 10g/11g. I'm NOT using ASM (that isn't an option right now). My storage is IBM DS3500 with IBM SVC in front of it.
My question is: is it better to have 1 large LUN or many smaller LUNs for the database (assuming it's the same number of spindles in both cases)?
Are there any limitations (queue depth, etc.) I need to worry about with the 1 large LUN?
Any info would be greatly appreciated.
Thanks!

Hi,
You opened this thread on the ASM forum but you are not using ASM (???????), which makes your questions difficult to answer here.
Well...
First you need to consult the manuals/whitepapers/technotes of the filesystem that you will use, to check what the recommendations are for running a database on that filesystem.
e.g. using JFS2 on AIX you can enable CIO (concurrent I/O)...
Another point:
Creating large LUNs can be useful or not; it all depends on the characteristics of your environment.
e.g. I believe it is not good to place two databases with different access/throughput characteristics on the same filesystem. One database can cause performance issues for the other if they share the same LUN.
I particularly dislike large LUNs for an environment that will store several databases. I usually use large LUNs for large databases, and even then without sharing the area with other databases.
My thoughts are in {message:id=9676881}, although that post is about ASM.
I recommend you read it:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#PFGRF015
Regards,
Levi Pereira

Similar Messages

  • Few large nodes or many small nodes

    Hi guys,
    In general, which option is better for implementing a RAC system: a few large nodes or many small nodes?
    Say we have a system with 4 nodes of 4 CPUs each and a system with 8 nodes of 2 CPUs each. Will there be a performance difference?
    I understand there won't be a clear-cut answer for this, but I'd like to learn from your experiences.
    Regards,

    Hi,
    The worst case in terms of block transfer is 3-way: it doesn't matter if you have 100 nodes, a single block will be accessed in at most 3 hops. But there are other factors to consider.
    For example, if you're using FC for SAN connectivity, I'd assume connecting 4 servers could cost more than connecting 2 servers.
    On the load side, let's say your load is 80 (in whatever units) and it is equally distributed among 4 servers, so each server carries 20 units. If one goes down, or is shut down for a rolling patch, its load is distributed among the other 3, so each of those will carry 20 + 20/3 = 26.67. In the same scenario with only two servers, each carries 40, and if one goes down the remaining server has to carry the entire load. So you have to do some capacity planning in terms of CPU to decide whether 4 nodes or 2 nodes is better (a small sketch of this arithmetic follows below).
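    To make that failover arithmetic concrete, here is a minimal sketch using the example figures from the reply (a total load of 80 units and clusters of 2 and 4 nodes; the numbers are the example values, not a recommendation):

    public class FailoverLoad {
        public static void main(String[] args) {
            double totalLoad = 80.0;          // example load from the post, arbitrary units
            int[] clusterSizes = {2, 4};      // the two RAC configurations being compared

            for (int nodes : clusterSizes) {
                double perNode = totalLoad / nodes;            // steady-state load per node
                double afterFailure = totalLoad / (nodes - 1); // load per surviving node
                System.out.printf("%d nodes: %.2f per node, %.2f after one node fails%n",
                        nodes, perNode, afterFailure);
            }
        }
    }

    With 4 nodes the survivors go from 20 to about 26.7 units each; with 2 nodes the surviving server jumps from 40 to 80.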

  • Dividing one large image into many smaller images

    Dear Java developers,
    I have one large image and I want to divide it into many smaller images, but I don't see a Java API to do it.
    Can anybody help me?
    Thanks in Advance,

    I'd guess using BufferedImage and subimages thereof is faster than filtering it, although it depends a lot on the implementation of the original image source and its caching strategies. But it's pretty certain that when you create a BufferedImage appropriate for your current color model, you avoid most of the conversions which may be needed when rendering directly from an image source (a minimal getSubimage sketch follows at the end of this post).
    Having said that, the image-source-and-filtering approach may even use more memory and CPU than the BufferedImage approach, at least temporarily. But the image source is allowed to release almost all memory associated with the image, down to retaining only the original URL.
    In simpler words:
    - With BufferedImage you can be quite sure how much memory it will need. Add up the space needed for the Raster and auxiliary data and there you are. It won't change much over time. But it's not present in JDK 1.1.
    -- Simple, predictable and modern.
    - ImageSource is pretty much opaque in how much memory it will use. However, its interface allows dropping most resources and re-creating them on demand. Of course, you'll know what it does when you're implementing it yourself, which I tend to do from time to time.
    -- Complex (flow control), opaque but present in JDK 1.1.
    Your mileage may vary. There would be no challenge in programming if there were no tough decisions to be made ;-)
    /kre
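    Following up on the BufferedImage suggestion above, here is a minimal sketch that tiles a large image with BufferedImage.getSubimage; the input file name and the 256x256 tile size are assumptions, not details from the original post:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ImageTiler {
        public static void main(String[] args) throws IOException {
            BufferedImage source = ImageIO.read(new File("large.png")); // hypothetical input file
            int tileW = 256, tileH = 256;                               // assumed tile size

            for (int y = 0; y < source.getHeight(); y += tileH) {
                for (int x = 0; x < source.getWidth(); x += tileW) {
                    int w = Math.min(tileW, source.getWidth() - x);   // clip tiles at the right edge
                    int h = Math.min(tileH, source.getHeight() - y);  // clip tiles at the bottom edge
                    // getSubimage returns a view that shares the underlying raster with the source
                    BufferedImage tile = source.getSubimage(x, y, w, h);
                    ImageIO.write(tile, "png", new File("tile_" + x + "_" + y + ".png"));
                }
            }
        }
    }

    Note that getSubimage returns a view backed by the original raster; if the tiles must be modified independently of the source, draw each one into a fresh BufferedImage before writing it out.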

  • Simultaneous hash joins of the same large table with many small ones?

    Hello
    I've got a typical data warehousing scenario where a HUGE_FACT table is to be joined with numerous very small lookup/dimension tables for data enrichment. The joins with these small lookup tables are mutually independent, which means that the result of any of these joins is not needed to perform another join.
    So this is a typical scenario for a hash join: the lookup table is converted into a hash map in RAM, fits there without drama because it's small, and a single pass over the HUGE_FACT suffices to get the results.
    The problem is that, as far as I can see in the query plan, these hash joins are not executed simultaneously but one after another, which makes Oracle do a full scan of the HUGE_FACT (or an intermediate enriched form of it) as many times as there are joins.
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hash map in memory and doing a single pass over the HUGE_FACT while looking up matches in all of these hash maps)? If so, how to do it?
    Please note that parallel execution of a single join at a time is not what this question is about.
    Database version is 10.2.
    Thank you very much in advance for any response.

    user13176880 wrote:
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    Correct. But why do you think this is an issue? Because of this:
    which renders Oracle to do the full scan of the HUGE_FACT (or any intermediary enriched form of it) as many times as there are joins.
    That is not (or at least should not be) true. Oracle does one pass of the big table, and then sequentially joins to each of the hash maps (one for each of the smaller tables).
    If you show us the execution plan, we can be sure of this.
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously? If so, how to do it?
    Yes there is, but again you should not need to resort to such a solution. What you can do is use subquery factoring (the WITH clause) in conjunction with the MATERIALIZE hint to first construct the Cartesian join of all of the smaller (dimension) tables, and then join the big table to that (see the sketch below).
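    To illustrate the suggested query shape, here is a minimal JDBC sketch; the connection string, credentials, and the table/column names (huge_fact, dim1, dim2 and their keys) are assumptions for illustration, not details from the thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DimensionJoinSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {

                // Materialize the Cartesian product of the small dimension tables once,
                // then join the big fact table to it in a single pass.
                String sql =
                      "WITH dims AS ( "
                    + "  SELECT /*+ MATERIALIZE */ d1.d1_key, d1.d1_name, d2.d2_key, d2.d2_name "
                    + "  FROM dim1 d1 CROSS JOIN dim2 d2 "
                    + ") "
                    + "SELECT f.id, d.d1_name, d.d2_name "
                    + "FROM huge_fact f "
                    + "JOIN dims d ON f.d1_key = d.d1_key AND f.d2_key = d.d2_key";

                try (ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        // Process the enriched rows here.
                    }
                }
            }
        }
    }

    Whether the Cartesian product of the dimension tables stays small enough to materialize is the key assumption; with many or larger dimension tables it can grow quickly.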

  • How to split a large PDF into many smaller PDFs

    In my wanderings, I couldn't find an answer to this question. Thus my post.
    I have a large, 20-page PDF. I'd like to split that PDF into 10 two-page PDFs. How can I do it?
    The 20-pager is a PDF of a number of account statements, which vary in length from 1 to 3 pages. I'd like to split the PDF so I end up with one PDF per account.
    In advance, thank you for your help

    Hi.
    It's simple: open the PDF, go to File, Print, and in the print dialog select Copies & Pages, enter the range you want, click PDF/Save as PDF.
    Good Luck.
    MacMini G4 1.25GHz 1GB   Mac OS X (10.4.9)  

  • Creating and organizing one large document from many small "forms"

    I'm organizing a symposium and attendees submit abstracts.
    I have set up a "form" using Word to distribute to people to fill out and named all the fields, example: "Title," "Firstauthor," "Body." Etc.
    They are going to email me completed forms.
    I was hoping that it'd be easy to make a drag-and-drop script so that I can just drag and drop these files to create a big document that would organize the abstracts & attendees into a program.
    Word has the "catalog" creation option in the Merge Manager, but it uses some sort of tab-delimited scheme for acquiring its data, whereas mine is going to be in the form of various fields and field names.
    At my disposal I also have FileMaker Pro, but it doesn't really seem to be able to do what I want. I don't want to manually enter information (hundreds of attendees).
    Could I make a script that would:
    tell FileMaker to open a database and create a new entry
    tell Word to get from a field "FirstName" and copy to clipboard
    tell FileMaker to paste from clipboard into cell "firstname" of the new entry
    ...etc. with the other fields, and so on, to create the database in FileMaker? Then in Word, the Merge Manager has a user-friendly interface to merge data from a FileMaker database and use the catalog creation feature. Is this too convoluted?
    Any suggestions on the best route? Any ideas? I don't think what I'm trying to do is all that unusual.
    I've never written an AppleScript, but I have used them and read about the language. I am generally a quick learner... I just need to be pointed to the best plan of attack, or to know what the capabilities are.
    Powerbook G4   Mac OS X (10.4.8)   OfficeX and File Maker Pro

    Hi Ettor,
    Firstly, if the document is password protected, then I don't know if it could be done. Tatro's documents probably aren't. The first step is to unprotect the fill-in form document with UI scripting:
    tell application "Microsoft Word" to activate
    tell application "System Events"
    tell process "Microsoft Word"
    tell menu bar 1
    tell menu "Tools"
    delay 1
    click menu item "Unprotect Document"
    end tell
    end tell
    end tell
    end tell
    The document needs to be unprotected for macros to work. From there you can run a saved macro that sets the save preference "Save data only for forms". I named my recorded macro "ChangeFormPref". The macro could probably save the file also, but I wanted a simple macro. To run the macro, I found this on the Internet somewhere:
    do Visual Basic "Application.Run \"ChangeFormPref\""
    At this point, when you save the document, it's saved as text with AppleScript. Here's the entire script with no error checking:
    set dp to (path to desktop as string)
    set fs to (dp & "new.txt")
    tell application "Microsoft Word" to activate
    tell application "System Events"
        tell process "Microsoft Word"
            tell menu bar 1
                tell menu "Tools"
                    delay 1
                    click menu item "Unprotect Document"
                end tell
            end tell
        end tell
    end tell
    tell application "Microsoft Word"
        activate
        do Visual Basic "Application.Run \"ChangeFormPref\""
        delay 1
        save front document in fs
    end tell
    The delays may not be necessary, except for the one that waits for Word to activate. Here, I just placed the new.txt file on the desktop for testing.
    Next, AppleScript could easily concatenate the files, creating data for a database. I would probably use the new.txt file as a temporary file: read that file, concatenate it to a main file, clear the temp file, rewrite to it with Word, etc. It might be faster, though, to create all the files first with some naming convention.
    I wasn't sure if Tatro was coming back, but am glad someone may use it.
    Note that Tatro is using Word X.
    Edited: I should give a warning that if you unprotect the document and protect it again, you lose the data; reprotecting seems to clear the form.
    gl,

  • Large servlet or several small ones??

    I am building a servlet for a web application which is becoming quite large, and I am beginning to wonder about the resources this will use on the server.
    Does anyone have any views on whether it would be better to split the servlet into several smaller ones or keep one large do-it-all servlet?
    cheers
    chris

    I read these questions and answers, and I'm sure small servlets are the better programming approach, but my question is: what is faster?
    I mean, one big servlet needs time to load and initialize, but only the first time; many small servlets, or a framework, need time to instantiate objects on every call.
    Am I wrong?
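    For what it's worth, a servlet container normally creates one instance of each servlet and reuses it across requests, so the per-call cost comes mainly from objects created inside the service methods rather than from the number of servlets. Below is a minimal sketch of a middle ground, a single dispatcher servlet whose handlers are built once in init(); the javax.servlet API is assumed, and the class and handler names are hypothetical:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class FrontControllerServlet extends HttpServlet {

        private final Map<String, Handler> handlers = new HashMap<>();

        @Override
        public void init() {
            // Handlers are created once, when the container loads the servlet,
            // not on every request.
            handlers.put("/orders", new OrdersHandler());
            handlers.put("/customers", new CustomersHandler());
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            Handler h = handlers.get(req.getPathInfo());
            if (h == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            h.handle(req, resp); // per-request work only; no per-call object churn
        }

        interface Handler {
            void handle(HttpServletRequest req, HttpServletResponse resp) throws IOException;
        }

        static class OrdersHandler implements Handler {
            public void handle(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().println("orders");
            }
        }

        static class CustomersHandler implements Handler {
            public void handle(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().println("customers");
            }
        }
    }

    Either way, class loading and initialization happen once; what you allocate per request is what you actually control.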

  • Alu 2s fails at copying many small files

    Hi,
    when I transfer big files I get nice speeds and have no problems (around 80 MB/s).
    But then I wanted to back up my mail directory, which contains many small (few-KB) email files - around 40,000.
    The speed drops to about 5 MB/s. I know it's supposed to drop with small files, but this seems a bit extreme.
    When the copying is done, the LED keeps flashing.
    When I try to copy a larger file afterwards, the speed is around 1 MB/s.
    The LED keeps blinking even when no files are being transferred.
    When I force-unmount the drive and reconnect, many files are filled with nulls, so obviously it didn't complete the writing process properly.
    I waited over 30 minutes but the LED keeps blinking; a software unmount doesn't work in this state.
    Is this a known bug? It's really annoying.
    First the annoying virtual CD drive and now this.
    I'm very disappointed.

    Hi
    Try to defragment the external HDD.
    You can use the built-in Windows Disk Defragmenter, which can be found under
    All Programs -> Accessories -> System Tools

  • Pros and cons of a large log buffer versus a small log buffer

    What are the pros and cons of a large log buffer versus a small log buffer?
    Many people suggest that a small log buffer (1-3 MB) is better because we can avoid wait events for users. But I think there can also be an advantage to a bigger one, because we can reduce redo log file I/O.
    What is the optimal size of the log buffer? Should I consider OLTP vs. DSS as well?

    Hi,
    It's interesting to note that some very large shops find that a >10 MB log buffer provides better throughput. Also, check out this new world-record benchmark, with a 64 MB log_buffer. The TPC report notes that they chose it based on cpu_count:
    log_buffer = 67108864 # 1048576 x cpu_count
    http://www.dba-oracle.com/t_tpc_ibm_oracle_benchmark_terabyte.htm

  • PLL Library Size - Few Larger Libraries Or More Smaller Libraries

    Hi
    I am interested in people's thoughts on whether it is better to have fewer, larger libraries or more, smaller libraries. If you do break things into smaller PLL libraries, should the grouping be such that some forms will not use the modules in a given library, i.e. the library does not have to be attached to all forms? For common modules that all forms require access to, do you achieve anything by having many little libraries rather than one larger library?
    Is it correct that PLL libraries are loaded into memory at run time when the form is loaded and the library is attached?
    What are the issues to consider here?
    Thanks

    Hi Till,
    My honest opinion...do not merge the libraries. Switch between them using iPhoto Library manager and leave it at that.
    A 22 gig library is way too big to run efficiently.
    Lori

  • TS4436 Seems to me this is a bunch of self serving crap. I have many small cameras that don't do this, in the same specs.

    Seems to me this is a bunch of self serving crap. I have many small cameras that don't do this, in the same specs.

    gdgmacguy,
    Let me help you a bit with the English: "As a fellow user here I can tell you that no one cares what you are saying".
    Please let me know if the translation is not correct.
    The camera *****. I'm not holding it wrong.

  • Many small purchases, or one big one?

    Hello, I'm actually really new to the credit world and just got two cards, a Chase Freedom and a Discover IT for Students. I want to get started building my credit and increasing my credit limit. I realize it's a slow process (and maybe getting two cards at once wasn't the best idea), but I want to make sure I don't needlessly slow down this process.
    My main question is: are many small purchases in a month better than just one big one, or does it even matter? For example…
    Chase Freedom CL: $1200
    Discover IT CL: $1000
    I was hoping to use these cards to make payments on my tuition. If every month I only make a $200 payment from my Discover card, is that better or worse than buying 20 items at $10 apiece? Again though, the ultimate goal is to increase my CL and CS.

    Great cards to start on. I wish I had gotten two in college; I just graduated and got my second. Like others have said, the number of charges doesn't matter as long as you keep your overall and individual utilization at or under 30%, which you would be doing at the charges you're describing. If need be, don't shy away from making multiple payments in a month to keep your utilization low while using the cards as much as you like (responsibly, and within your need/ability of course) and earning the rewards. Enjoy building. You're definitely off to a good start!

  • How do I divide a large catalogue into two smaller ones on the same computer?

    How can I divide a large catalogue into two smaller ones on the same computer?  Can I just create a new catalogue and move files and folders from the old one to the new one?  I am using PSE 12 in Windows 7.

    A quick update....
    I copied the folder in ~/Library/Mail/V2/Mailboxes that contains all of my local mailboxes over to the same location in the new account. When I go into Mail, the entire file structure is there, however it does not let me view (or search) any of the messages. The messages can be accessed through the Finder, though.
    I tried to "Rebuild" the local mailboxes, but it didn't seem to do anything. Any advice would be appreciated.
    JEG

  • Point-in-polygon performance - large polygons with many vertices

    Dear Everybody,
    I was experimenting with the performance of point-in-polygon queries for very different polygon layers. My experience was that queries can be extremely slow if the polygons are large and have many vertices (e.g., the administrative boundary of a state).
    I can also explain why this is so:
    * because of the large area, the spatial indexes do not help much: after the first filtering step (the filtering on the basis of the indexes) the intermediate result is still very large, and
    * the large number of potential hits from the first filtering step has to be processed in a second filtering step that is very processor-intensive due to the many vertices.
    That is, there are two causes making the second filtering step very expensive.
    Could you please comment on this? Does anybody have any experiences?
    Thanks in advance!

    Hi Gergely,
    Thanks for your suggestion, I am thinking about that too. But the polygon I am using is built dynamically by the user while they use the application. They choose some lines, and then I build a buffer around those lines based on some parameters the user inputs, and it can get complicated since it varies all the time. I was thinking maybe there is a parameter in the sdo_buffer function that would allow reducing the number of vertices used to build the buffer polygon? I'm kind of stumped on this problem; maybe I should open a TAR to request a better solution. A query which sometimes takes an hour to finish is obviously not acceptable, although it is not always the case. :-(
    Tim

  • Big photo made of many small photo parts

    Hi,
    Sorry if my question is stupid - I am new to Apple and iLife. Today I received an iLife 06 ad from Apple by email, and it shows a big photo of the Golden Gate Bridge in San Francisco (I think), which is made up of many small photos. How can I do the same with my photos in iLife? Or should I use some other software?
    Thanks for your answers.
    PowerBook G4   Mac OS X (10.4.6)  

    What you are seeking is software to construct a photo mosaic. I'm sure there are several applications you could find that can do the job. You might want to start with the freeware MacOSaiX. It's pretty straightforward as to choosing your source image (the image that will be constructed from other photo elements) and the source(s) for the mosaic elements. It may take some time to get a feel for the "tile size" and the number of tiles to use to construct the final image, however. Start small and experiment until you achieve a result you like.
    You can download MacOSaiX from the author's .Mac site at <http://homepage.mac.com/knarf/MacOSaiX/> or by searching for MacOSaiX on VersionTracker <www.versiontracker.com> or other software repositories. If you like the program and enjoy using it, please consider a donation to the author in recognition of his efforts.
