Need best practices advice

Hey guys,
Can anyone share with me the best practices for setting up an Oracle database? I know that the amount of redo, the grouping, the file system layout, etc. depend on the size of your DB. So to help, here are the specs of my DB:
oradata: 200 GB
change rate: 50 KB/s (I got that by dividing the total size of my archived redo logs by the time between the first and last archive log).
This is a standard database (not OLTP or data warehouse) used to store client information.
My RPO (Recovery Point Objective) is 30 minutes.
Some quick questions:
1. How should I lay out the file system?
2. How many redo logs/groups, and what size?
3. How many control files, and where should I put them?
4. How should I set up log switching?
No docs please, just quick answers; I don't want to read a 300-page Oracle document :-) That's why I'm counting on your knowledge.
Thanks
Edited by: Sabey on 9-Feb-2011 8:01 AM

Sabey wrote:
> Ok, a bit more information.
> Storage: SAN, RAID 5 disks only

Since it's SAN, the RAID 5 (which is generically bad for performance in any update-heavy environment) will have minimal adverse effect, because the RAID 5 is hidden behind a massive cache. Just try to spread the data files across as many disks as possible.
Oracle works best with datafiles on 'SAME' (Stripe And Mirror Everything). Spread the data files across all available disks, and mix data and indexes to get some randomization.

> No ASM

Pity. A lot of potential transparency will be side-stepped.

> OS: Solaris 10 on an M4000 (2 SPARC 2.1 GHz CPUs, 4 cores each), 16 GB RAM

Finally some meat. ;-)
I assume Enterprise Edition, although for this size, the proposed transaction rate, and this configuration, Standard Edition would likely be sufficient, assuming you don't need any EE-specific features.
You don't mention the other things that will be stealing CPU cycles from Oracle, such as the app itself or batch jobs, so it's not easy to suggest an initial guess at memory sizing. App behaviour will dictate PGA sizing, which can be as important as SGA size - if not more so. Given the bland description of the app, I'd leave 2 GB for the OS, subtract whatever else is required (app & batch, anything else running on the machine), and split the remaining memory 50/50 between SGA and PGA until I had stats to justify changing that.
> Like I said, I expect a change rate of 50 KB/s. Is there a rule of thumb for the size of the redo logs, how many to have, etc.? No bulk loads; data is entered by people through a user interface, no machine-generated data. Some read queries for reports, but not a lot.

Not too much to worry about, then. I'd shoot for a minimum of 8 redo log groups, mirrored by the Oracle software to separate disks if at all possible, and size the log files to switch roughly every 15 minutes under typical load. From the looks of it, that would be (50 KB/s * 60 s/min * 15 min), or about 45 MB - moderately tiny. And set ARCHIVE_LAG_TARGET to 900 seconds so you have a predictable 15-minute switch frequency.
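A minimal sketch of that setup (the file paths, the ORCL directory name, and the 64M rounding are illustrative assumptions, not your actual layout):

ALTER DATABASE ADD LOGFILE GROUP 1
  ('/u01/oradata/ORCL/redo01a.log', '/u02/oradata/ORCL/redo01b.log')
  SIZE 64M;
-- ...repeat for groups 2 through 8, alternating members across disks...

ALTER SYSTEM SET archive_lag_target = 900 SCOPE = BOTH;  -- value is in seconds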
> BTW, what about direct I/O? Should I mount all Oracle file systems in that mode to prevent use of the OS buffer cache?

Again, this question would be eliminated by using ASM, but here is Tom Kyte's answer confirming direct I/O: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4159251866796
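If you stay on buffered file systems, you can either use the forcedirectio mount option on Solaris UFS, or ask Oracle itself for direct and asynchronous I/O. A sketch of the latter, assuming your release and file system support it:

ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
-- static parameter: takes effect after the next instance restart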
Your environment is very, very small in Oracle terms - not too much to fuss over. Just make sure you have a decent backup/recovery/failover strategy in place and tested. Use RMAN for the backup/recovery, and either Data Guard (or Dbvisit for Standard Edition) for the failover.
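For the 30-minute RPO, a minimal RMAN sketch as a starting point (the scheduling is an assumption left to you: the archived-log sweep must run at least every 30 minutes, while the full backup can be nightly):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;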

Similar Messages

  • Need best practice advice on a 2-node RAC configuration/setup

    I need to implement a 2-node RAC setup on Red Hat Linux 5.
    Right now I ran ifconfig, and it shows:
    ifconfig | grep "^[eb]"
    eth0 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth0:1 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth0:2 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth3 Link encap:Ethernet HWaddr 18:03:73:EE:7E:81
    eth3:1 Link encap:Ethernet HWaddr 18:03:73:EE:7E:81
    eth4 Link encap:Ethernet HWaddr 18:03:73:EE:7E:83
    eth5 Link encap:Ethernet HWaddr 18:03:73:EE:7E:85
    I did not see any bond0 or bond1.
    I think Oracle recommends that the private interconnect be bonded. We use CTP-BO, so I want to know: should I request that the interfaces be bonded, plus the service?
    What is the command to test that the interfaces are set up correctly?
    Thank you in advance for all responses.

    Which version are you setting up, and what are the requirements? If you are discussing the use of NIC bonding for high availability: beginning in 11.2.0.2 there is a concept of "High Availability IP", or HAIP, as discussed in the pre-installation chapters,
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/prelinux.htm, section 2.7.1 Network Hardware Requirements.
    In essence, using HAIP eliminates the need to use NIC bonding to provide redundancy.
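    Once Grid Infrastructure is up, a quick sanity check is to ask the database which interfaces it is actually using for the interconnect (a sketch; the V$CLUSTER_INTERCONNECTS view is available in 11g):

    SELECT name, ip_address, is_public, source
      FROM v$cluster_interconnects;

    With HAIP in use you would expect to see link-local 169.254.x.x addresses here, sourced from the cluster registry.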

  • Need help/advice & guidance on whether to switch my career to SAP-PM from core industries after spending 9 years in the core field, in maintenance

    Hello advisers/experts,
    I need help/advice & guidance on whether to switch my career to SAP-PM from core industries after spending 9 years in the core field, in maintenance.
    I am now thinking of doing a SAP-PM certification course with an authorized SAP partner in India, in Pune.
    So can anyone suggest an authorized SAP partner in India, in Pune?
    My introduction (about myself): I have a Diploma in Mechanical Engineering and a total of 9 years of experience in the mechanical maintenance field, across different manufacturing companies in steel, auto ancillary & cotton plants.
    Is it the right decision to change my career from the core sector to SAP-PM?
    Is there very good scope in SAP-PM for me after 9 years in the core maintenance field?
    Please guide me.
    Warm regards,
    Ravin

    Ravindra,
    SAP PM is a very niche module and very much in demand; at the same time, being niche, getting into it and getting your first implementation is also difficult.
    Your decision to join SAP authorized training is the best option: as a certified consultant you have a better chance of getting a break as a fresher, and you can then keep working in the field; otherwise it would be a waste of your intellectual energy.
    Just search sap.com for training, or email or chat with them, and they will give you the training center details,
    but very few training classes are available. Keep trying; you will get lucky soon.

  • Linux native multipath and ASM best practices questions

    Hello,
    I'd like to know your opinions on some questions I have.
    I am using Linux native multipath without ASMLib, and I wonder:
    1-
    Is it mandatory/best practice to partition (with fdisk) the device-mapper LUNs before using them to build an ASM diskgroup, or does Oracle ask you to partition them because ASMLib works better on a partition? In other words, are there any issues with using /dev/mapper/mpath1 directly, or do I have to use /dev/mapper/mpath1p1 with a 1 MB offset?
    2-
    Is it better to assign the proper user/group to the multipath LUNs via rc.local or via udev rules? Is there any difference?
    Please write what you have experienced.
    Thanks and bye

    ottocolori wrote:
    > Hello,
    > I'm trying to get a clearer picture of it, and as far as I know:
    > 1 -
    > Assuming you need to use the whole disk,
    > partitioning it is mandatory only if you use ASMLib, as it works only on partitioned disks.
    Yes, you need to partition the disk before presenting it to ASMLib.
    ottocolori wrote:
    > 2 -
    > There is no need to skip the first cylinder, or at least I can't find official info about that.
    > What do you think about it? TIA
    No need on the Linux platform to skip the 1st cylinder. If I remember correctly, you do need to skip the 1st cylinder on Solaris, as there is a bug there.
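    For question 2, a udev rule is generally preferred over rc.local, because udev re-applies the ownership automatically whenever the device node is recreated (reboot, path failover, rescan), while rc.local only runs at boot. A minimal sketch, assuming a hypothetical multipath alias mpath1 and an oracle:dba software owner:

    # /etc/udev/rules.d/99-oracle-asm.rules -- sketch; alias and ownership are assumptions
    KERNEL=="dm-*", ENV{DM_NAME}=="mpath1", OWNER="oracle", GROUP="dba", MODE="0660"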
    Cheers

  • Which Photoshop fits my needs best?

    I need some good advice. I want to purchase the right Photoshop software for my needs. I am an avid amateur photographer using a Nikon D80 and shooting in NEF (Nikon RAW) format. Which software is best: CS4 Extended, Lightroom 2, or something else? Please advise; if possible, a short explanation of which one would be very helpful in making a decision. I run Windows 2000.
    Thanks

    The problem that you face is mentioned up-thread. To get the latest ACR (Adobe Camera Raw) plug-in for your camera, you will need a later version of PS. Unfortunately, your OS will not run those. Kind of a Catch-22, with variations.
    Now, I run Win2K on one older machine and love the OS. Still, for work with newer cameras, I ignore it, for the above reasons.
    It is probably time to relegate that computer to the background, as I have done. I use it only for some peripherals that just do not run on newer OS's - clients still want some stuff from those old SyQuest disks and SyJet cartridges. Also, I have hundreds of DATs that will only play with the old Veritas software on that box. Just bite the bullet and get a new box with Win7, and a newer PS.
    Good luck,
    Hunt

  • Hi, I need help and advice. Basically my ex-partner and I both had iPhones and synced them with the same computer under the same ID. We split; I have a new laptop and now it keeps asking for the old ID or it'll erase the apps bought on the old account.

    Hi, I need help and advice. Basically my ex-partner and I both had iPhones and synced them with the same computer under the same ID. We split up, and now I'm trying to get all my apps and info onto my new laptop with a new account, but it keeps asking me for the old Apple ID, which she is still using and whose password she changed. I tried backing it up, but still nothing. When I try to back up purchased items (apps etc.), it keeps asking for the old one. Help!

    See "Recover your iTunes library from your iPod or iOS device". But you'll still need the password.
    Once you have the computer authorized to use the account, she could change the password again to stop you buying apps on her card (assuming it's not on yours!). It would lock you out of upgrading them too, but they should keep working unless she uses the deauthorize-all feature.
    It depends on how amicable the split is...
    tt2

  • Need the best architecture design for the following scenario

    Hi Experts,
    I need the best architecture design for the following scenario:
    Sender: Mail
    Receiver: if the sender's body contains "Approve", then call the SOAP adapter. If the SOAP call executes successfully, then send a Mail ("SOAP adapter executed successfully") and an SMS. So the receivers are SOAP, Mail and SMS.
    My current approach:
    Three message mappings:
    Mapping 1: mail to SOAP
    Mapping 2: mail to Mail
    Mapping 3: mail to SMS
    In the interface determination: select all three operation mappings, and select "order at runtime".
    Issue with the current approach: the SOAP call should complete successfully first, and only then should my Mail and SMS operation mappings execute.
    But the problem is that the Mail and SMS mappings complete in the integration engine before the SOAP call has succeeded in the adapter engine.
    Note: it is possible for the SOAP request to fail in the adapter engine.
    Kindly help me: am I going the correct way, or should I change the architecture?
    Thanks in advance!!
    Regards, Kumar

    What do you mean by successful execution of the SOAP call? Are you talking about a successful response (happy flow) from the SOAP call, as opposed to an application error? Then, based on the response, you want to decide whether to send the mail and SMS. How big is your SOAP call? If your SOAP interface is very simple, I can give you another possible way:
    The sender sends the message; use a mapping to read the content and then make the SOAP call in the mapping itself, and based on the SOAP response decide about the two receivers (mail and SMS). If your SOAP call is a very simple one, you can go for it. Otherwise I would not recommend this: the reason is that you lose some monitoring visibility for a SOAP call made inside a mapping.
    The other option is to go for ccBPM. Here you receive the message and use a send step to the SOAP interface, which is a synchronous step. Then, based on the response, create another block with fork steps, one for each of the two receivers (mail and SMS). If the response is bad, don't proceed to the next block; simply use an exception or control step to jump out of the block.

  • I am going to place a 30-minute DVD film on a website for one of my clients. Before converting the old VHS film to DVD, I must know what format will suit the needs best. HTTP, Flash, QuickTime or something else?

    I am going to place a 30-minute DVD film on a website for one of my clients. The film was shot 15 years ago and is in VHS format. Before I have the film converted into a DVD format, I want to know what format will suit my needs best: HTTP, Flash, QuickTime or something else? I want to make it easy for myself when placing the film on the site, and for the visitors who wish to watch it.

    The DVD format (MPEG-2) isn't appropriate for Web use.
    Use QuickTime with the H.264 video codec and AAC audio. Size the file at 320x240 (the best fit for your old footage).

  • ADF Faces & BC: Best practices for project layout

    Season greetings my fellow JDevelopers!
    Our software group has been working with ADF for around 5 years and through the years we have accumulated a good amount of knowledge working with JDeveloper and ADF. Much of our current application structure has been resurrected in the early days of JDeveloper 10 where there were more samples codes floating around then there were "best pratice" documentation. I understand this is a subjective topic and varies site to site, but I believe there is a set of common practices our group has started to identify as critical to streamlining a development process(reusable decorated ui components, modular common biz logic, team development with svn, continuous integration/build, etc..). One of our development goals is to minimize dependency between each engineer as everyone is responsible for both client and middle layer implementation without losing coding consistency. After speaking with a couple of the aces at the last openworld, I understand much of our anticipated architectural requirements are met with JDeveloper 11(with the introduction of templates, declarative components, bounded task flows, etc..) but due to time constraints on upcoming deliverables we are still about an year away before moving on with that new release. The following is a little bit about our group/application.
    JDeveloper version: 10.1.3.4
    Number of developers: 7
    Developer responsibilities: build both Faces & BC code
    We have two applications currently in our production environments.
    1. A flavor of Steve Muench's dynamic JDBC credentials login module
    2. Core ADF Faces & BC application
    In our Core ADF Faces application, we have the following structure:
    OurApplication
         -OurApplicationLib (Common framework files)
         -OurApplicationModel (BC project)
              -src/org/ourapp/module1
              -src/org/ourapp/module2
         -OurApplicationView (Faces project)
              public_html/ourapp/module1
              public_html/ourapp/module2
              src/org/ourapp/backing/module1
              src/org/ourapp/backing/module2
              src/org/ourapp/pageDefs/
    Total Number of Application Modules: 15 (Including one RootApplicationModule which references module specific AMs)
    Total Number View Objects: 171
    Total Number of Entities: 58
    Total Number of BC Files: 1734
    Total Number of JSPs: 246
    Total Number of pageDefs: 236
    Total Number of navigation cases in faces-config.xml: 127
    Total Number of application files: 4183
    Total application size: 180megs
    Are there any other ways to divide up this application? I.e., module-specific projects with separate faces-config files/data bindings? If so, how can these files be "hooked" together? A couple of the ACEs have recommended that we separate all the entity files into their own project, which makes sense. Also, we are looking into Maven builds, which should remove those pesky Model.jpr files that constantly get "touched". I would love to hear how other groups are organizing their applications, and anything else they would like to share as ADF best practices.
    Cheers,
    Wes

    After discussions over the summer/autumn by members of the ADF Methodology Group, I have published an ADF Coding Standards wiki page that people may find useful:
    [http://wiki.oracle.com/page/ADF+Coding+Standards]
    It's aimed at ADF 11g and is intended to be a living document - if you have comments or suggestions, please post them to the ADF Methodology Google group ( [http://groups.google.com/group/adf-methodology?hl=en] ).

  • Best practices for ODI interfaces

    I was wondering how everyone handles the errors that occur when running an interface with ODI.
    Our scenario:
    We have customer data that we want to load each night via ODI. The data is in a flat file, and a new file is provided each night.
    We have come across an issue where a numeric field contained non-numeric data, so ODI created a bad file with the bad record in it, and an error file with the error message. We also had some defined constraints that forced records into the E$ table.
    My question is how everyone handles looking for these errors. We would like them to be reported in just one place (an Oracle table), so when the process runs we can look at that one table and then act on the issues. As shown above, ODI puts errors in two different places: database errors in a flat file, and user-defined ones in the E$ tables.
    I was wondering if anyone has come across this issue and might be able to tell me what was done to handle the errors that occur, or what the best practices might be for handling them?
    Thanks for any assistance.
    Edited by: 832187 on Sep 29, 2011 1:18 PM

    If you have only a few fields affected by the conversion problem, you could try adding an ODI constraint, or you could modify the LKM to load the bad file if present.
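    If you want everything in one table, one pattern is a post-interface step that copies both error sources into a single audit table. A sketch only - LOAD_ERRORS, E$_CUSTOMER, ERR_TYPE, ERR_MESS and EXT_CUSTOMER_BAD are placeholders (the E$ layout varies by ODI version and KM, and the external table over the loader's .bad file is yours to define):

    -- user-defined constraint violations captured by ODI in the E$ table
    INSERT INTO load_errors (load_date, source_name, err_type, err_mess)
    SELECT SYSDATE, 'CUSTOMER_FILE', err_type, err_mess
      FROM e$_customer;

    -- records the loader rejected into the .bad file, exposed via an external table
    INSERT INTO load_errors (load_date, source_name, err_type, err_mess)
    SELECT SYSDATE, 'CUSTOMER_FILE', 'LOADER BAD FILE', rec_text
      FROM ext_customer_bad;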

  • Need best practice configuration document for ISU CCS

    I am working on an ISU CCS project. I need best practice configuration documents for:
    Contract Management
    Collections Management
    Invoicing
    Work Management as it relates to ERP Billing
    Thanks
    Priya
    priyapandey.sapcrmatgmailcom


  • Best practices for customizing with regard to performance

    Hello,
    I would like to know the list of best practices for customizing BPC NW 7.5 with regard to performance.
    Best regards
    Bastien

    Hi,
    There are a few how-to guides on SDN which will give you a basic idea of script logic. Apart from this, you can refer to the help guide on help.sap.com.
    The templates might also affect performance: the number of EVDRE functions, the number of expansion dimensions, and the number of members on which expansion takes place will all have an effect. Complex formatting in the template will, too.
    Hope this helps.

  • Best practices for GATP by SAP

    Hi all,
    I am not able to download the SAP best practices for GATP from http://help.sap.com/bp_scmv250/BBLibrary/HTML/ATP_EN_DE.htm. It seems the documents have been removed. Can someone who has already downloaded them share them with me?
    Also, can you provide working links to the best practices for SNP and PP/DS?
    Thankyou,
    Ram

    Hello Ram
    Please check this wiki page - it has good content and some useful links:
    APO-GATP General Information - Supply Chain Management (SCM) - SCN Wiki
    You can find out more about the RDS solution for GATP at http://service.sap.com/rds-gatp.
    If you search http://service.sap.com/bestpractices you will find documents about best practice in GATP. The help.sap.com pages for GATP are a good resource to start with as well.
    You can also read the blog below, written by me:
    Global Available To Promise (GATP) Overview
    Hope this will help
    Thank you
    Satish Waghmare

  • Need expert advice on implementing continuous delivery across environments

    Hi All,
    I need expert advice/a solution for implementing continuous delivery in our project. (See the screenshot.)
    Deployment goes like this:
    Dev checks in -> TFS builds -> RM invokes -> using PS/DSC -> copies the binaries to the DIT environment's IIS virtual dir -> manual replacement of Web.config files (since we have many environments) -> app runs
    PS/DSC does the following: copies binaries using xcopy, uploads SP Excel reports, executes DACPAC scripts, etc.
    The challenge faced earlier with the Web.config files was that for every DIT environment many configuration settings and connection strings would change. To overcome this, I implemented Web.config transformations (like Web.DIT1.config ... Web.DIT6.config in my solution explorer),
    added a custom MSBuild script to my .csproj file, and before triggering the build definition I'd set Process tab -> Basic -> Configuration = DIT1 (based on the requirement) and trigger it.
    It deploys perfectly to DIT1 without manually replacing Web.config files.
    1) Now the blocker for me: can we achieve this without Web.config transformations? Since I don't want to expose sensitive data in config files, is there any way of implementing the same?
    2) I want to have continuous deployments from DIT to SIT, and from SIT to UAT, etc. How can we achieve that?
    I am planning to implement it with Web.config tokenizing in RM.
    Looking for some expert advice on achieving this.
    Many thanks,
    Abraham Dhanyaraj

    Hi Abraham,
    Instead of having web config transformations to set config values for each environment, do the following.
    1. Have one transformation config, say web.release.config, which contains tokens understood by RM (starting and ending with double underscores) for each config value:
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <log4net>
        <appender name="FileAppender" type="log4net.Appender.FileAppender">
          <file value="__log4netLogFile__" xdt:Transform="Replace" />
        </appender>
      </log4net>
      <ClientsSection>
        <Clients xdt:Transform="Replace">
          <add Id="__ClientId1__" Secret="__SecretCode1__" RedirectUrl="__RedirectUrl1__"></add>
          <add Id="__ClientId2__" Secret="__SecretCode2__" RedirectUrl="__RedirectUrl2__"></add>
        </Clients>
      </ClientsSection>
      <appSettings xdt:Transform="Replace">
        <add key="MinCaps" value="2" />
        <add key="MinSymbols" value="1" />
        <add key="MinNumerics" value="1" />
        <add key="MinLength" value="6" />
        <add key="MaxLength" value="" />
        <add key="webpages:Version" value="3.0.0.0" />
        <add key="webpages:Enabled" value="false" />
        <add key="PreserveLoginUrl" value="true" />
        <add key="ClientValidationEnabled" value="true" />
        <add key="UnobtrusiveJavaScriptEnabled" value="true" />
        <add key="WebPortalName" value="__WebPortalName__" />
        <add key="TimeZoneFrom" value="Pacific Standard Time"/>
        <add key="TimeZoneTo" value="Mountain Standard Time"/>
      </appSettings>
    </configuration>
    The TFS build can generate output in the above config format via web config transformation.
    2. Then in the RM server templates, change the config files to have values depending on the environment. You can do this using a custom deployment component, or the default xcopy deployment component, to set the parameters.
    3. Then in the release template, set the config values.
    Cheers!
    Chaminda

  • VMware Data Recovery best practices

    Hi,
    I am looking for VMware Data Recovery best practices. I find the following link everywhere on the internet: http://viops.vmware.com/home/docs/DOC-1551
    But it is not a valid link, and I can't find the document anywhere...
    Thanks

