Any Oracle best practices/standards for inter-data-center links for Oracle RAC

Hello Oracle Experts,
I am working with a customer to set up an Oracle RAC architecture hosting SAP/non-SAP applications at different SLA levels (MC/BC/Standard). My network team needs a calculation to decide whether we should provision one, two, or three 10Gig links between the data centers (DCs) for Oracle RAC. Some additional background:
•     All client SAP/non-SAP Oracle databases are being ported to two new data centers.
•     There will be 10 blades (4x BL680s and 6x BL460s) in each DC (with room to scale up/out later).
•     The cluster architecture must support the extended/stretched RAC feature.
•     Clusters are 2-node each (one node in data center 1, one in data center 2), with nodes distributed across 2 x c7000 enclosures such that no cluster has more than one node in an enclosure.
•     Each node will have 4 NIC ports (2 public and 2 private) and 2 dual-port HBAs.
•     Oracle ASM/ACFS (ASM Cluster File System) will hold the voting disks, OCR, and database files.
•     The versions are Oracle 11g RAC, Oracle 10g RAC, and Oracle 9i (for Data Guard/standby) on RHEL 6 on ProLiant blades (x86) plus BladeMatrix.
My network colleagues are considering DWDM across the two DCs (given the lower cost?). I am still looking for any Oracle/industry best practices around this, and for a calculation to support the choice.
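For what it's worth, here is how I am thinking of grounding the calculation: measure the Cache Fusion traffic the existing databases already generate (a rough sketch; the statistic names are the standard 10g/11g v$sysstat entries, and the result is an average since instance startup, so peaks need separate treatment):
-- approximate average interconnect traffic (MB/s) since instance startup;
-- run on each existing RAC instance and size links against peaks, not this average
select round(sum(s.value) * to_number(max(p.value))
             / (86400 * (sysdate - max(i.startup_time)))
             / 1024 / 1024, 2) as est_interconnect_mb_per_sec
from   v$sysstat s, v$parameter p, v$instance i
where  s.name in ('gc cr blocks received', 'gc current blocks received',
                  'gc cr blocks served', 'gc current blocks served')
and    p.name = 'db_block_size';
On top of Cache Fusion, the inter-DC links in a stretched cluster also carry the ASM failure-group mirroring writes and any Data Guard redo shipping, so those volumes would have to be added before deciding between one, two, or three links.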
Many Thanks in advance..
Regards,
Abhijit

Hi,
There is no specific set of steps/practices for batch loading content into UCM. It depends largely on how much content the user has to load into UCM and how well the server is configured in terms of performance.
You can get more details from the following documentation link : http://docs.oracle.com/cd/E21043_01/doc.1111/e10792/c02_settings009.htm
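For illustration, one record in a batch load file has this general shape (the metadata values below are hypothetical placeholders; <<EOD>> terminates each record):
Action=insert
dDocName=SAMPLE_DOC_001
dDocType=Document
dDocTitle=Sample batch-loaded document
dDocAuthor=sysadmin
dSecurityGroup=Public
primaryFile=/data/batch/sample001.doc
<<EOD>>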
Thanks,
Srinath

Similar Messages

  • What Oracle best practices in mapping budgeting to be implemented at item

    Dear consultants,
    I really need your valued consultancy.
    What are the Oracle best practices for mapping budgeting to be implemented at the item category level or item level?
    I want to check funds against the encumbrance account at the item level.
    Case:
    I have three item categories:
    One is computer items
    Two is printer items
    Third is food items
    I want to implement my budget at the item category level.
    Example:
    I want my purchase budget for items of the printer type not to exceed 30,000 USD,
    and for items of the food type not to exceed 45,000 USD.
    How do I map this in Oracle Applications?
    The modules implemented on my site are
    (GL, AP, AR, INV, PURCHASING, OM).
    Please give me the Oracle best practice that handles this case.
    Thanks to all of you

    Hi,
    It is really difficult to have Budgetary Control on Inventory Items in an Average Costing environment, as you can have only one Inventory Account at the Inventory Organization level.
    You have to modify your PO / Requisition Account Generator to populate the Encumbrance Account in the PO / Requisition based upon item category. Moreover, the "Reverse Encumbrance" flag in your Inventory Org needs to be unchecked so that the encumbrances are not reversed when the goods are received.
    Gajendra

  • Best practice standard User Access Test for WIN2012 AD

    What is the best practice standard User Access Test for WIN2012 AD?

    Hello,
    as before, add a computer to the domain and log on to the computer with a domain user account.
    From the client machine you should be able to open the shared folders on the DCs either with:
    \\DCName\sysvol
    \\DCName\netlogon
    or with:
    \\NetBiosDomainName\sysvol
    \\NetBiosDomainName\netlogon
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to 'trick' the secondary site into thinking
    it is primary so that it continues working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository... when this DB gets 'data guarded' to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    I'm currently testing external components with Windows Server and I want to test Oracle 11g R2.
    The only resource I have is this website and the only binaries seem to be for Linux OS.
    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle: the complete and official documentation, found at tahiti.oracle.com
    >
    Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Please clarify!
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state Oracle recommended Best Practice; they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    > A customer complained about the following
    > Your company statements are not clear...
    > On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    > Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Um. OK.
    > Please clarify!
    Of whom are you asking for a clarification?
    > Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you
    Who is the "you" to which you refer?
    > are giving me do not state Oracle recommended Best Practice; they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    > I need to close the process out to capture a window of availability for Practices and Standards approval.
    Be our guest.
    > Do we
    What do you mean "we", Kemosabi?
    > have any Best Practice document about PSU patches available for customers?
    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are about 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impact on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    -- sequence-based transaction IDs
    begin
      for i in 1..1000000
      loop
        insert into test1 values(testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    -- timestamp-based transaction IDs, cast to a number
    begin
      for i in 1..1000000
      loop
        insert into test2 values(to_number(to_char(systimestamp,'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...
    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
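    If the strict-ordering requirement can be relaxed, the usual starting point is a large-cache NOORDER sequence (a minimal sketch, assuming gaps and loose cross-instance ordering are acceptable - which the OP's stated requirement may rule out):
    -- each instance caches its own range of values, so NEXTVAL calls do not
    -- serialize across the interconnect; values are unique but gappy and
    -- only loosely ordered across instances
    create sequence txn_seq start with 1000000000 cache 100000 noorder;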
    Regards
    Jonathan Lewis

  • Any best practice/suggestion on giving Id's for UI Component

    Hi,
    I have learned that, for better performance, ids on naming containers should be less than 7 characters in length.
    What about UI components other than container components?
    Is there any best practice for choosing ids for UI components and their length?
    Do we face any issue if we give ids longer than 7 characters (just to make the id meaningful)?
    Thanks in Advance
    Raguraman

    a quotation from the
    Oracle® Fusion Middleware Performance and Tuning Guide
    11g Release 1 (11.1.1)
    E10108-02
    >
    The "id" attribute should not be longer than 7 characters in length. This is
    particularly important for naming containers. A long id can impact
    performance as the amount of HTML that must be sent down to the
    client is impacted by the length of the ids.

  • Oracle Best practices for changing Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters in a couple of production tables.
    We want to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    The application team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters in a couple of production tables.
    We want to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it is good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke, why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
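    If you do decide to change the semantics, the change itself is a simple in-place DDL (a sketch with hypothetical table/column names; existing rows are not rewritten, only the declared limit changes):
    -- switch a 50-byte column to 50-character semantics
    alter table customers modify (cust_name varchar2(50 char));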

  • Best Practice/Standard for Securing and Attaching Files in a Web Service

    Thanks in advance.
    My team and I are new to Web Services. I would like to know the best practice for transporting files via a Web Service. I know of several methods, and one seems to be the standard, but you can't really tell in this ever-changing world of Web Services. Below are the options that I have found.
    1. MIME encoded the file and embed in the payload of the SOAP message
    2. SwA (SOAP with Attachments), which applies MIME attachments to SOAP. I think this is similar to the way emails are handled.
    3. DIME (Direct Internet Message Encapsulation), similar to MIME encoding but more efficient
    4. MTOM (Message Transmission Optimization Mechanism). I don't really understand this method, but it seems to be the NEW standard. I just don't understand why.
    5. Utilize HTTPS and download the file from an accessible file server w/ a login id and password.
    Is there someone out there that understands this problem and can assist me in understanding the pros and cons of these methods? Or maybe there is a method that I'm overlooking altogether.
    Thanks

    JWSDP supports securing of attachments [1] and will soon support securing MTOM attachments too.
    [1] http://java.sun.com/webservices/docs/2.0/xws-security/ReleaseNotes.html

  • Best Practice / Standards to design a Scorecard in PL-SQL

    I am attempting to put together a scorecard for my client, who wants to track a set of employees with different job roles and titles. Basically, he identified several categories, and criteria under those categories, by which he wants to score them using set weightages.
    Example
    Category 1: Productivity - Category Weightage 80%
    Criteria:
    WorkTypeA Weightage 25%
    WorkTypeB Weightage 25%
    WorkTypeB Weightage 25%
    WorkTypeB Weightage 25%
    Category 2: Quality - Category Weightage 20%
    Criteria:
    Quality Type 1 - Weightage 50%
    Quality Type 2 - Weightage 50%
    He wants to rank each employee and rate them on a scale of 1 to 5, 1 being best. He also wants to rank by role and title as well.
    My question... is there a standard methodology/template for designing this in PL/SQL? I can put something together, but I have a feeling there should be an existing best practice for this kind of scorecard design. Any ideas?
    Buzzer

    Hi
    Thanks for the response. I don't have any model yet. All I have at this time is a set of main categories, criteria under each category, and weightages at the category level and criteria level. The criteria they want to weigh on are located in the datamart.
    I guess I could build a pl-sql packaged routine using the Analytic functions and start building snapshot tables holding the score.
    Is this how scorecard work is done?
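    Roughly what I have in mind is a single weighted-score query (a sketch against hypothetical datamart tables emp_scores(emp_id, criteria_id, raw_score) and score_weights(criteria_id, category_weight, criteria_weight), with weights stored as fractions):
    select emp_id,
           sum(raw_score * category_weight * criteria_weight) as weighted_score,
           rank() over
             (order by sum(raw_score * category_weight * criteria_weight) desc) as overall_rank
    from   emp_scores
           join score_weights using (criteria_id)
    group by emp_id;
    A snapshot table could then be loaded from this query on a schedule, with the 1-to-5 rating derived by bucketing the rank (e.g. with ntile(5)).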

  • Best practice to implement different Xcelsius dashboard for different users

    I'm implementing an Xcelsius dashboard that needs to show each individual user different content (e.g. when a user logs in, the dashboard shows her name and job title, her performance, and her subordinates' performance). I'm just wondering what the best practice is to implement a scenario like this? Thanks.

    Hi Thomas
    What you are looking at is "Row Level Security" within BusinessObjects, and the options you have are determined by what type of data you are reporting off of (relational data, OLAP data, BW data, etc.)
    For instance, if you are using relational data with a Universe, you could set up a database table mapping the BusinessObjects username to an e-mail address or other unique identifier. From there, you could add security to your universe using @Variable('BOUSER').
    That way, any objects created off of the universe (whether it is a Crystal Report, Web Intelligence, BI Web Service, QaaWS, LiveOffice, etc.) will filter the data based on this security model. So any Xcelsius dashboard based on this underlying data will also be filtered.
    And that is just one of the options you have, depending on your data source.

  • Oracle Best Practices in 10g When Disabling NUMA

    We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

    user10387007 wrote:
    > We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?
    How are you using NUMA (Non-Uniform Memory Access)?
    NUMA can be implemented at CPU level - in which case CPU affinity becomes important. NUMA can be used across an interconnect (e.g. SCSI over RDMA protocol).
    So it depends on what you mean by NUMA and how you are using it (and whether or not it is used by the Oracle s/w stack itself).

  • Oracle Best Practices / Guidelines regarding Cleaning TEMP files

    Hi folks,
    Can anyone help me with a set of steps/guidelines or best practices for cleaning TEMP files from OBIEE servers (our PROD environment)?
    Does OBIEE perhaps take care of this automatically, and how does that process happen?
    Thanks a lot for your time and attention and hope to hear from you soon.

    TEMP files are deleted from the server once a user logs out. But there can be cases where a TEMP file does not get deleted automatically, e.g. when the user logs out of the system before the TEMP file has been generated completely. In that case the temp files remain on the server, and a bounce of the services cleans them up.
    The best practice would be to create a script that empties the temp directory during startup of the services.
    -Amith.

  • Best Practices to update Cascading Picklist mapping for Account record type

    1. Most of the existing picklist value names in the parent and related picklists have been modified in the external app master list, so the same needs to be updated in CRMOD.
    2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
    3. Are there any best practices to avoid doing manual cascading picklist mapping for the Account record type? We have around 500 picklist values to be mapped with parent and related picklists.
    Thanks!

    Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists.

  • Is it best practice to use a dedicated link between NX7K pair for keep-alive?

    I have been using a dedicated physical 10G link between a pair of NX7Ks for keep-alive. Is it a best practice? It seems a waste of 10G ports, because keep-alives do not need that much bandwidth.
    I'm thinking of just configuring a dedicated VLAN interface in the default VRF and having it routed through other devices (for example 6509 core switches) for keep-alive. Has anyone done that before? The goal is not to use dedicated 10G ports for keep-alive.
    Thanks a lot.

    gwhuang5398 wrote:
    > I have been using a dedicated physical 10G link between a pair of NX7Ks for keep-alive. Is it a best practice? It seems a waste of 10G ports, because keep-alives do not need that much bandwidth. I'm thinking of just configuring a dedicated VLAN interface in the default VRF and having it routed through other devices (for example 6509 core switches) for keep-alive. Has anyone done that before? The goal is not to use dedicated 10G ports for keep-alive. Thanks a lot.
    I agree that using a Tengig port for this purpose is not efficient. The whole idea of peer-keepalive is to have a way to avoid split brain scenarios in case of peer-link failure. For this reason the peer-keepalives should never cross the peer-link as it defeats the original purpose I just cited. Following are some viable options:
    You can use the N7K management ports for peer-keepalive functionality. If you have redundant supervisors then make sure you use an external switch (OOB switch for example) to patch the management ports.
    You can use dedicated interfaces just like you are doing right now but that would make more sense if you had Gig ports in my opinion.
    You can run peer-keepalives inband but you need to make sure this traffic is never routed over the peer-link. It is considered a good practice to use a dedicated VRF for this purpose.
    Atif
