Oracle Best Practices in 10g When Disabling NUMA

We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

user10387007 wrote:
We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?
How are you using NUMA (Non-Uniform Memory Access)?
NUMA can be implemented at the CPU level - in which case CPU affinity becomes important. NUMA can also be used across an interconnect (e.g. the SCSI RDMA Protocol).
So it depends on what you mean by NUMA and how you are using it (and whether or not it is used by the Oracle s/w stack itself).
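
For what it's worth, here is a minimal sketch of the database-level switch on 10g, assuming the question is about Oracle's own NUMA optimization rather than the firmware setting. Note that "_enable_NUMA_optimization" is a hidden (underscore) parameter, so confirm with Oracle Support before changing it:

-- Check the current value of the hidden NUMA parameter (requires SYSDBA)
select i.ksppinm  as parameter,
       v.ksppstvl as value
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_enable_NUMA_optimization';

-- Disable Oracle's NUMA optimization; takes effect at the next instance restart
alter system set "_enable_NUMA_optimization" = false scope = spfile;

If NUMA is instead disabled at the hardware/BIOS level, the parameter has nothing left to optimize; either way, benchmark before and after, since 10g NUMA behaviour varied considerably by platform.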

Similar Messages

  • What oracle best practices in mapping budgeting to be implement at item

    Dear Consultants,
    I really need your valued consultancy.
    What is the Oracle best practice for mapping budgeting at the item category level or item level?
    I want to check funds against the encumbrance account according to item level.
    Case:
    I have three item categories:
    One is computer items
    Two is printer items
    Third is food items
    I want to implement my budget at the item category level.
    Example:
    I want my purchase budget for items of type printer not to exceed 30,000 USD
    And for items of type food not to exceed 45,000 USD
    How do I map this in Oracle Applications?
    The modules implemented at my site are
    (GL, AP, AR, INV, PURCHASING, OM)
    Please give me the Oracle best practice that handles this case.
    Thanks to all of you

    Hi,
    It is really difficult to have Budgetary Control on Inventory Items in an Average Costing environment, as you can have only one Inventory Account at the Inventory Organization level.
    You have to modify your PO / Requisition Account Generator to populate the Encumbrance Account in the PO / Requisition based upon item category. Moreover, the "Reverse Encumbrance" flag in your Inventory Org needs to be unchecked so that the encumbrances are not reversed when the goods are received.
    Gajendra
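
    The Account Generator itself is an Oracle Workflow customization, but the derivation it calls usually reduces to a lookup by item category. A minimal hypothetical sketch of such a lookup (xx_cat_encumbrance_map and xx_get_encumbrance_ccid are illustrative names, not seeded EBS objects):

    -- Hypothetical mapping of item category to encumbrance account
    create table xx_cat_encumbrance_map (
      category_id      number primary key,  -- e.g. mtl_categories.category_id
      encumbrance_ccid number not null      -- e.g. gl_code_combinations.code_combination_id
    );

    -- Function a customized Account Generator workflow could call
    create or replace function xx_get_encumbrance_ccid (p_category_id in number)
      return number
    is
      l_ccid number;
    begin
      select encumbrance_ccid
      into   l_ccid
      from   xx_cat_encumbrance_map
      where  category_id = p_category_id;
      return l_ccid;
    end xx_get_encumbrance_ccid;
    /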

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to ‘trick’ the secondary site into thinking
    it is primary and continuing to work with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository, when this DB gets ‘data guarded’ to the secondary site and promoted to Primary, ODI will still think it is at the ‘other’ site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    I'm currently testing external components with Windows Server and I want to test Oracle 11g R2.
    The only resource I have is this website and the only binaries seem to be for Linux OS.
    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle. That is the complete and official documentation, found at tahiti.oracle.com
    >
    Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

  • Best practice Forms 10g configuration setup and tuning

    Hi,
    We are currently deploying Forms 10g, migrating from the 6i client/server version. Users are experiencing form hangups and hourglasses. This does not happen that often but can happen at any time, anywhere in the app (users do inserts, updates, deletes and queries).
    Is there a baseline best practice configuration setup anywhere either in the Forms side or the AppServer side of things?
    Here is our setup:
    Forms 10g (9.0.4)
    Reports 10g (9.0.4)
    Oracle AppServer 10g (9.0.4)
    OS = RedHat Linux
    Client Workstations run on Windows 2000 and XP w/ Internet Explorer 6 or higher
    Average No. of users = 250
    Thanks for all your help

    Shut down applications within the guest.
    Either power off from Oracle VM Manager or 'xm shutdown xxx' from the command line.
    It is possible one or more files could be open when the shutdown is initiated.
    I have found at least one case of a misconfigured IP which would have resulted in the disk access being via the 'Front End' interface rather than the Back End.
    Thanks

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are close to about 160-180 entities in the database and we want to know the best approach/practice in deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impacts on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a non-RAC environment. Code used is given below:
    -- ORDERed sequence: values are handed out in strict order
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    -- Test 1: sequence-based transaction IDs
    begin
      for i in 1..1000000
      loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    -- Test 2: timestamp-based transaction IDs
    begin
      for i in 1..1000000
      loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...

    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
    Regards
    Jonathan Lewis
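
    To make the trade-off above concrete, here is a hedged sketch (illustrative only, and it deliberately gives up the strict "IDs must appear in sequence" requirement, which is exactly the expensive part): a large-cache NOORDER sequence to cut sequence contention, plus an instance-prefixed key to spread index insertions across RAC nodes. The transactions table name is an assumption:

    -- Large cache, NOORDER: each RAC instance draws from its own cached range,
    -- so instances stop negotiating over every nextval (unique, but not gap-free or ordered)
    create sequence txn_seq start with 1000000000 cache 10000 noorder;

    -- Prefix the key with the instance number so each instance inserts
    -- into a different region of the index, easing right-hand block contention
    insert into transactions (txnid, txndate)
    values (to_number(sys_context('userenv','instance')) * 1e12 + txn_seq.nextval,
            systimestamp);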

  • Any Oracle best practice/standards for inter-DataCente links for Oracle RAC

    Hello Oracle Experts,
    I am working for a customer to set up an Oracle RAC architecture hosting SAP/non-SAP applications per SLA level (MC/BC/Standard) specs. Currently my network team needs a calculation to determine whether we will go with one, two or three 10Gig links between DCs (data centers) for Oracle RAC. Below is additional background:
    •     Porting all client SAP/Non-SAP Oracle databases to new 2 data-centers.
    •     There will be 10 blades (4x BL680s and 6x BL460s) in each DC (can scale-up/out later on).
    •     Clusters architecture to support Extended/Stretched RAC cluster feature
    •     Clusters 2-node each(1-datacenter1, 1-datacenter2) and nodes distributed across 2 x c7000 such that no cluster has more than one node in an enclosure.
    •     Each node will have - 4 NIC ports ( 2 x public and 2 x private) , 2 dual-port HBA
    •     Oracle ASM/ACFS (ASM Cluster File System), Voting Disk, OCR and Database files
    •     the versions are Oracle 11g RAC, Oracle 10g RAC and Oracle 9i (for DataGuard/Standby) on RHEL 6 on Proliant Blades (x86) + BladeMatrix
    My network colleagues are considering using DWDM across the 2 DCs (given the lower cost?). I am still looking around for any Oracle/industry best practices on this, and for a calculation to support the decision.
    Many Thanks in advance..
    Regards,
    Abhijit

    Hi,
    There is no specific set of steps / practices for batch loading content to UCM. It would depend very much on how much content the user has to load to UCM and how well the server is configured in terms of performance.
    You can get more details from the following documentation link : http://docs.oracle.com/cd/E21043_01/doc.1111/e10792/c02_settings009.htm
    Thanks,
    Srinath

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    The Application Team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    The Application Team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table?
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
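
    A minimal sketch of the difference (illustrative table name; assumes a multibyte database character set such as AL32UTF8, where 'é' occupies two bytes):

    create table t_semantics (
      name_b varchar2(50 byte),  -- room for 50 bytes, i.e. as few as 12 four-byte characters
      name_c varchar2(50 char)   -- room for 50 characters, up to the 4000-byte column limit
    );

    -- 50 two-byte characters = 100 bytes
    insert into t_semantics (name_c) values (rpad('é', 50, 'é'));  -- succeeds: 50 characters
    insert into t_semantics (name_b) values (rpad('é', 50, 'é'));  -- fails: ORA-12899, 100 bytes > 50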

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Please clarify!
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state Oracle recommended Best Practice; they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Um. OK.
    Please clarify!
    Of whom are you asking for a clarification?
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state Oracle recommended Best Practice, they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    Who is the "you" to which you refer?
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Be our guest.
    Do we have any Best Practice document about PSU patches available for customers?
    What do you mean "we", Kemosabe?
    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal posting on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Oracle Best Practices / Guidelines regarding Cleaning TEMP files

    Hi folks,
    Can anyone help me with a set of steps / guidelines or best practices to clean TEMP files from OBIEE servers (our PROD environment)?
    Does OBIEE perhaps take care of this for you automatically? How does that process happen?
    Thanks a lot for your time and attention and hope to hear from you soon.

    TEMP files are deleted from the server once a user logs out. But there might be cases where a TEMP file does not get deleted automatically, e.g. when the user logs out of the system before the TEMP file has been generated completely. In that case the temp files remain on the server, and a bounce of the services cleans them up.
    The best practice would be to create a script to empty out the temp directory during startup of the services.
    -Amith.

  • Best practice Forms 10g

    Hello,
    Where can I find the best practices to follow after migrating Forms 6i to Forms 10g?
    thanks in advance

    Hello,
    You could start with this great white paper available on the OTN Forms page.
    Francois

  • Best practice for Smartview when upgrading from Excel 2003 to Excel 2007?

    Does anyone know the best practice for Smartview when upgrading from Excel 2003 to Excel 2007?
    Current users have Microsoft Excel 2003 with Smartview 9.3.1.2.1.003.
    Computers are being upgraded to Microsoft Excel 2007.
    What is the best practice for Smartview in this situation?
    1. Do nothing with Smartview and just install Excel 2007.
    2. Install Excel 2007 and then uninstall and reinstall Smartview.
    3. Uninstall Smartview, install Excel 2007, and then install Smartview.
    4. Something else?
    Thanks!

    We went with option 1 and it worked out fine. Be aware that SV processes noticeably slower in Excel 2007 than 2003. Many users were/are unhappy with the switch. We haven't tested SV v11 yet, so I'm not sure if it has improved performance with Excel 2007 or not (hopefully it does).

  • Best Practice for DHCP when Anchoring to a Guest Wireless LAN Controller

    Hi all,
    I'm interested in the community's opinion in relation to DHCP provisioning when using auto-anchor/guest tunneling.
    As far as I can tell, one cannot use the internal DHCP server on the anchor controller when using auto-anchor, due to incompatibility between the auto-anchor feature and DHCP Option 82.
    The scenario is as follows:
    Guest controller is the anchor which provides Internet access to guests.
    There is a foreign controller which is configured to anchor to the guest controller.
    The internal DHCP server is configured on the guest anchor controller, therefore DHCP proxy must be enabled for DHCP to work.
    DHCP proxy enables Option 82.
    The guidelines for guest tunneling state that DHCP Option 82 isn't supported. (Ref: Deploying and Troubleshooting Cisco Wireless LAN Controllers, Ch. 14)
    So, the internal DHCP server requires DHCP proxy to be enabled; this in turn enables Option 82, which stops DHCP leases being made to clients connected to the foreign controller.
    Given that a guest WLC would normally be placed in a DMZ, the internal DHCP server may often be the only DHCP solution available.
    I look forward to hearing your opinions.
    Thanks
    Rhodri Jenkins

    There are a couple of options here if you need to get proxy disabled:
    1) pinhole with an ACL that allows dhcp to pass to your internal servers
    2) run dhcp on a switch, router, or firewall in the dmz
    3) if you are using a cable modem or dsl for the guest users, you can let that do the dhcp
    In general I've seen most of these in play, but I like option 2 myself.
    Sent from Cisco Technical Support iPad App

  • Oracle best practice metrics

    Aside from security hardening and ensuring there is an effective backup/restore process in operation, what other issues should a network admin want assurances on for a healthy oracle database? What other metrics or risk areas would be of concern to IT/network management?

    Osama_mustafa wrote:
    You could start with a small test: backup and restore a few files to see if the system itself works. Check the content of your backup archive; you also need a different approach to test the tape drive and make sure it is valid.
    Sorry, maybe a bit of confusion. I basically meant: if you take security and backup/restore out of the equation, what other metrics would DBAs look at when trying to determine a healthy, well configured/managed database server?

  • Oracle Best Practices Discussion Pros/Cons of using Synonyms

    Please share your experience, given the pros/cons of developing enterprise database applications using public and private synonyms.
    My recommendation to developers on my team is to avoid using public synonyms in their code and instead fully qualify the database object by schema owner.
    One consideration: when you drop a schema, you do not drop the public synonyms that it created. Therefore, if you have to use a synonym, make it private and not public.
    Please share your experience!

    Fahd Mirza wrote:
    Well, I rarely use public synonyms, and then only in the case of db links. For example, I have a scenario in which I have hooked up an MS SQL Server database with an Oracle database through Heterogeneous Services. I am accessing the SQL Server table in real time, through HS, in Oracle, and then from this Oracle environment I have created db links to many other interested databases. In those interested databases, I have created public synonyms over those db links.
    It's so transparent for the interested databases. But I have documented this whole configuration in great detail for any upcoming DBA, just in case I leave, or expire, or anything.
    Sounds interesting - this might be worth 'cleaning' and publishing to OTN's Articles. If you are interested in pursuing this, you might want to contact Justin (Community Forum) or myself ([email protected])
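
    For reference, the flavors side by side (illustrative object names; a private synonym lives in one schema, a public one resolves for every user):

    -- Private synonym: only the owning schema resolves ORDERS this way
    create synonym orders for sales_owner.orders;

    -- Private synonym over a db link, as in the HS scenario above
    create synonym remote_orders for orders@sqlserver_link;

    -- Public synonym: visible to all users, and NOT dropped when the creating schema is dropped
    create public synonym orders for sales_owner.orders;

    -- Fully qualifying instead, as recommended above, needs no synonym at all
    select count(*) from sales_owner.orders;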

  • BOBJ on Oracle Best Practices - Schema vs New DB

    Hi all,
    My company is installing Business Objects 4.1 as part of an overall (new) SAP project. I am the Oracle DBA for the project and new to the SAP/Business Objects world.
    My question is: how do most people separate the different pieces of the overall Business Objects install at the database level? I was thinking we would have one database with different schemas for Audit, CMS, BODS and Information Steward. I've heard from our integrator that some companies will have a separate database for each.
    Not knowing the system that well, adding separate databases seems unnecessary. While I would have some additional hoops to jump through if a restore was needed for just one of the schemas, it seems better than adding the additional DBs.
    Hopefully this makes sense.
    Thanks.

    Hi Michael,
    I agree with Mani's suggestion as the recommended approach; however, it will vary from org to org.
    If you do continue with one database, then you would have to scale the Oracle DB accordingly, considering the transactional nature of each service highlighted (CMS, AUDIT, BODS).
    Since these are transactional, you would need to customize the database at setup time (DB buffer cache, undo segments, shared pool).
    I'm assuming you would be using 11g R2 on the Linux operating system (mostly preferred). Hence: how is the Oracle DB deployed (single instance or RAC) on which you would create the DB schemas for the above services?
    In either case you should set up the db users for BODS and CMS to connect to the Oracle database in "Dedicated server connection" mode and not in "Shared server connection" mode; that way there is no added latency, provided the server platforms are in the same network or on the same server. Audit could connect in "Shared server connection" mode.
    In an Oracle RAC deployment I don't foresee any issues, as your transactions could benefit from the "Streams" feature in case of an Oracle instance failure.
    Starting with Oracle 11g you may also benefit from automatic memory management, implemented using:
    • the PGA_AGGREGATE_TARGET initialization parameter for the PGA
    • the MEMORY_TARGET initialization parameter for the combined total memory of the SGA & PGA
    (see the sketch just below). I could go on and on. Hope I was able to shed some light here. Please refer to Performance Optimization in BODS for more information regarding BODS that might also assist you.
    Regards,
    Sandeep
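
    A minimal sketch of the two memory approaches mentioned above (the sizes are placeholders to adjust for your own workload; pick one approach rather than combining both):

    -- Automatic PGA memory management only (SGA sized separately)
    alter system set pga_aggregate_target = 2G scope = spfile;

    -- Or, 11g automatic memory management: one target covering SGA + PGA
    alter system set memory_max_target = 8G scope = spfile;
    alter system set memory_target     = 8G scope = spfile;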
