Oracle Best Practices Discussion: Pros/Cons of Using Synonyms

Please share your experience with the pros and cons of developing enterprise database applications using public and private synonyms.
My recommendation to the developers on my team is to avoid using public synonyms in their code and to instead fully qualify each database object by schema owner.
Pro: when you drop a schema, the public synonyms it created are not dropped with it, so they are left behind as orphans. Therefore, if you have to use a synonym, make it private and not public.
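As a minimal sketch of this recommendation (the schema, table, and synonym names are hypothetical):

-- Preferred: fully qualify the object by its schema owner.
select * from app_owner.emp;

-- If a synonym is unavoidable, create it as a private synonym in the
-- consuming schema rather than as a public one:
create synonym emp for app_owner.emp;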
Please share your experience!

Fahd Mirza wrote:
Well, I rarely use public synonyms, and then only in the case of db links. For example, I have a scenario in which I have hooked up an MS SQL Server database to an Oracle database through Heterogeneous Services. I access the SQL Server table in real time, through HS, in Oracle, and from this Oracle environment I have created db links to many other interested databases. In those interested databases, I have created public synonyms over those db links.
It's completely transparent to the interested databases. But I have documented this whole configuration in great detail for any upcoming DBA, just in case I leave, or expire, or anything.

Sounds interesting - this might be worth 'cleaning up' and publishing to OTN's Articles. If you are interested in pursuing this, you might want to contact Justin (Community Forum) or myself ([email protected])
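For what it's worth, a hedged sketch of the synonym-over-db-link step on one of the "interested" databases (the link name, credentials, TNS alias, and table name are all hypothetical):

-- Link from an interested database to the central Oracle database that
-- fronts the SQL Server table via Heterogeneous Services.
create database link central_ora
  connect to app_user identified by app_password
  using 'CENTRAL_TNS';

-- Public synonym so local code can query the remote table transparently.
create public synonym sqlserver_orders for orders@central_ora;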

Similar Messages

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to 'trick' the secondary site into thinking
    it is primary and continue working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter, and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository, when this DB gets 'data guarded' to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    I'm currently testing external components with Windows Server, and I want to test Oracle 11g R2.
    The only resource I have is this website, and the only binaries seem to be for Linux OS. Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle: the complete and official documentation, found at tahiti.oracle.com

  • Pros & Cons of Using SAP PI Interfaces for Report Generation

    Hi Guru's
    I have a scenario:
    I have to generate a customized report in SAP, with the main data available in SAP ECC and some required data available in the legacy system.
    I want to know the pros and cons of using a SAP PI RFC/Proxy adapter interface to get the data from the legacy system each time the user executes the report in SAP ECC.
    Thanks in Advance

    There are a couple of "dimensions" to consider in your PI interface design. For example, when you are running the sizing exercise (since we are considering adding a net-new interface), you will need to capture specific information about the new interface, i.e. S/A, adapters, frequency, average payload size, etc. Note that the last two attributes will be hard to size properly in this case, since you can't predict how frequently the end user will run the report, which will impact the latency required to pull the data. Latency will affect the user experience as a visible side effect, and definitely the SLA for other interfaces running at the same time.
    On the other hand, the data you are trying to retrieve from the legacy system won't be used for transactional purposes but for the end user to pull KPIs from the system, which can affect ECC as well. You may end up doing a lot of hot fixes for your report (assuming that the report is medium-complexity code).
    There are other factors to consider, but let's consider these the major ones.
    Cheers,
    F

  • What Oracle best practices in mapping budgeting to be implemented at item

    Dear consultants,
    I really need your valuable consultancy.
    What are the Oracle best practices for mapping budgeting, to be implemented at the item category level or the item level?
    I want to check funds against the encumbrance account at the item level.
    Case:
    I have three item categories:
    One is computer items
    Two is printer items
    Third is food items
    I want to implement my budget at the item category level.
    Example:
    I want my purchase budget for items of the printer type not to exceed 30,000 USD
    And for items of the food type not to exceed 45,000 USD
    How do I map this in Oracle Applications?
    The modules implemented at my site are
    (GL, AP, AR, INV, PURCHASING, OM)
    Please give me the Oracle best practice that handles this case.
    Thanks to all of you

    Hi,
    It is really difficult to have budgetary control on inventory items in an Average Costing environment, as you can have only one inventory account at the inventory organization level.
    You have to modify your PO / Requisition Account Generator to populate the encumbrance account in the PO / Requisition based upon item category. Moreover, the "Reverse Encumbrance" flag on your inventory org needs to be unchecked so that the encumbrances are not reversed when the goods are received.
    Gajendra

  • What are pros/cons of using xmarks vs. firefox sync?

    Used Firefox for a long while. Bookmarks morphed to Xmarks, which in turn has been acquired twice. I continue to use the newer Xmarks. In the process of setting up a new computer, I found that Firefox has its own bookmarks sync.
    Like to keep things as simple as possible. Does Firefox sync do all that Xmarks does? What are pros/cons of using xmarks vs. firefox sync?

    I am not in a business environment, just my home, so I don't know your specific requirements. I operate a dual G5 Xserve, a dual quad-core Mac Pro, 6+ Apple Mac OS X computers of varying types, and several other Linux servers plus a couple of Windows machines. If it were me, I would get two Mac Pros running Mac OS X Server and use one for backup of the other. The Xserve RAID is too expensive from my point of view. I copy data from the Xserve to several Mac OS X computers with FireWire drives for my backup, and I have every computer on a UPS. (I am assuming that they will not all fail at once!)
    I run Tiger servers and would suggest that you significantly test Leopard before committing to it.
    Hope that helps.

  • Is Adobe Connect part of Adobe Creative Cloud? Are there any best practices ideas from people who use Connect and Creative Cloud?

    Is Adobe Connect part of Adobe Creative Cloud? Are there any best practices ideas from people who use Connect and Creative Cloud?
    I have an Adobe Connect account, and I'm also in the early stages of developing a webinar. I am looking for any tips and advice from anyone who uses both of these services.

    The £27 price was an introductory offer. Upon the completion of one year, the price will change to the normal Creative Cloud cost, which is £46.88. However, if you have one of the previous versions of the Creative Suite, such as CS3, 4, 5, 5.5, or CS6, you can avail of the offer at £27.34 per month incl. VAT. Note that this requires an annual commitment, billed monthly.

  • TKL MDS 9000 Best Practices Discussion

    TKL MDS 9000 Configuration best practice discussion forum
    This is a discussion forum for TKL and other customers who would like to ask questions about the best practices followed for configuring the MDS 9000 platform as a core in data center storage networks.
    The experts Joe Kastura and Venkat Kris, Technical Leads from the Advanced Services team, will be answering your questions.
    (Please post your questions to the chat session.)

  • Pros & Cons of using treasury management

    Hi,
    I am trying to implement treasury management. Could someone tell me the pros and cons of using treasury management?
    thanks,
    Liang

    Hi,
    Based on the business requirements of the client, you can decide upon using the Treasury module. If your client is in the banking or investment field, then you can certainly go ahead with implementing Treasury, as the daily treasury process within a company comprises a wide range of transactions: from determining current liquidity based on bank account balances (cash position) and open receivables and payables (liquidity forecast), through manually entering planned payment flows (advices), to carrying out cash concentration (the concentration of several bank account balances onto one target account).
    The main aims are to
    1) Ensure sufficient liquidity for all payment obligations that become due.
    2) Ensure that the incoming and outgoing payment flows are optimally controlled and monitored.
    For additional information on the benefits of implementing Treasury, you can refer to the following link:
    https://forums.sdn.sap.com/click.jspa?searchID=27289061&messageID=7420616
    Hope this is useful.

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are close to about 160-180 entities in the database and we want to know the best approach/practice in deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impacts on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    -- Option 1: sequence-based transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    -- Option 2: timestamp-based transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than Sequence... Any thoughts are highly appreciated...

    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism, which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
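    As a minimal sketch of the two indexing options mentioned above (the table and column names are hypothetical, and the two statements are alternatives, not meant to coexist on one table; the second assumes the partitioning option is licensed):
    -- Option A - reverse-key index: spreads hot right-hand-edge inserts
    -- across many leaf blocks, at the cost of losing index range scans.
    create unique index txn_pk on transactions (txnid) reverse;
    -- Option B - hash-partitioned index: spreads the hot edge across
    -- partitions while keeping keys ordered within each partition.
    create unique index txn_pk on transactions (txnid)
      global partition by hash (txnid) partitions 16;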
    Regards
    Jonathan Lewis

  • Any Oracle best practice/standards for inter-DataCenter links for Oracle RAC

    Hello Oracle Experts,
    I am working for a customer to set up an Oracle RAC architecture hosting SAP/non-SAP applications per SLA level (MC/BC/Standard) specs. Currently my network team needs a calculation to arrive at whether we will go for one, two, or three 10Gig links for the inter-DC (data center) traffic for Oracle RAC. Below is additional background:
    •     Porting all client SAP/Non-SAP Oracle databases to new 2 data-centers.
    •     There will be 10 blades (4x BL680s and 6x BL460s) in each DC (can scale-up/out later on).
    •     Clusters architecture to support Extended/Stretched RAC cluster feature
    •     Clusters 2-node each(1-datacenter1, 1-datacenter2) and nodes distributed across 2 x c7000 such that no cluster has more than one node in an enclosure.
    •     Each node will have - 4 NIC ports ( 2 x public and 2 x private) , 2 dual-port HBA
    •     Oracle ASM/ACFS (ASM Cluster File System), Voting Disk, OCR and Database files
    •     the versions are Oracle 11g RAC, Oracle 10g RAC and Oracle 9i (for DataGuard/Standby) on RHEL 6 on Proliant Blades (x86) + BladeMatrix
    My network colleagues are considering using DWDM across the 2 DCs (given the lower cost?). I am still looking around for any Oracle/industry best practices on this, and for a calculation to support the decision.
    Many Thanks in advance..
    Regards,
    Abhijit

    Hi ,
    There is no specific set of steps / practices for batch loading content to UCM. It very much depends on how much content the user has to load into UCM and how well the server is configured in terms of performance.
    You can get more details from the following documentation link : http://docs.oracle.com/cd/E21043_01/doc.1111/e10792/c02_settings009.htm
    Thanks,
    Srinath

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of patching for security and functionality, then it should be stated here!
    Please clarify!
    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state an Oracle-recommended best practice; they only speak to the specific patch package they describe. These do not help me in making an enterprise statement of practices and standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of patching for security and functionality, then it should be stated here!

    Um. OK.

    Please clarify!

    Of whom are you asking for a clarification?

    Where can I find the current information so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you...

    Who is the "you" to which you refer?

    ...are giving me do not state an Oracle-recommended best practice; they only speak to the specific patch package they describe. These do not help me in making an enterprise statement of practices and standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.

    Be our guest.

    Do we...

    What do you mean "we", Kemosabe?

    ...have any Best Practice document about PSU patches available for customers?

    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Best Practices on OWB/ODI when using Asynchronous Distributed HotLog Mode

    Hello OWB/ODI:
    I want to get some advice on best practices when implementing OWB/ODI mappings to handle Oracle Asynchronous Distributed HotLog CDC (change data capture), specifically for “updates”.
    Under Asynchronous Distributed HotLog mode, if a record is changed in a given source table, only the column that has been changed is populated in the CDC table with the old and new value, and all other columns with the exception of the keys are populated with NULL values.
    In order to process this update with an OWB or ODI mapping, I need to compare the old value (UO) against the new value (UN) in the CDC table. If both the old and the new value are NOT the same, then this is the updated column. If both the old and the new value are NULL, then this column was not updated.
    Before I apply a row-update to my destination table, I need to figure out the current values of those columns that have not been changed, and replace the NULL values with the current values. Otherwise, my row-update would overwrite with NULLs those columns whose values have not changed. This is where I am looking for advice on best practices. Here are the 2 possible solutions I can come up with, unless you guys have a better suggestion on how to handle "updates" (a small SQL sketch of the NULL-replacement logic follows the two solutions below):
    About My Environment: My destination table(s) are part of a dimensional DW database. My only access to the source database is via Asynchronous Distributed HotLog mode. To build the datawarehouse, I will create initial mappings in OWB or ODI that will replicate the source tables into staging tables. Then, I will create another set of mappings to transform and load the data from the staging tables into the dimension tables.
    Solution #1: Use the staging tables as lookup tables when working with “updates”:
    1.     Create an exact copy of the source tables into a staging environment. This is going to be done with the initial mappings.
    2.     Once the initial DW database is built, keep the staging tables.
    3.     Create mappings to maintain the staging tables using as source the CDC tables.
    4.     The staging tables will always be in sync with the source tables.
    5.     In the dimension load mapping, “join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the staging tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #2: Use the dimension tables as lookup tables when working with “updates”:
    1.     Delete the content of the staging tables once the initial datawarehouse database has been built.
    2.     Use the empty staging tables as a place to process the CDC records
    3.     Create mappings to insert CDC records into the staging tables.
    4.     The staging tables will only contain CDC records (i.e. new records, updated records, and deleted records)
    5.     In the dimension load mapping, "outer join" the staging tables, and identify "inserts", "updates", and "deletes".
    6.     For "updates", use the dimension tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #1 uses staging tables as lookup tables. It requires extra space to store copies of source tables in a staging environment, and the dimension load mappings may take longer to run because the staging tables may contain many records that may never change.
    Solution #2 uses the dimension tables as both the lookup tables as well as the destination tables for the “updates”. Notice that the dimension tables will be updated with the “updates” AFTER they are used as lookup tables.
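    As promised above, a minimal sketch of the NULL-replacement logic common to both solutions (all table and column names are hypothetical, and the CDC columns follow the UO/UN naming described earlier; the lookup table here plays the role of the staging or dimension table, depending on the solution):
    -- NVL() keeps the lookup value wherever the CDC row carries NULL,
    -- i.e. wherever the source column was not changed.
    update stg_customer s
    set (s.name, s.city) =
        (select nvl(c.name_un, s.name),
                nvl(c.city_un, s.city)
           from cdc_customer c
          where c.cust_id = s.cust_id)
    where exists (select 1 from cdc_customer c where c.cust_id = s.cust_id);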
    Any other approach that you guys may suggest? Do you see any other advantage or disadvantage against any of the above solutions?
    Any comments will be appreciated.
    Thanks.

    hi,
    can you please tell me how to make the JDBC call? I tried it as:
    1. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "jdbc:oracle:thin");
    and
    2. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "thin");
    -as given in http://www.acs.ilstu.edu/docs/oracle/server.101/b10785/jm_opers.htm#CIHJHHAD
    The 1st one is giving the error:
    Caused by: oracle.jms.AQjmsException: JMS-135: Driver jdbc:oracle:thin not supported
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:330)
    at oracle.jms.AQjmsTopicConnectionFactory.<init>(AQjmsTopicConnectionFactory.java:96)
    at oracle.jms.AQjmsFactory.getTopicConnectionFactory(AQjmsFactory.java:240)
    at com.ivy.jms.JMSTopicDequeueHandler.init(JMSTopicDequeueHandler.java:57)
    The 2nd one is erroring out:
    oracle.jms.AQjmsException: JMS-225: Invalid JDBC driver - OCI driver must be used for this operation
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:288)
    at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1307)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
    at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
    at com.ivy.jms.JMSTopicDequeueHandler.receiveMessages(JMSTopicDequeueHandler.java:115)
    at com.ivy.jms.JMSManager.run(JMSManager.java:90)
    at java.lang.Thread.run(Thread.java:619)
    Is anything else beyond this required? Please help. :(
    oracle: 10g R4
    Linux environment, and Java is trying to do AQjmsFactory.getTopicConnectionFactory(...); the Java machine is different from the database machine, and no Oracle client is to be installed on the Java machine.
    The same code works fine when I use oci8 instead of the thin driver and run it on the DB machine.
    ravi

  • Oracle Best practices for changing BYTE to CHAR on VARCHAR2 columns

    Dear Team,
    The application team wanted to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    We wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    The application team wanted to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    We wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle could care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
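    A minimal sketch of the semantics difference and of the in-place change (the table and column names are hypothetical):
    -- 50 bytes vs. 50 characters: with a multibyte character set, the BYTE
    -- column may hold fewer than 50 characters; both are capped at 4000 bytes.
    create table semantics_demo (
      name_b varchar2(50 byte),
      name_c varchar2(50 char)
    );
    -- Widening a column from BYTE to CHAR semantics is a quick,
    -- dictionary-only change:
    alter table semantics_demo modify (name_b varchar2(50 char));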

  • Oracle Best Practices in 10g When Disabling NUMA

    We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

    user10387007 wrote:
    We have both Linux and Solaris 10 DB servers running 10g on them with NUMA. What is Oracle's best practice for dealing with NUMA in hardware when it can be disabled?

    How are you using NUMA (Non-Uniform Memory Access)?
    NUMA can be implemented at CPU level - in which case CPU affinity becomes important. NUMA can be used across an Interconnect (e.g. SCSI over RDMA protocol).
    So it depends on what you mean by NUMA and how you are using it (and whether or not it is used by the Oracle s/w stack itself).
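    If the intent is to stop the Oracle software stack itself from using NUMA optimizations (as opposed to disabling NUMA in the hardware/firmware), a hedged sketch for 10g follows; "_enable_NUMA_optimization" is a hidden parameter, so treat this as an assumption to verify with Oracle Support for your exact version and platform:
    -- Disable Oracle's NUMA optimization at the instance level;
    -- takes effect after an instance restart.
    alter system set "_enable_NUMA_optimization" = false scope = spfile;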

  • Best practice for Video over IP using ISDN WAN

    I am looking for the best practice to ensure that the WAN has sufficient active ISDN channels to support the video conference connection.
    Reliance on a load threshold either
    - takes too long for the ISDN calls to establish, causing problems for video setup,
    - or is too quick to place additional ISDN calls when only data is using the line.
    What I need is for the ISDN calls to be pre-established just prior to the video call. I have done this in the past with the "ppp multilink links minimum" command, but this manual intervention isn't the preferred option in this case.
    thanks

    This method is as secure as the password: an attacker can see the hashed value, and you must assume that they know what has been hashed, and with what algorithm. Therefore, the challenge in attacking this system is simply to hash lots of passwords until you get one that gives the same value. Rainbow tables may make this easier than you assume.
    Why not use SSL to send the login request? That encrypts the entire conversation, making snooping pointless.
    You should still MD5 the password so you don't have to store it unencrypted on the server, but that's a side issue.
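    As a minimal sketch of storing only the hash server-side, in PL/SQL since this is an Oracle forum (the table, column, and literal values are hypothetical, and DBMS_CRYPTO requires an explicit EXECUTE grant):
    -- Store only the MD5 digest of the password, never the cleartext.
    create table app_users (
      username      varchar2(30 char) primary key,
      password_hash raw(16)  -- an MD5 digest is 128 bits
    );
    declare
      v_hash raw(16);
    begin
      v_hash := dbms_crypto.hash(utl_raw.cast_to_raw('s3cret'),
                                 dbms_crypto.hash_md5);
      insert into app_users (username, password_hash) values ('alice', v_hash);
      commit;
    end;
    /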
