Best practice in datatypes

Hello all
I'm new to Oracle (have used other DBMSs) and I'm missing a couple of datatypes, so I'm looking for advice as to how to replace them:
boolean: I will need to handle 3-valued logic (true, false and NULL) directly in SQL syntax, i.e. COALESCE, IN, etc.
cidr/inet: how do you handle IPv4 network addresses? I can think of using a CHAR(15) and storing the value directly as a string, but I would like to be able to operate on it to some extent, i.e. validate network masks, broadcast addresses, etc. Are there any add-on datatypes/operators to handle this?
TIA,
cl.

For boolean values, you normally define a CHAR(1) (or VARCHAR2(1)) with a CHECK constraint that restricts the column to one of two values, e.g.
CREATE TABLE t (
  bool_col CHAR(1) CHECK ( bool_col IN ( 'Y', 'N' ) )
);
In PL/SQL, you can then use the BOOLEAN data type if you'd like.
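For the three-valued logic you asked about, the 'Y'/'N' flag behaves like any other nullable column, so NULL gives you the "unknown" state for free. A quick sketch (table and column names are made up):
CREATE TABLE feature_flag (
  id        NUMBER PRIMARY KEY,
  is_active CHAR(1) CHECK ( is_active IN ( 'Y', 'N' ) )  -- NULL is still allowed: the "unknown" state
);

-- COALESCE supplies a default when the flag is unknown
SELECT id, COALESCE( is_active, 'N' ) AS effective_flag FROM feature_flag;

-- IN / NOT IN follow the usual SQL NULL semantics: rows where is_active
-- is NULL are returned by neither of these two queries
SELECT id FROM feature_flag WHERE is_active IN ( 'Y' );
SELECT id FROM feature_flag WHERE is_active NOT IN ( 'Y' );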
For IP addresses, my default would be to declare a VARCHAR2(15) and store the information there. For manipulation, I'd define an IP Address object that would encapsulate whatever IP address logic you want. If you're on 10g, you can also use the regular expression package to define a constraint that ensures a well-formed address.
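For example (just a sketch: the pattern below only checks the dotted-quad shape, not that each octet is in the 0-255 range, and the table name is invented):
CREATE TABLE host_address (
  ip VARCHAR2(15)
     CHECK ( REGEXP_LIKE( ip, '^([0-9]{1,3}\.){3}[0-9]{1,3}$' ) )
);
Range checking and the mask/broadcast calculations would still live in the IP Address object (or package) mentioned above.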
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • NLS data conversion – best practice

    Hello,
    I have several tables that originate from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
    When I use DDL scripts to create the tables in the new database and then try to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
    As I understand it, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) will become VARCHAR2(40) and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
    I have never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What will be the best practice solution for APEX?
    I'll appreciate any comments on the subjects,
    Arie.

    Hello,
    Thanks Maxim and Patrick for your replies.
    I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partly because of what Patrick said about developers forgetting the full syntax, and partly because I read that some of the internal procedures (mainly around LOBs) do not support character semantics and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, " The big advantage is that you don't have to change any scripts or PL/SQL code."
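    Just to illustrate the difference for anyone reading along (a minimal sketch; the table names are invented):
    -- BYTE semantics (the default): the column holds 20 bytes, which can be fewer than 20 characters in UTF-8
    CREATE TABLE t_byte ( name VARCHAR2(20 BYTE) );
    -- CHAR semantics: the column holds 20 characters, regardless of how many bytes each one needs
    CREATE TABLE t_char ( name VARCHAR2(20 CHAR) );
    -- With the parameter set, a plain VARCHAR2(20) in subsequent DDL behaves like VARCHAR2(20 CHAR)
    ALTER SESSION SET nls_length_semantics = CHAR;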
    I'm just curious: what is the technique APEX uses to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.

  • Code Set pattern or best practice?

    Hi all,
    I have what I would have thought to be a common problem: finding the best way to model and implement an organization's code sets. I've Googled and I've searched the forums, without success.
    The problem domain is this: I'm redeveloping an existing application, which currently represents its vast array of code sets using a separate table for each set. There are currently 180+ of these tables - not a very elegant approach. The majority of these code sets are what I would class as "simple" - a numeric value associated with a textual description, e.g. 1 = male, 2 = female, or 1 "drinks excessively", 2 "drinks sometimes", etc. Most of these will just be used to associate a value with a combo box selection.
    There are also what I would class as "complex" code sets, which may have 1..n attributes (ie not just a numeric and text value pair). An example of this (not overly complex) is zip code, which has a unique identifier, the zip code itself (which may change - hence the id), a locality description, and a state value.
    Is there a "best practice" approach or pattern which outlines the most efficient way of implementing such code sets? I need to consider performance vs the ability to update the code set values, as some of them may change from time to time without notice at the discretion of government departments.
    I had considered hard coding, creating classes to represent each one, holding them in xml files, storing in the database etc, but it would seem that making the structure generic enough to cater to varying numbers of attributes and their associated datatypes will be at the cost of performance.
    Any suggestions would be greatly appreciated.
    Thanks.
    Paul C.

    Hi Saish,
    Thanks for your response. Yes, this approach is what I had considered - I'll be using Hibernate so these values will be cached etc.
    I guess my main concern is reducing the huge number of very small tables in use. I was thinking about this some more, and for the simple tables was thinking of 2 tables: 1 (eg "CODE_SET") to describe the code set (or ref table etc) in question, the second to hold the values. This way 80 odd tables would be reduced to 2. Not sure what's best here - simpler ER diagram or more performance!
    Tables...
    Enumeration
    - EnumerationId
    - EnumerationName
    - EnumerationAbbreviation
    EnumerationValues
    - EnumerationId
    - ValueIndex
    - ValueName
    - ValueAbbreviation
    The above allows the names to change.
    You can add a delete flag if values might be deleted but old records need to be maintained.
    Convention: In the above I specifically name the second table with a plural because it holds a collection of sets (plural) rather than a single set.
    In the first table the id is the key. In the second the id and the index are the key. The ids are unique (of course). The enumeration name should be unique in the first table. In the second table the EnumerationId and value name should be unique.
    Conversely you might choose to base uniqueness on the abbreviation rather than the name.
    The Name vs Abbreviation are used for reporting/display purposes (long name versus short name).
    It is likely that for display/report purposes you will have to deal with each of the sets individually rather than as a group. Ideally (strongly urged) you should create something that auto-generates a Java enumeration (a true enum with 1.5, or a general class with 1.4) that uses the id values, and perhaps the indexes, as the values, with the names generated from the abbreviations. This should also generate the database load table for the values. Obviously, going forward, care must be taken in how this is modified.
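    A rough DDL sketch of the two tables described above (Oracle syntax, since that's what I'm used to; column sizes are arbitrary):
    CREATE TABLE Enumeration (
      EnumerationId           NUMBER        PRIMARY KEY,
      EnumerationName         VARCHAR2(60)  NOT NULL UNIQUE,
      EnumerationAbbreviation VARCHAR2(20)
    );
    CREATE TABLE EnumerationValues (
      EnumerationId     NUMBER        NOT NULL REFERENCES Enumeration ( EnumerationId ),
      ValueIndex        NUMBER        NOT NULL,
      ValueName         VARCHAR2(60)  NOT NULL,
      ValueAbbreviation VARCHAR2(20),
      PRIMARY KEY ( EnumerationId, ValueIndex ),
      UNIQUE ( EnumerationId, ValueName )
    );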

  • Data access best practice

    The Oracle web site has an article about 9iAS best practices. Predefining column types in the select statement is one of the topics. The details follow.
    3.5.5 Defining Column Types
    Defining column types provides the following benefits:
    (1) Saves a roundtrip to the database server.
    (2) Defines the datatype for every column of the expected result set.
    (3) For VARCHAR, VARCHAR2, CHAR and CHAR2, specifies their maximum length.
    The following example illustrates the use of this feature. It assumes you have
    imported the oracle.jdbc.* and java.sql.* interfaces and classes.
    //ds is a DataSource object
    Connection conn = ds.getConnection();
    PreparedStatement pstmt = conn.prepareStatement("select empno, ename, hiredate from emp");
    //Avoid a roundtrip to the database and describe the columns
    ((OraclePreparedStatement)pstmt).defineColumnType(1,Types.INTEGER);
    //Column #2 is a VARCHAR, we need to specify its max length
    ((OraclePreparedStatement)pstmt).defineColumnType(2,Types.VARCHAR,12);
    ((OraclePreparedStatement)pstmt).defineColumnType(3,Types.DATE);
    ResultSet rset = pstmt.executeQuery();
    while (rset.next())
    System.out.println(rset.getInt(1)+","+rset.getString(2)+","+rset.getDate(3));
    pstmt.close();
    Since I'm new to 9iAS, I'm not sure whether it's true that 9iAS really does an extra roundtrip to the database just to get the data types of the columns and then another roundtrip to get the data. Can anyone confirm this? Besides, the above example uses Oracle-proprietary interfaces.
    Is there any way to trace the db activities on the application server side without using enterprise monitor tool? Weblogic can dump all db activities to a log file so that they can be reviewed.
    thanks!

    Dear Srini,
    Data-level security is not an issue for me at all. I have already implemented it, and so far not a single bug has been caught in testing.
    It's about object-level security, and that too for 6 different types of users demanding different reports, i.e. the columns and the detailed drill-downs are different.
    Again, these 6 types of users can be read-only users or power users (who can do ad hoc analysis), maybe BICONSUMER and BIAUTHOR.
    So I need help regarding that, as we have to take a decision soon.
    thanks,
    Yogen

  • Changing a Cube - SAP Best Practice

    I have a situation where a consultant of ours speaks of an SAP Best Practice but cannot provide any documentation supporting the claim.
    The situation is that a change has been made in BW Dev to a KF (the datatype was changed). Of course the transport fails in the BW QA system. OSS note 125499 suggests activating the object manually.
    To do so, I will need to open up the system for changes and deactivate the KF in question; then a core SAP BW table (RSDKYF) is to be modified to change the datatype. Then, upon activation of the KF, the data in the cube will be converted.
    If I delete the data in the cube, apply the transport, and then reload from PSA would this work also?  I would rather not have to open up the systems and have core BW tables being modified.  That just doesn't seem like a best practice to me. 
    Is this practice a SAP Best Practice?
    Regards,
    Kevin

    Hello Kevin,
    opening the system for manual changes is not best practice. There are only a few exceptional cases where this is necessary (usually documented in SAP notes).
    The "easy" practice would be to add a new key figure instead of changing the data type. Obviously this causes some rework in dependent objects, but the transport will work and no table conversions will be required.
    The "safe" practice is to drop and reload the data. You can do it from the PSA if the data is still available, or create a backup InfoCube and use the data mart interface to transfer data between the original and the backup.
    Regards
    Marc
    SAP NetWeaver RIG

  • What is the best practice?

    Hi Everybody,
    I would like to know the best practice for the following scenario. Would appreciate any help on it.
    We plan to store XML documents in an Oracle DB and then search them using interMedia.
    The questions I have are:
    1. Should I use a CLOB datatype or VARCHAR2? What is the best practice?
    2. Which would give me the best performance?
    3. Do we have some performance statistics published somewhere?
    I'd be grateful if you could pass on any other comments or info on it.
    Thanks
    Manoj

    Hi Manoj,
    We had a similar requirement here for HTML documents.
    You can use the BLOB datatype and then create a text index.
    One thing: just use CTX_user.LOG and check that it indexes these documents,
    then use a CONTAINS query and confirm it.
    This should work.
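    Roughly like this (a minimal sketch, assuming the Oracle Text / interMedia option is installed; names are made up, and the same pattern works for a CLOB or BLOB column):
    CREATE TABLE xml_docs (
      id  NUMBER PRIMARY KEY,
      doc CLOB
    );
    -- context index over the LOB column
    CREATE INDEX xml_docs_txt ON xml_docs ( doc )
      INDEXTYPE IS ctxsys.context;
    -- once the index is built/synced, query with CONTAINS
    SELECT id FROM xml_docs WHERE CONTAINS( doc, 'invoice' ) > 0;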
    thanks

  • Application preferences system - best practice

    Hi,
    I'm migrating a forms application from 6i to 11g
    The application source forms are the same for all the possible deployments, which are set up in different databases (let's say one database for each client).
    But as you may expect, each client wants a customized app, so we have to define some kind of preferences, and have them into account in forms (and db packages).
    The problem:
    The application, as it was designed, has this customizing system spread over different solutions:
    A database table with one row for each custom parameter.
    A database package constants.
    Forms global variables defined at the main menu.
    Even worse, instead of defining a good set of properties, I'm finding a lot of code with "if the client is one of these then ... else ..." statements, sometimes implemented with INSTR and global variables defining groups of clients ... bufff....
    The question:
    I'd like to take advantage of the migration process to fix this a little bit. Can you give advice on a best practice for this?

    Thanks. I was hoping there would be something better than both approaches (package constants or parameter table) bundled within the database.
    Of course the
    if customer name = 'COMPANY_A' then use this logic ...
    gives the creeps to anyone.
    Instead of this everything should be
    if logic_to_do_this = 'A' then ...
    There are two minor problems with the table approach:
    Either a single row with one column for each parameter (so you can control each parameter's datatype, but you need DDL every time you add a parameter),
    or a parameter table with parameter name/value pairs (here you'd have to go with VARCHAR2 for everything, but you could keep a set of pre-established conversion masks).
    I prefer the second (you only need to establish masks for number and date parameters), but I'm a bit late, as the app has been working with a single-row parameter table from the beginning, and they didn't even wrap it with a getter function.
    In fact, in an old forms application I developed where customization was paramount, I remember I used two tables master / detail: the master table for the parameter "context" definition and the detail table for the values and dates of entry in force (the app worked 24x7 and users even needed to program changes in preferences in advance). A getter function gets you the value currently in force.
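    For what it's worth, a minimal sketch of the name/value variant wrapped in a getter (all names are invented; number and date parameters would be converted with TO_NUMBER/TO_DATE using the stored mask):
    CREATE TABLE app_parameter (
      param_name  VARCHAR2(64)  PRIMARY KEY,
      param_value VARCHAR2(400)
    );
    CREATE OR REPLACE FUNCTION get_param( p_name IN VARCHAR2 ) RETURN VARCHAR2
    IS
      l_value app_parameter.param_value%TYPE;
    BEGIN
      SELECT param_value INTO l_value
        FROM app_parameter
       WHERE param_name = p_name;
      RETURN l_value;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        RETURN NULL;   -- or raise, depending on how strict you want to be
    END get_param;
    /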

  • Coldfusion, MS SQL, Hash Best Practices,...

    Hello,
    I am trying to store hashed data (a user password) in an MS SQL database; the datatype in the database is set to varbinary. I get a datatype conflict when trying to insert the hashed data. It works when the datatype in the database is set to varchar.
    I understand that you can call your hash function with arguments that will convert the data before sending it to the database, but I am not clear on how this is done. Now, along with any assistance with the conversion, what exactly is the best practice for storing the hash data? Should I store it as varchar or varbinary? Of course, if varchar I won't have the problem, but I am interested in best practices as well.
    Thnx

    brwright,
    I suggest parameterizing your queries to add protection against injection.
    http://livedocs.adobe.com/coldfusion/6.1/htmldocs/tags-b20.htm
    Hashing is best suited for passwords because the encryption is one-way: once encrypted using hash() it can't be decrypted. For other fields that you might want to encrypt and still have the ability to decrypt, you can use the encrypt() and decrypt() functions.
    http://livedocs.adobe.com/coldfusion/6.1/htmldocs/functi75.htm
    I think there are also new encryption functions available in ColdFusion 8...

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I am often struggling with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - to get them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either on Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect all the dimensions; it depends on the report you are creating. But as a best practice, maintain all of them at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level, else the Total level (at the Product, Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source admin guide(get level definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to setup/manage the admin, user logins and sharing privileges for the below setup:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Consumes disk space and degrades performance when writing to the first-level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested this, and it works faster. An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[## is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear but looks like it.
    We had an API that could piggy back the HDBC handle in the RDC ( craxdrt.dll ) but that API is no longer available in .NET. Also, the RDC is not supported in .NET since .NET uses the framework and RDC is COM.
    The workaround is to copy the temp data into a data set and then set the location to the data set. There is no way that I know of to get to the tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks the tempdb so that the user has exclusive rights on it.
    Thank you
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the Spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the Canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the Combo Box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel.
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to certain extent.
    Regards
    Nikhil

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. Assign in my Org Model, in a user-level all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic
    - With CCM 1.0 stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best practice on sqlite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple of games for Android/iOS, and was initially using a regular (un-encrypted) sqlite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQL Manager (Firefox), and then when I installed a game on a device, it would copy that pre-populated database to the device. However, if someone were able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe AIR has built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection), but I would like the expert community's opinion on this.
    Now, if the answer is to go with encryption, that's great - but, in doing so, is it still possible to use the copy function at the beginning, or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user were to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing was creating a script that self-creates the database and then populates the tables (unencrypted... the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive with the exception of the values I'm entering, but you can't do an insert loop or a multi-row insert statement in AIR's SQLite, so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above setup to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.
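    In case it helps, the generated script is basically a long run of single-row statements like the following (the table and values are only illustrative; in AIR each statement is executed on its own):
    CREATE TABLE IF NOT EXISTS store_item (
      id    INTEGER PRIMARY KEY,
      name  TEXT    NOT NULL,
      price INTEGER NOT NULL
    );
    -- no multi-row INSERT or loop available here, so one statement per row
    INSERT INTO store_item ( id, name, price ) VALUES ( 1, 'Small coin pack', 100 );
    INSERT INTO store_item ( id, name, price ) VALUES ( 2, 'Large coin pack', 450 );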
