SIMPLE Database Design Problem!

Mapping is a big problem for many complex applications.
So what happens if we put all the tables into one table called ENTITY?
I have more than 300 attributeTypes, and there will be lots of null values in the records of that single table, as every entityType uses the same table.
Other than wasting space, what kind of performance penalties do I get if I put a clustered index on the entityType column of that table?
Definition of the table:
ENTITY
EntityID > uniqueidentifier
EntityType > tells the entityType name
Name >
LastName >
CompanyName >
OppurtunityPeriod >
... (more than 300 attributeTypes in total)
PS: There is also another table called RELATION that records the relations between entities.
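For illustration, here is a rough T-SQL sketch of that design; only a handful of the 300+ attribute columns are shown, and the RELATION columns beyond RelationType are my assumption:

CREATE TABLE ENTITY (
    EntityID          UNIQUEIDENTIFIER NOT NULL PRIMARY KEY NONCLUSTERED,
    EntityType        VARCHAR(50) NOT NULL,   -- 'PERSON', 'COMPANY', ...
    Name              NVARCHAR(100) NULL,
    LastName          NVARCHAR(100) NULL,
    CompanyName       NVARCHAR(200) NULL,
    OppurtunityPeriod DATETIME NULL
    -- ... plus roughly 300 more nullable attribute columns
);
CREATE CLUSTERED INDEX CIX_Entity_EntityType ON ENTITY (EntityType);

CREATE TABLE RELATION (
    RelationID   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY NONCLUSTERED,
    RelationType VARCHAR(50) NOT NULL,        -- e.g. 'CONTACTMECHANISM'
    FromEntityID UNIQUEIDENTIFIER NOT NULL REFERENCES ENTITY (EntityID),
    ToEntityID   UNIQUEIDENTIFIER NOT NULL REFERENCES ENTITY (EntityID)
);
CREATE CLUSTERED INDEX CIX_Relation_RelationType ON RELATION (RelationType);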

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
We check the column with WHERE _entityType='PERSON'; as there is a clustered index on entityType, there is NO performance decrease. There is also a clustered index on the RELATION table, on relationType. When we say WHERE _entityType='PERSON' or WHERE relationType='CONTACTMECHANISM', it scans the clustered index first; it acts like a table as it is physically ordered.
I was thinking in terms of using several conditions in the same select, such as
WHERE _entityType = 'PERSON'
  AND LastName LIKE 'A%'
In your case you have to use at least two indices, and since your clustered index comes first ...
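To make that concrete: with only the clustered index on entityType, a query like the one above still has to scan the whole 'PERSON' range to evaluate the LastName predicate. One option (a sketch, assuming SQL Server and the ENTITY table sketched earlier; the index name is illustrative) is a composite index so both predicates can be used for a seek:

CREATE NONCLUSTERED INDEX IX_Entity_Type_LastName
    ON ENTITY (EntityType, LastName);

SELECT EntityID, Name, LastName
FROM ENTITY
WHERE EntityType = 'PERSON'
  AND LastName LIKE 'A%';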
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Have you ever thought of using constraints in your model? How would you realize those?
...in fact we did. We have arranged the generic object model in an object database. The knowledge information is held in the object database. So your relational database is used only as "simple" storage; everything has to go through your object database.
But the data schema is held in the RDBMS, with code generation that creates a schema to hold the data. If you think that this approach makes sense, why not.
But in order to have an efficient mapping and good performance, we have thought about building only one table. The problem is we know we are losing some space, but hard disk is much cheaper than RAM and CPU, so our trade-off concentrated on the storage cost. But I still wonder if there is a point that I have missed in terms of performance? Just test your approach with a sufficient amount of data - only you know how many records you have to store in your model.
PS: it is not wise or effective to use generic object models in object databases either, as the CPU cost is high when you are holding the data. I don't know if I'd have taken your approach - using two database systems to hold data and business logic.
PS2: An RDBMS is a value-based system whereas object databases are identity-based; we are trying to be in the gray area between both worlds. Like I wrote: if your approach works and scales to the required size, why not? I would assume that you did a load test with your approach.
What I would question, though, is that you are discussing a "SIMPLE Database Design" problem. I don't see anything simple in your approach when it comes to implementation.
C.

Similar Messages

  • OO / database design problem.

    We are building an OO system with a database back end, and I seem to come across the same design problem a number of times.
    Say we have an object 'Transaction'. Each Transaction can be one of three types, Debit, Credit, Receipt.
    The database has a table of TransTypes, with three entries. There is also a table of transactions, and the transaction table has a foreign key to the TransType table to indicate which type each transaction is.
    I define a TransType class, and a Transaction class that holds a reference to a TransType object to indicate which type the transaction is.
    The problem is how to perform different processing depending on the type of the transaction, e.g. in pseudocode:
    If the type of the transaction is Credit, then add the amount to the balance. If it is a Debit or Receipt, then subtract the amount.
    How would this be done in code? How can you test which type of object the transaction has? The only thing that differentiates the three types in the database is the name (a string), and saying
    If Trans.TransType.Name = "Credit" then
    (add balance)
    else
    (subtract balance)
    end if
    is no good at all.
    There is something else to note: all the database IDs are GUIDs, and the database is cleaned out and rebuilt regularly, so there's no use remembering the ID of each type.
    According to OO principles, the type class should encapsulate the behaviour.
    I can see two solutions.
    1. Add a boolean flag to the Type table named 'AddsToBalance' or similar, and add a matching boolean attribute to the type class. Then the test becomes
    If Trans.TransType.AddsToBalance then
    (add balance)
    else
    (subtract balance)
    end if
    But this is a fairly limited approach.
    2. Define a base class of TransType, say 'TransTypeBase'. Then create three subclasses: TransTypeCredit, TransTypeDebit, TransTypeReceipt.
    Then, I can either test like this:
    If Typeof(Trans.TransType) Is TransTypeCredit then
    (add balance)
    else
    (subtract balance)
    end if
    or, I can provide an overridable Property AddsToBalance in TransTypeBase that each subclass overrides to return the correct value, and do:
    If Trans.TransType.AddsToBalance then
    (add balance)
    else
    (subtract balance)
    end if
    which is the same as the previous solution, except that the AddsToBalance property is not saved to the database at all but is implemented in the code that defines the class.
    Problem with Solution 2:
    When I retrieve the TransTypes from the database and create the TransType objects, how do I know whether to create a Credit, Debit, or receipt TransType object?
    I could add a field to the TransType table called "TypeID", an integer (1 = Credit, 2 = Debit, 3 = Receipt), and then perform a select case. I don't really mind having a select case here because it's only used when retrieving data - DBs are not OO, so there's always a clash somewhere.
    Anyway if you've read this far thanks for sticking with it, I hope I've explained the problem well enough.
    What do people think of these solutions? Does anyone know the 'proper' way to do this?
    Thanks in advance
    Lindsay

    Someone else already answered this, but I think maybe that answer could be clarified.
    This is an example of where the OO model differs from the data model. From the OO perspective, since you have three different types of transactions, you probably want to have three transaction classes with a common superclass/interface:
    abstract class Transaction {
        abstract Balance updateBalance(Balance in);
    }
    class Credit extends Transaction { /* ... */ }
    class Debit extends Transaction { /* ... */ }
    class Receipt extends Transaction { /* ... */ }
    Whether these should be interfaces or classes is really up to what you want out of the design. I would go with classes because a transaction, in my eyes, is an atomic entity (you probably wouldn't have a class that implements Transaction and something else). As a side note, updateBalance deals with Balance objects so that we don't get into a debate over whether we should be using double or BigDecimal or whatever :-).
    To get transactions -- and translate from database representation to the object model -- you'd have a TransactionFactory class. This might look something like the following:
    public class TransactionFactory {
        public Transaction getTransactions(Account acct) {
            // query the transaction rows for acct, inspect each row's type,
            // and construct the matching Credit, Debit or Receipt object
        }
    }
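    On the question of knowing which subclass to create, here is a sketch of the TransType lookup table with the stable integer discriminator suggested in the original post (SQL Server types assumed; names are illustrative). The factory can then select on TypeID and map 1/2/3 to Credit/Debit/Receipt:
    CREATE TABLE TransType (
        TransTypeID UNIQUEIDENTIFIER PRIMARY KEY,  -- a GUID, regenerated whenever the DB is rebuilt
        Name        VARCHAR(20) NOT NULL,          -- 'Credit', 'Debit', 'Receipt'
        TypeID      INT NOT NULL UNIQUE            -- stable code: 1 = Credit, 2 = Debit, 3 = Receipt
    );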

  • Database design problem for multiple language application

    Hi All,
    We are working on a travel portal, and for a travel portal its content and details are the heart. We are planning to support multiple locales, so this means we need to handle the dynamic data for each locale.
    currently we have following tables
    Destination
    Transport
    Places of Interests
    user comments etc.
    each table contains a lot of fields like
    Name
    ShortDescription
    LongDescription
    and many other fields which can contain a lot of data, and we need to handle the data in a locale-specific way.
    I am not sure how best we can design the application to handle this case. One thing that came to my mind is putting an extra column for each locale in each table, but that means for a new locale things need to be changed from the database to the code level, and that is not good at all.
    Since each table contains a lot of columns whose data is eligible for internationalization, my question is: what might be the best way to handle this case?
    After doing some analysis and some googling, one approach that came to my mind is as below.
    I am planning to create a translation table for each table. For example, for Destination I have the following design:
    table languages
    -- uuid (varchar)
    -- language_id(varchar)
    -- name (varchar)
    table Destination
    --uuid (varchar)
    other fields which are not part of internationalization.
    table Destination_translation
    -- id (int)
    -- destination_id (int)
    -- language_id (int)
    -- name (text)
    -- description(text)
    Any valuable suggestions for the above-mentioned approach are most welcome...

    This approach sounds reasonable - it is the same approach used by Oracle Applications (Oracle ERP software). It de-normalizes information into two tables for every key object - one contains information that is not language sensitive and the other contains information that is language sensitive. The two tables are joined by a common internal id. Views are then created for each language that join these tables based on the common id and the language id column in the second table.
    HTH
    Srini
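    A minimal sketch of that pattern, loosely following the Destination example above (generic SQL; names and types are illustrative and will vary by DBMS):
    CREATE TABLE destination (
        destination_id INT PRIMARY KEY
        -- ... fields that are not language sensitive ...
    );

    CREATE TABLE destination_translation (
        destination_id INT NOT NULL REFERENCES destination (destination_id),
        language_id    VARCHAR(10) NOT NULL,   -- e.g. 'en', 'fr'
        name           VARCHAR(200),
        description    TEXT,
        PRIMARY KEY (destination_id, language_id)
    );

    -- One view per language (the Oracle Applications style described above):
    CREATE VIEW destination_en AS
    SELECT d.destination_id, t.name, t.description
    FROM destination d
    JOIN destination_translation t
      ON t.destination_id = d.destination_id
     AND t.language_id = 'en';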

  • Simple Database Design - Any Suggestions?

    I built a sample, non-real-life database for an automobile agency. The agency has some sellers, car models, and buyers. I have to keep track of all of them.
    These are the steps I have taken while designing:
    Normalization (1NF, 2NF and 3NF) and relationships (one-to-one and one-to-many)...
    The italic column is the primary key...
    The database is not realistic; I did it for practice purposes. Please look at the schema and tell me: do you think it is OK so far, or is the design wrong?
    Here is my schema:
    Seller table stores seller information:
    *Seller Table {+SellerID+ (number), first name (text), last name (text), address (text), city (text), CountryID (foreign key), date hired (date)}*
    Customer table stores customer information, customer can deal with one seller at a time while a seller can deal with more than one customer at a time:
    *Customer Table {+CustomerID+ (number), first name (text), last name (text), address (text), city (text), CountryID (foreign key), SellerID (foreign key) }*
    Car table stores information about cars available:
    *Car Table {+CarID+ (number), manufacturer (text), model (text), car year (text) }*
    The purchased car table holds information about customers who purchased a car and the type of car that was purchased, as well as the date of purchase (date purchased relies on both fields, CarID and CustomerID):
    *Purchased Car Table {+CarID+ (foreign key), +CustomerID+ (foreign key), date purchased (date)}*
    OR
    *Purchased Car Table {+PurchasedCarID+ (number), CarID (foreign key), CustomerID (foreign key), date purchased (date)}*
    *Country Table {+CountryID+ (number), country name (text)}*
    A customer and a seller can store several phone numbers:
    *Phone Table {+PhoneID+ (number), CustomerID (foreign key), phone number (text)}*
    If I want a seller to store more than one number, will I have to make a new table that stores sellers' phone numbers, or is there a better way to do it? (One option is sketched below.)
    Note: I created the country table and the phone number table since they are considered multi-valued fields. To achieve 1NF, I must let every field in the tables (e.g. Seller and Customer) be atomic.
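    For the seller phone numbers asked about above, one straightforward option is a second phone table that mirrors the existing one (a sketch only; names and types are illustrative):
    CREATE TABLE SellerPhone (
        PhoneID     INT PRIMARY KEY,
        SellerID    INT NOT NULL REFERENCES Seller (SellerID),
        PhoneNumber VARCHAR(30) NOT NULL
    );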

    You need to study normalization and the concepts underlying 3rd, 4th, and Boyce-Codd normalization.
    This forum is not an appropriate place to teach such a class.

  • Database Design Problem

    Hi All,
    I would like to know how we can store the Accounts, Inventory, Customer and Vendor details of a company having 30-odd divisions, maintaining them separately, in a single database with Oracle Standard Edition on an HP-UX machine.
    Currently I have made a set of tables which I copy for each company into a separate tablespace named after the company, and it works. But by doing this I have to dynamically code all my procedures to find out which company the user has logged in to, and then execute them.
    i.e. the SQL looks like
    var := 'SELECT * FROM ' || COMPID || '_CUSTOMERS WHERE CUSTID = ' || CHR(39) || VCUSTID || CHR(39);
    which is executed using Oracle 8i's built-in EXECUTE IMMEDIATE clause to get the result. Generating and executing SQL this way leaves no scope for SQL tuning.
    Help with the design would be highly appreciated.
    Thanks to all,
    Please write a mail to [email protected]
    Regards
    ravi

    Hi Ravi,
    I think you'd better track users using the audit feature of Oracle instead of Log Miner, or perhaps some triggers, depending on what you want to know about your users. I don't think you want to know ALL that your users have done...
    About your question "how can we know which user wants to work in which company":
    if you have one schema per company, it's easy to grant access on a schema to certain users and not to all.
    Example:
    You have 2 schemas, comp1 and comp2, and 2 users, bob and bill.
    You just have to set up permissions for bob so he can access the tables in schema comp1, and for bill on schema comp2, so each user will only have access to one company.
    Fred
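    A sketch of what that looks like in SQL (schema, table and user names follow the example above; the exact privileges are up to you):
    GRANT SELECT, INSERT, UPDATE, DELETE ON comp1.customers TO bob;
    GRANT SELECT, INSERT, UPDATE, DELETE ON comp2.customers TO bill;

    -- With one schema per company, the dynamic SQL from the question can become
    -- a plain static query (run against the user's own schema or via a synonym):
    SELECT * FROM customers WHERE custid = :vcustid;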

  • Web Page and Database Design Problem

    Ok, so I'm trying to develop an app, but am having a few problems and need some advice/help.
    I have a web page made up of JSP pages. These pages will contain forms that will either list info from the database or allow users to enter data to submit to the DB.
    So I will have servlets that will process the form information.
    I have also written DAO interfaces for different tables. For example, I have a config table which holds keys and their values.
    This information will only ever be displayed, so I have an interface with getAll() and get(String key).
    I want to avoid putting code like below going into the DAO
    ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup(dataSource);
    conn = ds.getConnection();
    PreparedStatement stmt = conn.prepareStatement(query);
    ResultSet records = stmt.executeQuery();
    I'd prefer to make calls from my DAO getAll() method to another class which would create the connection, query the DB, and return the ResultSet, so that I can store/manipulate it any way I wish before passing the results back to the servlet.
    The problem is the records seem to come back null!

    ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup(dataSource);
    conn = ds.getConnection();
    You should have a connection pool to create connections and hand them out to whoever asks for one.
    Check out Hibernate. Hibernate can manage your connections (you need to set up the XML configuration first); it is another layer that encapsulates your JDBC connections, requests, etc.
    Also check out the Spring Framework. Spring makes transactions much easier to implement (using aspects), and it provides a handful of useful APIs for you to work with, like JdbcTemplate and HibernateTemplate.

  • Database design problem. Need Help?

    I have to display records from a table in a UI where users are allowed to move records to specify the priority of the records.
    Here is an example, a db table has 4 records:
    RecordId Ranking
    A 1
    B 2
    C 3
    D 4
    Now if a user tries to set the Ranking of the record with RecordId "D" to say 1, I now have to update all records in this table. So the result should be the following:
    RecordId Ranking
    D 1
    A 2
    B 3
    C 4
    This means that if a user wants to change the ranking of the last record to that of the first record, I have to update all records in the table. I am trying to figure out an algorithm that can do this better than deleting all records and adding new ones, or updating all records. Any help will be greatly appreciated. Thank you in advance.
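    One common way to handle the move (a sketch in plain SQL; the table and column names follow the example, and :old_rank, :new_rank, :moved_id are bind variables) is to shift only the rows between the two positions and then place the moved row:
    -- Promote a record to a higher position (new_rank < old_rank):
    UPDATE records
       SET ranking = ranking + 1
     WHERE ranking >= :new_rank
       AND ranking <  :old_rank;

    UPDATE records
       SET ranking = :new_rank
     WHERE record_id = :moved_id;

    -- Demoting (new_rank > old_rank) is symmetric: subtract 1 from rows
    -- with ranking > :old_rank AND ranking <= :new_rank.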


  • Help with Designing a Simple Database

    I am currently working on a design problem and I would appreciate it if someone could review my solution.
    The Problem:
    I need to create a simple database that contains the following entries:
    First Name //mandatory
    Last Name //mandatory
    Date of Birth //mandatory
    Hobbies //there could be anywhere from 0 to infinite amount of hobbies
    Types of actions that I need to perform on the database:
    Add, delete, and modify an entry.
    Below are the two design solutions I came up with.
    For both solutions I am going to create two text files. One of the text files, called profiles.txt, will contain the following fields on each line:
    Id, First Name, Last Name, Date of Birth
    //the Id field in this text file will be the primary key so you will not see the Id duplicated
    The other text file, called hobbies.txt, will contain the following fields on each line:
    Id, hobby
    //the Id field can be duplicated in this text file so a person can be linked to zero or several hobbies
    Now, what differs between my solutions is how I am going to read this data into my program:
    Solution 1) When you start the program it will read profiles.txt into a linked list. After that is finished, the program will load the hobbies into several linked lists that the profiles linked list points to. So basically each person will have a linked list of hobbies associated with him or her.
    The problem I see with this solution is that if there were 200 million people in profiles.txt, would my program crash because the computer would not have enough memory to load all of those names?
    Solution 2) Instead of loading the data at the start of the program, the data will stay in the text files. So when someone does a search, it will open the text file and search for the entry.
    The problem with this solution is that it would be hard to delete and modify names (would I have to rewrite the text file every time I make a change?). Would a good fix for this problem be creating a separate text file to keep track of any changes or deletions I make, and once in a while doing database maintenance?
    So, a review of my questions:
    1) Would my program crash if I had 200 million entries and used solution 1?
    2) Is solution 2 possible without being incredibly slow or complicated?
    3)     Is there another way of doing this I have not thought of?

    I think having one option will do. Now, the problem with this text-file approach is that we have to read all the information into memory if we are running a test driver for the program, and then work on the information in memory.
    After the program closes, whatever changes we made to this data in memory should be written to file, so we need to find a way of writing the data from memory back out, overwriting the file. I hope you get what I'm talking about.
    The database will consist of information like this:
    String firstName
    String lastName
    String DOB
    ArrayList / Vector hobbies
    Now, we want to declare a class with all of these as data fields, so let's say:
    public class Try {
        String firstName;
        String lastName;
        String DOB;
        ArrayList hobbies;
    }
    and then create an instance of this class in the driver - which will actually be an ArrayList of this class - so each index of that ArrayList will have its own data read in from the file. But again, this is all working in memory.
    After doing all we have to do, we want to write back to file all the changes we made to the data in memory. That's where we are stuck right now.
    A member of the group was suggesting we call functions that work on the txt file directly, which would mean re-writing it each time we call a function that operates on it. This is a slow process.
    I will be glad if anybody out there has a better way to implement this. Thanks a lot.

  • Simple Database Problem

    Hello Sir,
    I have a select box and I allow multi-select to the user.
    Now, what exactly I want to know is this:
    if I am using SQL as the database, or any other database,
    how many FIELDS should I CREATE in the database,
    given that only at runtime shall I know what the user has selected?
    I know one way: I create only one field in the database,
    ADD all that the user has selected from the (list box)
    select into a STRING,
    and then add that string to that field in the database...
    Is there any other, better way? Could anyone please tell me.
    With Regards
    Eklavya

    ...and that's exactly what I have told you how to do. You don't make n fields in one table to hold n choices, because, as you have discovered, you will never know in advance how many fields you need to create. This is a classic database design challenge, and it is resolved by using another table entirely, dedicated to holding rows (not fields) of user choices. An id number that relates both to the rows of choices in the choices table and to the user's id in the users table is how you relate whose choices are what. You get the values by joining the two tables in your query.
    User Table "USERS":
    userID (P) | userName | userSex
    1 Bill F
    2 Jan M
    Choices Table "CHOICES":
    PrimKey (P) | userID (F) | Choice
    1 2 blue
    2 2 red
    3 1 green
    4 2 black
    5 1 red
    Then, "select userName, Choice from USERS, CHOICES
    where USERS.userID = CHOICES.userID
    and USERS.userID = 1"
    You will get green and red, the choices for user 1 as your result.
    I can't really explain it any more clearly than this. You need to do some reading about RELATIONAL databases or you are going to be off in the wrong direction a lot. What we have done here is take advantage of a RELATIONSHIP between two tables, hence the name RELATIONAL database.
    I hope this helps; I will be in trouble here for discussing such a non-Java-related topic. Good luck.

  • IF Auto Update Statistics ENABLED in Database Design, Why we need to Update Statistics as a maintenance plan

    Hi Experts,
    If Auto Update Statistics is enabled in the database design, why do we need to update statistics as a daily/weekly maintenance plan?
    Vinai Kumar Gandla

    Hi Vikki,
    Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled); however, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys and tables that have significant changes in distribution often require manual statistics updates, for the reasons below.
    1. If a table is very big, then waiting for 20% of rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added or removed before it happens. Depending on the workload patterns and the data, this could mean the optimizer is choosing substandard execution plans long before SQL Server reaches the threshold where it invalidates statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
    2. In cases where you know the data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or create a set of filtered statistics, in order to generate query plans of good quality. Remember, however, that sampling with FULLSCAN can be costly for larger tables, and it must be done so as not to affect production performance.
    3. It is quite common to see an ascending key, such as an IDENTITY or date/time data type, used as the leading column in an index. In such cases, the statistics for the key rarely match the actual data unless we update them manually after every insert.
    So in the cases above, we could perform manual statistics updates by creating a maintenance plan that runs the UPDATE STATISTICS command and updates statistics on a regular schedule. For more information about the process, please refer to the article:
    https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
    Regards,
    Michelle Li
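    For example, the maintenance step could be as simple as the following T-SQL (the table name is illustrative):
    -- Refresh statistics on a specific large or skewed table with a full scan:
    UPDATE STATISTICS dbo.BigSalesTable WITH FULLSCAN;

    -- Or refresh (sampled) statistics for every table in the database:
    EXEC sp_updatestats;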

  • A very simple database system with JSON

    If we need to store some data in a database, but without the need for advanced SQL features, can we use this scheme (written here in JavaScript / Node.js)?
    // the DB will be held in RAM!
    var fs = require('fs');
    var DBFILENAME = './myDb.json';
    var myDb = {};
    // read the DB from disk if the file exists
    try { myDb = JSON.parse(fs.readFileSync(DBFILENAME)); } catch (e) { }
    // serialize to disk every minute and when the process terminates
    // (synchronous write, so the data is flushed before the process exits)
    function serialize() { fs.writeFileSync(DBFILENAME, JSON.stringify(myDb)); }
    setInterval(serialize, 60 * 1000);
    process.on('SIGTERM', serialize); process.on('SIGINT', serialize);
    myDb['record1'] = 'foo';
    myDb['record2'] = 'bar';
    See
    the longer version here as a gist (8 lines of code).
    1) Does this DB practice have a name? Is it really so bad? Is it possible to use such a 10-lines-of-code DB system, even in production for websites that have a < 1 GB database?
    2) Scalability: up to which size would this system work without performance problems? I.e., would it work up to 2 GB of data on a normal Linux server with 4 GB of RAM, or would there be real performance problems?
    Note: a minute seems enough to write 2 GB of data to disk... Of course I admit it is 100% non-optimized; we could add a diff feature between the (n-1)th and nth writes to disk...
    3) Search: can I use ready-to-use tools to do some search in such a "simple" database? Lucene, ElasticSearch, Sphinx, etc. something else?

    Nothing is wrong with this for development. If it has a name, I suppose it would be a mock database. It is not uncommon to create a mock database that can emulate very basic functionality. You have the added advantage that you start from a scratch database every time, so you know that your program would also work with a potentially empty NoSQL database, for the same reason.
    However, this is not a reasonable permanent solution by any means.
    Most programmers, due to the small overhead, will simply go ahead and make it work with a NoSQL database. It may take slightly longer, but you are also programming directly for production rather than being forced to adapt your program and test it later.
    Scalability is a non-issue because you're always working in development. If you crash your own computer, it is not that big of a deal. The limit of such a database would be only that of your RAM (or the RAM of the computer running the server); however, I think you'd find that the program gets very slow before you even reach the point where it crashes.
    Perhaps you could adapt some searching mechanism for the mock database, but if you're going to go through the trouble, just go ahead and use a proper NoSQL database. If you lose more than an hour working on this mock database, then you've wasted time.

  • Re: (forte-users) Round-trip database design

    We have used Erwin quite successfully, but it's not cheap.
    "Rottier, Pascal" <Rottier.Pascalpmintl.ch> on 02/15/2001 04:51:01 AM
    To: 'Forte Users' <forte-userslists.xpedior.com>
    cc:
    Subject: (forte-users) Round-trip database design
    Hi,
    Maybe not 100% the right mailing list but it's worth a try.
    Does anyone use tools to automatically update the structure of an existing
    database?
    For example, you have a full database model (Power Designer) and you've
    created a script to create all these tables in a new and empty database.
    You've been using this database and filling tables with data for a while.
    Now you want to do some marginal modifications on these tables. Add a
    column, remove a column, rename a column, etc.
    Is there a way to automatically change the database without losing data and without having to do it manually (except for the manual changes in the (Power Designer) model)?
    Thanks
    Pascal Rottier

    Hello Pascal,
    Forte has classes which might be able to scan the database structure (DBColumnDesc, etc.); Express uses these classes to determine what the BusinessClass looks like. We use Forte to create the tables, indexes and constraints. We have the problem that the classes described above are only readable, not fillable. The solution for us will be to create our own classes in the same manner as the existing ones, so that we are able to make updates to the database structure and maybe able to change the database tables with tool code. Another reason for us to keep the database structure in the application is the ability to see the table structure on which the Forte code works, always up to date with the code. You are always able to compare the structure of the database with your business classes, and to convert a wrong structure into the correct structure with maybe just a little piece of code.
    Hope this helps
    Joseph Mirwald
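    For reference, the marginal modifications being discussed look like this in plain SQL (a sketch; table and column names are illustrative, and the rename syntax in particular varies by DBMS):
    ALTER TABLE customer ADD middle_name VARCHAR(50);     -- add a column
    ALTER TABLE customer DROP COLUMN fax_number;          -- remove a column
    ALTER TABLE customer RENAME COLUMN addr TO address;   -- rename a column (Oracle 9i+ / PostgreSQL)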

  • Time-series / temporal database - design advice for DWH/OLAP???

    I am facing the task of designing a DWH as effectively as possible for time-series data analysis. Are there special design advice or best practices available, or can ordinary DWH/OLAP design concepts be used? I ask because I have seen the term 'time series database' in the academic literature (but without further references), and I have also heard the term 'temporal database' (as far as I have heard, it is not just a matter of logging data changes, etc.).
    So it would be very nice if someone could give me some hints about this type of design problem.


  • Database design (ERD )for Inventory Management System

    Dear All,
    I am going to develop a simple Inventory Management System using C# .NET, for my own learning. After searching different forums, many people have suggested first creating a database design for the software. I want a database design - in short, an ERD diagram - for a simple Inventory Management System which shows the proper entities (tables), attributes and relationships between entities.
    It would be highly helpful for me as I am newbie to C# and databases.
    Thanks,
    momersaleem

    Dear Rebecca,
    Thanks for your suggestions.
    As I am going to develop the IMS for learning purposes, I think I won't need to go into detail regarding customer names and addresses. However, I am still thinking of adding a country attribute to the customers' table, which I think will be helpful for sorting customers.
    "What's the difference between a purchase and an order? They're usually the same thing, which doesn't mean you're wrong, but what are you picturing here?" The Purchase entity will be used to keep a record of purchases we made, and the Order entity will be used to keep a record of orders that customers placed.
    Pricing:
    "Any order system needs to manage two very distinct bits of data that are easy to confuse. The price in the Product entity is the current price. The price in the Order entity is the selling price. Not at all the same thing: the current price is almost certainly going to change over time. The selling price won't."
    Does this mean that I'll change the price attribute of Product to current_price and add selling_price to the Order table, which will help keep a record of the price at the time of the order?
    "Why did you include a quantity field in the Products table? Is it meant to represent stock on hand?"
    Yes, you are right. It represents stock on hand.
    Could you please recheck the entity relationships, as I am not sure whether these are correct or not?
    Thanks,
    momersaleem
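    A sketch of the pricing split discussed above, with the stock-on-hand quantity kept on the product (generic SQL; names and types are illustrative):
    CREATE TABLE product (
        product_id       INT PRIMARY KEY,
        name             VARCHAR(100) NOT NULL,
        current_price    DECIMAL(10,2) NOT NULL,  -- changes over time
        quantity_on_hand INT NOT NULL             -- stock on hand
    );

    CREATE TABLE order_line (
        order_id      INT NOT NULL,
        product_id    INT NOT NULL REFERENCES product (product_id),
        quantity      INT NOT NULL,
        selling_price DECIMAL(10,2) NOT NULL,     -- copied from current_price when the order is placed
        PRIMARY KEY (order_id, product_id)
    );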

  • Urgent help database design

    Hi,
    I need urgent help with the design of a database. We already have a database in production, but we are facing a problem with extensibility.
    The client information is variable, that is:
    1) the number of fields is different for each client
    2) a client may ask at any time in the future to add another field to the table
    Please share your views, with the practical implications (advantages and disadvantages), or any resource where I can find information.
    Help appreciated.

    Hi,
    Database design is an art & science by itself - as far as I know, there aren't any rigid rules.
    I would suggest that you have a look at the discussions in these two threads for a few general ideas:
    Database Design
    conversion from number to character
    If your client requirements keep changing, I would suggest that you keep 8-10 "spare" columns in your tables - just call them spare1, spare2, etc. (sketched below). The only purpose of these columns is to allow flexibility in the design - i.e., in future you can always extend the table to accommodate more fields.
    I have used this a couple of times & found it to be useful - again, this is only a suggestion.
    Regards,
    Sandeep
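    A sketch of that suggestion (a client table with spare columns; names and types are illustrative):
    CREATE TABLE client (
        client_id INT PRIMARY KEY,
        name      VARCHAR(100) NOT NULL,
        -- ... the regular, known columns ...
        spare1    VARCHAR(255),
        spare2    VARCHAR(255),
        spare3    VARCHAR(255)  -- and so on, up to the 8-10 spare columns suggested
    );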
