Best practice for saving data in SQL Server

Hi all
Hoping for a little help on this question. 
If I have a list of fields, e.g. (name, address, postal, phone, etc.), then I create a webform/task
to gather some of these fields (name, postal), then I make another webform/task to gather some other fields (address, phone).
What is best practice in SQL Server for storing the returned values?
Is it:
1. Make a table with all the fields in the list plus a task ID. These fields could be of the
correct type (number, date, etc.), and all answers to all tasks are inserted into this one table.
2. Make a value table for each field with the correct type plus a task ID, so all name values
are stored in the "name value table" with the task ID.
How would I select the values for a certain task from this kind of setup?
3. ??
Best regards
Bo
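
For illustration, a minimal T-SQL sketch of the two options as I read them (all table and column names are hypothetical):

    -- Option 1: one wide table, one typed column per field, one row per task.
    -- Simple to query, but every new field means an ALTER TABLE, and fields
    -- a task didn't gather stay NULL.
    CREATE TABLE dbo.TaskAnswer (
        task_id int           NOT NULL PRIMARY KEY,
        [name]  nvarchar(100) NULL,
        address nvarchar(200) NULL,
        postal  varchar(10)   NULL,
        phone   varchar(20)   NULL
    );

    -- Option 2: one value table per field, each with the correct type.
    CREATE TABLE dbo.NameValue (
        task_id int           NOT NULL PRIMARY KEY,
        value   nvarchar(100) NOT NULL
    );
    CREATE TABLE dbo.PostalValue (
        task_id int         NOT NULL PRIMARY KEY,
        value   varchar(10) NOT NULL
    );

    -- Reading a task's answers under option 2 means one join per field:
    SELECT n.task_id, n.value AS [name], p.value AS postal
    FROM dbo.NameValue AS n
    LEFT JOIN dbo.PostalValue AS p ON p.task_id = n.task_id
    WHERE n.task_id = 42;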

Hi Atul
Thanks for your reply. Can you elaborate a bit further on this, since I am still a little confused?
Let me try to explain my scenario a bit more:
Say instead that there are 50 fields, each with its own unique ID. Maybe an answer table
would look like this:
taskid | field_1 | field_2 | field_3 | field_4 | field_n
So no matter which fields the user fills out, they can all be stored in one table.
Question is, is this a good way to do it? And how do I select from this table using a join?
As far as I know you can't name columns in a table with just numbers, which would have been
great, since the column names could then simply be the field IDs.
OR
Would you have 50 tables, each with a field_id and a value (of the correct type)?
And could you give me an example of how to bind and select from this kind of structure?
Also, inserting into 50 tables on a save... is that the right way to go? :)
Best regards
Bo
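
For illustration, a hedged T-SQL sketch of a common third option (all names hypothetical): instead of 50 columns or 50 tables, keep one field catalog and one tall answer table, then pivot the rows back into columns when selecting.

    -- One row per field definition...
    CREATE TABLE dbo.Field (
        field_id   int          NOT NULL PRIMARY KEY,
        field_name nvarchar(50) NOT NULL,  -- 'name', 'address', 'postal', ...
        field_type varchar(10)  NOT NULL   -- 'string', 'number', 'date', ...
    );

    -- ...and one row per (task, field) answer, with a typed column per base type.
    CREATE TABLE dbo.Answer (
        task_id      int NOT NULL,
        field_id     int NOT NULL REFERENCES dbo.Field(field_id),
        value_string nvarchar(200) NULL,
        value_number decimal(18,4) NULL,
        value_date   date          NULL,
        PRIMARY KEY (task_id, field_id)
    );

    -- Selecting one task's answers back as columns, via a join and
    -- conditional aggregation (only the fields the report cares about):
    SELECT a.task_id,
           MAX(CASE WHEN f.field_name = 'name'   THEN a.value_string END) AS [name],
           MAX(CASE WHEN f.field_name = 'postal' THEN a.value_string END) AS postal
    FROM dbo.Answer AS a
    JOIN dbo.Field  AS f ON f.field_id = a.field_id
    WHERE a.task_id = 42
    GROUP BY a.task_id;

A save then inserts one row per answered field into dbo.Answer, so the number of tables does not grow with the number of fields. The usual caveat: this entity-attribute-value style trades a simple schema for more awkward queries, so it tends to pay off only when the field list changes often.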

Similar Messages

  • JSR 168 best practice for saving inter-portlet state

    The portlet specification doesn't yet cover inter-portlet communication. Until
    it does, what is the best practice for saving state so that multiple portlets
    can use the same data?

    The portlet session is layered on top of the HTTP session except for
    attribute scoping. So, all portlets in a webapp share the same
    HTTP/portlet session for a given client.
    All APPLICATION_SCOPEd portlet session attributes are automatically
    available to other portlets. With PORTLET_SCOPEd attributes, portlet
    containers namespace attribute names, so it would be hard to get
    attributes by name (but still possible).
    Subbu
    Chris Jennings said the following on 10/14/2003 09:17 AM:
    If I set an attribute in a PortletSession, though, it isn't visible from another
    PortletSession... right? Please tell me that's wrong ;-) If that's right, how
    can I have portlet B get to something set by portlet A?
    Subbu Allamaraju <[email protected]> wrote:
    Chris,
    For sharing transient state, the only possible mechanism in V1.0 is to
    use sessions.
    Subbu

  • Best practice for hierarchical data

    First off, I have to say that JMX in Java 6 is terrific stuff. Bundling jconsole in with Java has made JMX adoption so much easier for us.
    Now, to my question. We have read-only hierarchical data (think a DOM tree) that we would like to publish via JMX. What is the best practice? We see two possibilities:
    1. Publish each node of the tree with its own object name and type. This will allow jconsole to display the information in the tree control.
    2. Publish just the root of the tree with an object name and type and then use CompositeType to describe the nodes of the tree. This means you look at the tree in the "Attribute Value" panel of jconsole.
    Are there any best practices for such data? We have implemented #2 and it works, but we are wondering if long term this might lead to unforeseen consequences.
    Thanks in advance.
    --Marty

    Path,
    I did go with #1 and it worked out great. Every node in our tree is an ObjectName node. Works very well for us.
    --Marty

  • Best practice for sharing data with modal window

    Hi team,
    what would the best practice for sharing data with a modal
    window be? I use a modal window to display record details from a
    record list, but I am not quite sure how to access the data of
    the components in the main application from within the modal window.
    Any hints would be welcome
    Best
    Frank

    Pass a reference to the parent into the modal popup. Then you
    can reference anything in the parent scope.
    I haven't done this in 2.0 yet so I can't give you code. I'll
    post if I do.
    Oh, also, you can reference the parent using parentDocument.
    So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment?  Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems.  (For example, most of our ETL
    queries don't use a WHERE clause, so the vast majority of reads are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first);
    rebuild statistics after every load of a table.
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices.  Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
    SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric
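
    A minimal T-SQL sketch of the disable/rebuild and statistics hints from the list above, with hypothetical table and index names:

        -- Disable the non-clustered index before the big insert...
        ALTER INDEX IX_FactSales_OrderDate ON dbo.FactSales DISABLE;

        -- ...run the load (TABLOCK helps the insert qualify for minimal
        -- logging; the database also needs SIMPLE or BULK_LOGGED recovery)...
        INSERT INTO dbo.FactSales WITH (TABLOCK)
            (OrderDateKey, ProductKey, SalesAmount)
        SELECT OrderDateKey, ProductKey, SalesAmount
        FROM staging.Sales;

        -- ...then rebuild the index and refresh statistics after the load.
        ALTER INDEX IX_FactSales_OrderDate ON dbo.FactSales REBUILD;
        UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;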

    Online ETL solutions are among the most challenging to build. To do it efficiently, you can read my blog posts on online DWH solutions to learn how you can configure an online DWH solution for ETL using the MERGE command of SQL Server
    2008, and also to learn some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed.
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Best practices for initial data loads to MDM

    Hi,
    We need to load more than 300,000 vendors from SAP into the MDM production repository. The import server might take days to load that much, even if no error occurs.
    Are there any best practices for initial loads to MDM available? What considerations must be made while doing the initial loads?
    Harsha

    Hello Harsha
    With SP05 Patch 1 there is file aggregation functionality in the import port. It is supposed to optimize the import performance.
    BTW, give me your mail address and I will send you an IDoc packaging paper for MDM.
    Regards,
    Goekhan

  • Symantec antivirus best practice for an Oracle database on Windows Server 2003

    Hi all,
    I have an Oracle database server, version 10.2.0.4, on the Windows Server 2003 platform. What would be best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times. In the event log I have event ID 6008. What may be the cause of it?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • Best Practices for Remote Data Communication?

    Hello all
    I am developing a full-fledged website in Flex 3.4 and the Zend Framework (PHP). I am using the Zend_AMF class in the Zend Framework for communicating data to the remote server.
    I will be communicating with the database in the following ways...
    get data from the server
    send form data to the server
    send requests to the server and get data in response
    Right now I have created just a simple login form which sends two fields, username and password, to a method in a service class on the remote server.
    Here is a little peek into how I did that...
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
      <mx:RemoteObject id="loginService" fault="faultHandler(event)" source="LoginService" destination="dest">
        <mx:method name="doLogin" result="resultHandler(event)" />
      </mx:RemoteObject>
      <mx:Script>
        <![CDATA[
          import mx.rpc.events.ResultEvent;
          import mx.rpc.events.FaultEvent;
          import mx.controls.Alert;
          private function resultHandler(event:ResultEvent):void
          {
            Alert.show("Welcome " + txtUsername.text + "!!!");
          }
          // Minimal fault handler (referenced above but missing in the original):
          private function faultHandler(event:FaultEvent):void
          {
            Alert.show(event.fault.faultString, "Login failed");
          }
        ]]>
      </mx:Script>
      <!-- Login Panel -->
      <mx:VBox>
        <mx:Box>
          <mx:Label text="LOGIN"/>
        </mx:Box>
        <mx:Form>
          <mx:FormItem>
            <mx:Label text="Username"/>
            <mx:TextInput id="txtUsername"/>
          </mx:FormItem>
          <mx:FormItem>
            <mx:Label text="Password"/>
            <mx:TextInput id="txtPassword" displayAsPassword="true" width="100%"/>
          </mx:FormItem>
          <mx:FormItem>
            <mx:Button label="Login" id="loginButton" click="loginService.doLogin(txtUsername.text, txtPassword.text)"/>
          </mx:FormItem>
        </mx:Form>
      </mx:VBox>
    </mx:Application>
    This works fine. But if I create a complicated form which has many fields, then it would be almost unbearable to send each field as an argument of a function.
    Another method that can be used is HTTPService, which supports XML-like requests and responses.
    I want to ask: what are best practices in Flex for remote data communication on a large scale? Maybe using some classes or objects which store data? Can somebody guide me on how to approach data storage?
    Thanks and Regards
    Vikram

    Oh yes, I have done some study of Cairngorm, though I haven't really applied it. I understand that it helps in separating the data models, presentation and business logic into various layers.
    What I am looking for, though, is something about data models, maybe?
    Thanks and Regards
    Vikram

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment, and it seems much riskier for me to try to rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document, and they use that document for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of the DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as data in a warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data, it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.
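
    For illustration, a hedged T-SQL sketch of the kind of re-runnable deployment script being described (all object names hypothetical): it builds one table and seeds its data from ground zero, in order, inside a single transaction.

        SET XACT_ABORT ON;   -- any error rolls the whole deployment back
        BEGIN TRANSACTION;

        IF OBJECT_ID('dbo.LookupStatus') IS NULL
            CREATE TABLE dbo.LookupStatus (
                status_id   int         NOT NULL PRIMARY KEY,
                status_name varchar(50) NOT NULL
            );

        -- Seed rows idempotently so re-running the script is safe.
        INSERT INTO dbo.LookupStatus (status_id, status_name)
        SELECT v.status_id, v.status_name
        FROM (VALUES (1, 'Open'), (2, 'Closed')) AS v(status_id, status_name)
        WHERE NOT EXISTS (SELECT 1 FROM dbo.LookupStatus AS t
                          WHERE t.status_id = v.status_id);

        COMMIT TRANSACTION;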

  • Obiee 11g : Best practice for filtering data allowed to user

    Hi gurus,
    I have a table of the allowed areas for each user.
    I want to show only the data facts associated with these allowed areas.
    For instance my user scott can see France and Italy data.
    I made a session variable and put this session variable in a filter.
    It works OK, but only one value (the first one, I think) is taken into account (for instance, with my solution scott will see only France data).
    I need all the possible values.
    I tried with the row-wise parameter of the session variable, but it doesn't work (OBIEE error).
    I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
    What would be the best practice to achieve this goal of filtering data per user, with the allowed values stored in the database?
    Thanks in advance, Emmanuel

    Check this link
    http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/
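
    For reference, the row-wise approach described at that link boils down to a session initialization block whose SQL returns one row per allowed value, with the variable name in the first column. A hedged sketch, assuming a hypothetical USER_AREAS table:

        -- Row-wise initialization block SQL (names hypothetical). OBIEE
        -- collects one row per allowed area into the multi-valued session
        -- variable NQ_SESSION.ALLOWED_AREAS for the logged-in user.
        SELECT 'ALLOWED_AREAS', AREA_NAME
        FROM USER_AREAS
        WHERE USERNAME = ':USER';

    A data filter such as "Area" = VALUEOF(NQ_SESSION."ALLOWED_AREAS") should then match all of the user's areas rather than just the first, because the variable is row-wise initialized.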

  • Best practice for extracting data to feed external DW

    We are having a healthy debate with our EDW team about extracting data from SAP.  They want to go directly against ECC tables using Informatica and my SAP team is saying this is not a best practice and could potentially be a performance drain.  We are recommending going against BW at the ODS level.  Does anyone have any recommendations or thoughts on this?

    Hi,
    As you asked for Best Practice, here it is in SAP landscape.
    1. Full load or delta load data from SAP ECC to SAP BI (BW): SAP BI understands the data element structure of SAP ECC, and the delta mechanism is the continuous process of loading data from SAP ECC (the transaction system) to BI (the analytic system).
    2. You can store transaction data in DSOs (at a granular level) and in InfoCubes (at a summarized level) within SAP BI. You can have master data from SAP ECC coming into SAP BI separately.
    3. Within SAP BI, you SHOULD use the OpenHub service to provide SAP BI data to other external systems. You must not connect an external extractor to fetch data from DSOs and InfoCubes into the target system. The OpenHub service is the tool that facilitates feeding data to external systems. You can have Informatica take data from the OpenHubs of SAP BI.
    Hope I have explained this to your satisfaction.
    Thanks,
    S

  • Best Practice for Master Data Reporting

    Dear SAP-Experts,
    We face a challenge at the moment and we are still trying to find the right approach to it:
    Business requirement is to analyze SAP Material-related Master Data with the BEx Analyzer (Master Data Reporting)
    Questions they want to answer here are for example:
    - How many active Materials/SKUs do we have?
    - Which country/Sales Org has adopted certain Materials?
    - How many Series do we have?
    - How many SKUs belong to a specific season?
    - How many SKUs are in a certain product lifecycle stage?
    - etc.
    The challenge is that the Master Data is stored in tables with different keys in R/3.
    The keys in these tables are on various levels (a selection below):
    - Material
    - Material / Sales Org / Distribution Channel
    - Material / Grid Value
    - Material / Grid Value / Sales Org / Distribution Channel
    - Material / Grid Value / Sales Org / Distribution Channel / Season
    - Material / Plant
    - Material / Plant / Category
    - Material / Sales Org / Category
    etc.
    So even though the information is available at different levels of detail, the business requirement is to have one query/report that combines all the information. We are currently struggling a bit to decide what the best approach to this requirement would be. Did anyone face such a requirement before - and what would be the best practice? We already tried to find information online, but it seems master data reporting is not very well documented. Thanks a lot for your valuable contribution to this discussion.
    Best regards
    Lukas

  • Best Practices for Loading Data in 0SD_C03

    Hi gurus, I want to know the best practice for getting information about sales, billing and delivery. I know there are these DataSources:
    • Sales Order Item Data - 2LIS_11_VAITM
    • Billing Document Data: Items - 2LIS_13_VDITM
    • Billing Document Header Data - 2LIS_13_VDHDR
    • Sales-Shipping: Allocation Item Data - 2LIS_11_V_ITM
    • Delivery Header Data - 2LIS_12_VCHDR
    • Delivery Item Data - 2LIS_12_VCITM
    • Sales Order Header Data - 2LIS_11_VAHDR
    Do I have to load all these DataSources into InfoCube 0SD_C03, or do I have to create copies of 0SD_C03 to match each DataSource?

    Hi.
    If you just want statistics on the amounts or quantities in the sales process, I suggest you create 3 cubes and then use a MultiProvider to integrate the 3 cubes you created, for example:
        2LIS_11_VAITM -> ZSD_C01
        2LIS_12_VCITM -> ZSD_C02
        2LIS_13_VDITM -> ZSD_C03
    In this scenario, you can enhance 2LIS_12_VCITM and 2LIS_13_VDITM with sales order data, such as the requested delivery date, and then create a MultiProvider such as ZSD_M01.
    Best Regards
    Martin Xie

  • Best practice for encrypting data in CRM 2013 (other than the fields it already encrypts)

    I know CRM 2013 can encrypt some values by default, but if I want to store data in custom fields and then encrypt that, what's the best practice? I'm working on a project to do this through a JavaScript action that, when triggered from a form, would reference
    a web service to decrypt values, and a plugin to encrypt on Update/Create, but I hoped there might be a simpler or more standard way to do this.
    Thanks.

    At what level are you encrypting? CRM 2013 supports encrypted databases if you're worried about the data at rest.
    In transit, you should be using SSL to encrypt the entire exchange, not just individual data.
    You can use field-level security to hide certain fields from end users of a certain type if you're worried about that. It's even more secure than anything you could do with JS, as the data is never passed over the wire.
    Is there something those don't solve?
    The postings on this site are solely my own and do not represent or constitute Hitachi Solutions' positions, views, strategies or opinions.
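
    For the data-at-rest case mentioned above, a hedged T-SQL sketch of enabling Transparent Data Encryption on the SQL Server database behind CRM (certificate and database names are hypothetical; check that TDE is supported for your deployment):

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
        CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

        USE CRM_MSCRM;   -- hypothetical CRM organization database name
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TdeCert;
        ALTER DATABASE CRM_MSCRM SET ENCRYPTION ON;

        -- Back up the certificate and its private key; without them the
        -- encrypted database cannot be restored on another server.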

  • Best practice for saving and recalling objects from disk?

    I've been using the OOP features of LabVIEW for various projects lately and one thing that I struggle with is a clean method to save and recall objects.
    Most of my design schemes have consisted of a commanding object which holds a collection of worker objects. It's a pretty simple model, but it seems to work for some design problems. The commander and my interface talk to each other, and the commander sends orders to his minions in order to get things done. For example, one parent class might be called "Data Device Collection" and it has a property that is an array of "Data Device" objects.
    The Data Device object is a parent class and its children consist of various data devices such as "DAQmx Device", "O-Scope Device", "RS-232 Device", etc.
    When it comes to saving and loading data, the commanding class's "Save" or "Load" routine is called and at that time all of the minions' settings are saved or recalled from disk.
    My save routine is more-or-less straightforward, although it still requires an overriding "Save" and "Load" VI in each child. Here is an example:
    It isn't too bad in that it is pretty straightforward and simple, and there would also be no changes to this if the data structure of the class changed at all. It can also save more generalized settings from its parent class, which is also a good feature. What I don't like is that it looks essentially the same for each child class, but I'm at a loss on an effective way to move the handling of the save routine into the parent class.
    The load routine is more problematic for me.  Here is an example:
    Again, being able to move this into the parent class would be awesome. But the biggest complaint here is that I can't maintain my dynamic dispatch input/output requirements because the object that I load is strictly typed. Instead I have to rely on reading the information from the loaded object and then writing that information to the object that exists on the dynamic dispatch wire. I also dislike that, unlike my save routine, I will need to modify this VI if the data structure of my object changes.
    Anyway, any input and insight would be great. I'm really tired of writing these same VIs over and over and over again, and am after a better way to take care of this in the parent class by keeping the code generalized while still maintaining the ability to bring back the saved parameters of each of the child classes.
    Thanks for your time.

    I'm with Ben. Don't rely on the current ability to serialize an object. Create a save method and implement some form of data persistence there. If you modify your class you might be disappointed when you cannot load objects you previously saved. It mostly works but as soon as you reset the version information in the class, you can no longer load the old objects. This is fine if you know how to avoid resetting the history. One thing that will do this is if you move the class into or out of a library. It becomes a new class with version 1.0.0 and it no longer recognizes the old objects.
    [Edit:  I see that you are just writing to a binary file. I'm not sure you can load older objects anyway using that method but I have never tried it.]
    This will not help you right now but there are plans for a nice robust API for saving objects.
    =====================
    LabVIEW 2012

Maybe you are looking for

  • Web Dynpro SRM 7.0 - Access Shopping Cart One Screen User Default Settings

  • Unmountable Boot Volume installing Windows XP

  • iTunes music will play on iTunes but will not play in my iPod

  • Caching problem in JSPs

  • Move the data in select options to internal table