Data warehousing question/best practices

I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
1) drop the table in the destination database.
2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
4) After the data import, create the primary key and indexes. (A rough sketch of the whole job follows below.)
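Roughly, the nightly job I have in mind would look something like this (just a sketch to show the shape of it; the table name, key column, and database link name are placeholders rather than our real objects):
DROP TABLE patient;
CREATE TABLE patient ( ... );  -- column list omitted; DDL pasted from SQL Developer
-- full copy of the source table over the database link
INSERT INTO patient
SELECT * FROM patient@prod_link;
COMMIT;
-- recreate the primary key and indexes after the load
ALTER TABLE patient ADD CONSTRAINT patient_pk PRIMARY KEY (patient_id);
CREATE INDEX patient_index ON patient (patient);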
Questions:
1) SQL Developer included the following lines when generating the table creation script:
<table creation DDL commands>
then
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TBLSPC_PGROW"
It generated this snippet for the table, for the primary key, and for every index.
Do I need to include these clauses in my script if they are all default values? For example, one of the indexes gets scripted as follows:
CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
-- do I need the following four lines?
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "TBLSPC_IGROW"
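In other words, would it be enough to keep just the CREATE INDEX line (perhaps plus the TABLESPACE clause, since that one is not a default) and let the storage parameters fall back to their defaults, like this?
CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
TABLESPACE "TBLSPC_IGROW"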
2) If anyone has advice on best practices for warehousing data like this, I am very willing to learn from your experience.
Thanks in advance,
Carl

I would strongly suggest not dropping and recreating tables every day.
The simplest option would be to create a materialized view on the destination database that queries the source database and do a nightly refresh of that materialized view. You could also create a materialized view log on the source table and then do an incremental (fast) refresh of the materialized view instead of a full one.
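For example, something along these lines (a rough sketch with placeholder table and link names, assuming the source table has a primary key and a database link back to production already exists):
-- on the source (production) database: track changes to the table
CREATE MATERIALIZED VIEW LOG ON patient WITH PRIMARY KEY;
-- on the destination (warehouse) database: a fast-refreshable copy
CREATE MATERIALIZED VIEW patient
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  AS SELECT * FROM patient@prod_link;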
You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
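A refresh group would look roughly like this (again a sketch; the group name, view list, and the 2:00 AM schedule are placeholders, with ORDERS standing in for whatever materialized views you add later):
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'DW_NIGHTLY',
    list      => 'PATIENT, ORDERS',          -- materialized views refreshed together
    next_date => TRUNC(SYSDATE) + 1 + 2/24,  -- first run: tomorrow at 02:00
    interval  => 'TRUNC(SYSDATE) + 1 + 2/24' -- re-evaluated after each run, i.e. nightly at 02:00
  );
END;
/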
Justin

Similar Messages

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers.  This growth has brought a workload that used to be manageable with but two people to a never ending sprint with five.  Much of what we do is print, which is not my forte, but is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multi-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say: I cannot tell Data Merge to start at Cell 1 in one column, and in another column select, say, Cell 42 as the starting point.
    3) Data Merge only accepts data organized in a very specific, and generally inflexible, pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow-based XML structures:
    It seems as though, as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages. Basically, what you do is create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately, and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file. In order to use this method, the data must be organized in the same order in which it will be displayed. For example, if I had a Lastname field and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method. I could, however, still drag and drop content from each tag into the frame, and it would populate correctly regardless of the order of appearance in the XML.
    Honestly, either method would be fantastic for our current set of projects; however, the flow method may be particularly useful in jobs that would require more than 40 spreads, or simple layouts with huge amounts of data to be merged.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but I think that perhaps I need general community advice on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to the post, the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3 / XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET class libraries to return ADO.NET datasets via the method call, or if you are connecting directly to XML data via whatever source (disk, web service, HTTP request, etc.).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips are appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • HR Master Data conversion-SAP Best Practices

    Hello there,
    We would like to use the SAP Best Practices for HR master data conversion.
    We want to leverage the SAP Best Practices to convert the master data. Could anyone explain in detail how to do this?
    How do we install the Best Practices only to the extent of the data conversion? We don't want to use the rest of the Best Practices.
    I know there are some notes out there.
    Any help on the above is highly appreciated.

    Hi,
    I am not very sure if you can install only the required component, but there would be some prerequisites for every installation.
    It will be clearly mentioned in the baseline.
    Also check whether it is available for the country in which you are currently working.
    Use eCATT: test configurations & test scripts.
    Please revert in case you need any further details.

  • ISE policy creation question - best practices

    Ok, I am a rookie ISE user here and am trying to learn as I go. I have an 802.1x policy for our corporate users on both wired and wireless, and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources and the guest basically has access to just the internet.
    I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what a guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication since, in most cases, they have an AD account created to be able to access those corporate resources. My first question is: do I have a single policy that is tweaked as vendors come and go, or do I simply create a specific policy for each vendor? My second question is: do I, or should I, create unique SSIDs for each vendor?
    As I said, I am just now getting into configuring ISE. I am just not sure of what is considered a best practice or what is considered a secure way to make things happen. In regards to the policies I have created, they work, but I think I have a couple of holes to address.
    Thanks ...
    Brent

    Mostly makes sense. I have the AD part; I just need to get an AD group created for my test subject.
    I created an Endpoint Identity Group to place the vendors' devices into so that we can allow a laptop to connect but not a phone. Got that.
    I think I can handle the Authorization Profile. It will be something like: if VendorAsset and AD1:ExternalGroups Equals VendorADGroup, then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non-802.1x-based SSID as well and add this to the Authorization Profile, but it can still be generic enough to be usable by all vendors.
    I think it is my Authentication rules that I need to modify for Vendor as my Corporate based policies use Dot1x and I need a policy that does not use dot1x. Right?

  • Archiving data retention period (best practice)

    Hi,
    Can anybody provide the standard FI-CO data retention period? I mean, for how long does data from the following tables have to be retained:
    BKPF, BSAD, BSAK, BSAS, BSIS
    Best business practice.
    Thanks in advance
    Joseph

    Hi,
    I would not recommend changing this setting. One major problem you will get is that the archiving function will delete data from the database (data which has been transferred to the archive). So far so good; that's one of the archiving features.
    But it sometimes happens that you have to open an already closed fiscal year again. And if you open this last closed fiscal year again, the real-time transactions could now fail if data (transactions, etc.) is missing that exists only in the archive at that point in time.
    By the way, you can have a look at SAP note 389920, in there the archiving functionality is documented for FI-AA application.
    Regards,
    Markus

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (which stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file, etc.). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John

  • Session question; best practice

    Hi,
    One of our high-profile applications serves queries/updates to user sessions, and we want to improve user query performance and reduce general database activity.
    This piece of the application causes an auto-refresh to execute every 60 seconds. These queries execute against order tables looking for statuses on active orders, are user-specific, and in some cases are not optimally tuned, producing very high database buffer get and disk read activity. On average, 1,500 executions representing various flavors of these queries are executed hourly.
    my questions are:
    1) How can we get maximum performance?
    2) Can we cache these queries, say for 30 seconds at a time?
    3) How can we cache them so that user sessions access the cache?
    -sharma

    Well, you could load the data and put it in the application scope (in memory) with a timeout so that it isn't used after however long you choose, in which case a request would have to go to the DB to get the newer data.

  • Ffmpeg question - best practice

    I have a script I saved and have used for a while without any issues:
    #!/bin/bash
    for i in *.mkv
    do
    ffmpeg -i "$i" -acodec ac3 -vcodec copy "${i%.mkv}.mp4"
    done
    which gives me;
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (ac3 (native) -> ac3 (native))
    My question is, is it better for me to copy the audio instead and is AC3 -> AC3 going to give me an issue?
    Sometimes I get AAC source audio, which is why I specify AC3.

    psjbeisler wrote:is AC3 -> AC3 going to give me an issue?
    No, other than wasting time re-encoding. You probably wouldn't notice a difference in quality. If something weird happens, like a change in channel layout, then it should be reported upstream.
    psjbeisler wrote:Sometimes i get AAC souce audio which is why i specify AC3
    AAC is the most common audio format for MP4 container, so stream copying it would be the best option.
    qubodup wrote:
    You could check what the codec is:
    codec=`ffprobe video.mkv 2>&1 >/dev/null |grep Stream.*Audio | sed -e 's/.*Audio: //' -e 's/[, ].*//'`
    You can avoid the redirection, grep, and sed (see FFmpeg Wiki: FFprobe Tips).
    $ ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 input.mkv
    aac
    Note that only the first audio stream will be probed in this example. If there are others they will be ignored. Change "-select_streams a:0" to "-select_streams a" if you want to list all.

  • Collapsed Data Center Tier - Best Practice

    Hey guys,
    I'm working with a company who's doing a Data Center build-out. This is not a huge build out and I don't believe I really need a 2 tier design (access, core/aggregation). I'm looking for a 1 tier design. I say this because they only really have one rack of hosts - and we are not connected to a WAN or campus network - we are a dev shop (albeit a pretty damn big dev shop) who hosts internet sites and web applications to the public. 
    My network design relies heavily on VRFs. I treat every web application published to the internet as its own "tenant" with one leaked route, which is my management network, so I have any management servers (continuous deployment, monitoring, etc...) sitting in this subnet that is leaked. Each VRF has its own route to a virtual firewall context of its own and out to the internet.
    Right now we are in a managed datacenter. I'm going to be building out their own switching environment utilizing the above design and moving away from the managed data center. That being said I need to pick the correct switches for this 1 tier design. I need a good amount of 10gbe port density (124 ports minimum). I was thinking about going with 4 5672UP or 4 C3064TQ-10GT - these will work as both my access and core (about 61 servers, one fiber uplink to my corporate network, and one fiber uplink to a firewall running multiple device contexts via multiple vlans) 
    That being said - with the use of VRFs, VLANs, and MP-BGP (used to leak my routes), what is the best redundancy topology for this design? If I were using Catalyst 6500s I would do VSS and be done with it - but I don't believe vPC on the Nexus actually switches traffic, and it is really more for a two-tier model (vPC on two cores, with the aggregation/access switch connecting up to both cores but seeing them as one). What I need to accomplish sounds to me like I'm going to be doing this the old-fashioned way: running a port channel between each switch, and hopefully using a non-STP method to avoid loops.
    Am I left with any other options? 

    ISP comes into the collapsed core after a router. A specific firewall interface (firewall is in multi context mode) sits on the "outside" vlan specific to each VRF. 

  • HCM Master data upload sequence & best practices

    Experts,
    What would be the best method and recommended sequence to upload HCM master data into the below infotypes?
    0,1,2,3   6,7,8,9   207,208,209,210  (payroll)
    21,167,168,169,170,171 and 3.series (Benefits)
    PA0795
    PA2006
    PA2012
    PBO795
    T529T
    T530T
    Please advise.
    Thanks in advance.
    NW

    Hi,
    The best method for a mass upload is LSMW.
    The sequence will be as follows:
    First you need to create the master data, so the action tables need to be configured first:
    T529T
    T530T
    Other related PA configuration also needs to be completed.
    Then, when you start uploading data, the sequence will be:
    0, 1, 2, 6, 7, 8, 9, 207, 208, 209, 210, 21, 171, 167, 168, 169, 170, 2006, 2012, 795
    The benefits features (BAREA, BENGR, BSTAT) also need to be configured with all other benefits related config prior to uploading benefits information
    Some other imp. features like LGMST, TARIF, ABKRS, SCHKZ etc.. also need to be configured prior to uploading the employee master data.
    Hope this will be of help
    Regards,
    Guds

  • Redirection question - best practice

    I have a managed session-scoped bean, named UserBean, which, as its name implies, stores user information. Now, if the session has expired (or was never created), a lot of its methods will return null values and will result in an error. What I'd like to know is the best way to redirect to a login page if the UserBean is null. My first idea was the following:
    <navigation-rule>
            <from-view-id>*</from-view-id>
            <navigation-case>
                <from-outcome>#{UserBean == null}</from-outcome>
                <to-view-id>/login.xhtml</to-view-id>
                <redirect/>
            </navigation-case>
    </navigation-rule>
    However, it didn't work. Am I onto something? If not -- what's the best solution?
    I appreciate your help.

    ServletRequest is an interface [1]. In a HTTP servlet environment the ServletRequest instance in the Filter is an implementation of HttpServletRequest [2]. So cast it back.
    [1] http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html
    [2] http://java.sun.com/javaee/5/docs/api/javax/servlet/http/HttpServletRequest.html

  • Saving zip code data with PHP - best practices

    I have built my client an application that analyzes uploaded zip codes for matches with a standard set of zips. These uploaded zips can be one at a time, or a copy/paste from an XLS file (just 5 digit ZIPs).
    They are now asking me to save these uploaded zips, and I am wondering what would be the best way to do that. My two obvious choices are:
    1. Write them to an external text file with a programmatically generated name, and enter the name in the database, keyed to the user.
    2. Write the zips themselves into a glob field in the database.
    I'm inclined to the former, since I don't think there would ever need to be any further manipulation of these zip codes, but what do you think? Are there other choices I may have overlooked?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources

    Dang - sorry. Wrong forum.
    Murray --- ICQ 71997575
    Adobe Community Expert

  • Database Primary Key Question - Best Practices

    I posted this in the ADDT forum, but I imagine I'll get more responses here:
    All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber? Or only sometimes? Is there an argument to NOT use AutoIncrement? I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for a MS Access book I am involved with.)
    Alec
    Adobe Community Expert

    .oO(Alec)
    >I posted this in the ADDT forum, but I imagine I'll get more responses here:
    >All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber?
    No.
    >Or only sometimes? Is there an argument to NOT use AutoIncrement?
    AUTO_INCREMENT is a proprietary MySQL feature. For some people this might be an argument against it, but it doesn't have to be. Every DBMS has its own special features. You just have to decide whether you want to keep your code/queries as portable as possible or want to get the most out of your DB. Usually I prefer performance/features over portability, simply because for me and my projects it's very unlikely that I'll have to change the DBMS. I've chosen MySQL for good reasons and will stay with it for quite a while.
    >I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for a MS Access book I am involved with.)
    It always depends on the table itself, what data it contains, what I want to do with it, and also some personal preferences. In n:m tables, for example, there's no need for an extra numeric PK, since the entire record already is the PK, built from two or more FKs.
    But if I need a numeric PK, I usually use sequences. Some DBMS support them natively; in MySQL they can be emulated with an extra table (see the sketch below). It simply means that the PK number is generated _before_ the record itself is inserted. For me and my framework this has some advantages (makes the internal work a bit easier), but of course in other cases an AUTO_INCREMENT might be more appropriate.
    So IMHO there's no general solution. If an AUTO_INCREMENT or something similar fits your needs, you should use it. I don't see a real problem with that.
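    A minimal sketch of that extra-table emulation in MySQL (the table and column names here are arbitrary examples):
    -- a single row holds the current value of the emulated sequence
    CREATE TABLE my_seq (id BIGINT UNSIGNED NOT NULL);
    INSERT INTO my_seq VALUES (0);
    -- grab the next value before inserting the actual record
    UPDATE my_seq SET id = LAST_INSERT_ID(id + 1);
    SELECT LAST_INSERT_ID();  -- returns the new value, tracked per connection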
    Micha

  • MiniDV work flow question-best practice

    I've got a client with a Canon DC100 miniDV camcorder. This unit does not seem to have a FireWire or USB port.
    What I have are 17 of these little puppies that I need to get into iMovie so I can teach him the iMovie basics.
    What I think I need is some freestanding reader that plugs into the firewire/usb port.
    Is there a better way?
    Thanks

    Michael:
    Take a look at this camera here:
    http://www.camcorderinfo.com/content/Canon-DC100-Camcorder-Review.htm
    It records in MPEG-2 format onto miniDVD. You can insert the miniDVDs into your G5's drive, take the movies out, and convert them to DV. As far as I know, you can run into problems inserting miniDVDs/CDs in slot-loading drives, but not in a standard one.
    The camera has an AV output, but you need an A/D converter to digitize the video. If your customer wants to learn to edit his home videos, he should switch to a miniDV (tape) consumer camera instead of getting any other hardware to work with this one.
      Alberto
