Best practice when deleting from different tables simultaneously

Greetings people,
I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas" - for example, do I delete the row in the foreign key table first?
I am reading this thread: http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
and getting my head around it.
Is there a tutorial which deals with this topic?
I was just wondering about the best way to go.
Many Thanks.
Phil
is there a "best practice" method for

Without knowing many details about your specifics... I can suggest a few alternatives -
You can definitely build coordination of the deletes into your application - either automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user explicitly deletes the children... it just depends on how you want to manage it.
Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master the deletes cascade automatically. This is typically something you declare when creating the FK constraint (ON DELETE CASCADE and ON UPDATE CASCADE rules).
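For example (a minimal sketch with made-up table names - check your database's docs for the exact cascade syntax):

-- Option 1: declare the cascade on the constraint itself, so deleting
-- a master row automatically deletes its child rows
CREATE TABLE child (
  child_id  INTEGER PRIMARY KEY,
  parent_id INTEGER NOT NULL,
  CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
    REFERENCES parent (parent_id) ON DELETE CASCADE
);

-- Option 2: coordinate the deletes in your application,
-- child (foreign key) rows first, then the master
DELETE FROM child  WHERE parent_id = 42;
DELETE FROM parent WHERE parent_id = 42;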
hth,
v

Similar Messages

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different logical units), and it works fine. But I can see that there will be trouble when I have more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    It seems the best solution is to create an alias of the fact/transaction table, so there are 2 "copies" of the transaction table (one for the fact and one for the dimension) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to keep a long story short).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure the MVs are refreshed when the source is updated.
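    If it helps, a minimal sketch of the materialized view idea (Oracle syntax; transaction_table and its columns are made-up names):
    CREATE MATERIALIZED VIEW period_dim_mv
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT DISTINCT period_id, period_name
    FROM   transaction_table;
    A fast refresh on commit would keep it current automatically, but that requires a materialized view log on the source table.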
    -Domnic

  • NullPointerException when deleting from table with no non-primary keys

    We have an object mapped to a table in our database.
    This table has two fields that together form a composite key. When we attempt to perform a delete from this table using the mapped object we receive the following exception and stack trace:
    java.lang.NullPointerException
    at oracle.toplink.descriptors.FieldsLockingPolicy.getAllNonPrimaryKeyFields(Unknown Source)
    at oracle.toplink.descriptors.AllFieldsLockingPolicy.getFieldsToCompare(Unknown Source)
    at oracle.toplink.descriptors.FieldsLockingPolicy.buildExpression(Unknown Source)
    at oracle.toplink.descriptors.FieldsLockingPolicy.buildDeleteExpression(Unknown Source)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildDeleteExpression(Unknown Source)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.buildDeleteStatement(Unknown Source)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.prepareDeleteObject(Unknown Source)
    at oracle.toplink.internal.queryframework.StatementQueryMechanism.deleteObject(Unknown Source)
    at oracle.toplink.queryframework.DeleteObjectQuery.execute(Unknown Source)
    at oracle.toplink.queryframework.DatabaseQuery.execute(Unknown Source)
    at oracle.toplink.publicinterface.Session.internalExecuteQuery(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(Unknown Source)
    at oracle.toplink.tools.profiler.PerformanceProfiler.profileExecutionOfQuery(Unknown Source)
    at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
    at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
    at oracle.toplink.internal.sessions.CommitManager.deleteAllObjects(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
    at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
    <snip remainder of stack trace>
    The table definition for our table is as follows:
    PK,FK TASK_ID NUMBER NOT NULL
    PK,FK USER_ID NUMBER NOT NULL
    Each field is a foreign key to another table; however, this is not a join table for a many-to-many relationship.
    The mapping definition for the mapped object is as follows:
    public Descriptor buildProConSubscriptionDescriptor( )
    {
        Descriptor descriptor = new Descriptor( );
        descriptor.setJavaClass( mil.usmc.mol.procon.domain.ProConSubscription.class );
        descriptor.addTableName( "PRO_CON_SUBSCRIPTION" );
        descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.TASK_ID" );
        descriptor.addPrimaryKeyFieldName( "PRO_CON_SUBSCRIPTION.USER_ID" );
        // Descriptor properties.
        descriptor.useNoIdentityMap( );
        descriptor.setIdentityMapSize( 1000 );
        descriptor.useRemoteNoIdentityMap( );
        descriptor.setRemoteIdentityMapSize( 1000 );
        descriptor.setAlias( "ProConSubscription" );
        // Both columns are part of the composite primary key, so this
        // policy has no non-PK fields to compare (see the reply below).
        descriptor.useAllFieldsLocking( );
        // Query manager.
        descriptor.getQueryManager( ).checkDatabaseForDoesExist( );
        // Named Queries.
        // Event manager.
        // Mappings.
        DirectToFieldMapping taskIdMapping = new DirectToFieldMapping( );
        taskIdMapping.setAttributeName( "taskId" );
        taskIdMapping.setGetMethodName( "getTaskId" );
        taskIdMapping.setSetMethodName( "setTaskId" );
        taskIdMapping.setFieldName( "PRO_CON_SUBSCRIPTION.TASK_ID" );
        descriptor.addMapping( taskIdMapping );
        DirectToFieldMapping userIdMapping = new DirectToFieldMapping( );
        userIdMapping.setAttributeName( "userId" );
        userIdMapping.setGetMethodName( "getUserId" );
        userIdMapping.setSetMethodName( "setUserId" );
        userIdMapping.setFieldName( "PRO_CON_SUBSCRIPTION.USER_ID" );
        descriptor.addMapping( userIdMapping );
        return descriptor;
    }
    Any thoughts as to why this is happening?
    Thanks,
    Andrew Lee

    Hi Andrew,
    AllFieldsLocking isn't designed for this case: your object is composed entirely of primary key fields, so there are no non-PK fields for the policy to compare. Use one of TopLink's other locking policies instead - for instance useChangedFieldsLocking(), pessimistic locking, or any of the other optimistic locking strategies available within TopLink - and it will work with the design you have outlined.
    Darren

  • What is the best practice for deleting a large number of records?

    hi,
    I need your suggestions on the best practice for regularly deleting a large number of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support agent jobs yet, I have to use a WebJob scheduled to run every day to delete all the old records.
    To prevent table locking when deleting too many records at once, my WebJob code limits the number of deleted records to 5000 per iteration, with a batch delete count of 1000, each time it calls the delete stored procedure:
    1. Get the total count of old records (older than 3 days)
    2. Get the total iterations: iterations = total count / 5000
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this (the @table declaration was trimmed from my snippet; assume a table variable holding the ids to purge):
     DECLARE @table TABLE ([RecordId] BIGINT PRIMARY KEY)
     BEGIN
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     END
     DECLARE @RowsDeleted INTEGER
     SET @RowsDeleted = 1
     WHILE(@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01'
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records - really too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time comes to around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. The peak-time insert rate (avgN) is N times the average (avg), e.g. if the average per hour is 10,000 and peak time is 5 times that, the peak is 50,000 per hour. This doesn't have to be precise.
    3. The desirable maximum records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that, on iteration I (from 1 to 1,000), deletes records with creation time < (today - 3 days) - (4,320 - 4.32 * I) minutes, so the cutoff sweeps forward and reaches the 3-day boundary on the last iteration.
    In this way the number of records deleted in each batch is not fixed and not known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch is very fast.
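    A rough T-SQL sketch of that loop, reusing MyTable/CreateTime from the post above (the start point and step size are illustrative assumptions):
    -- assumes an index on [CreateTime], per assumption 1
    DECLARE @cutoff DATETIME = DATEADD(DAY, -3, GETDATE());             -- keep the last 3 days
    DECLARE @cur    DATETIME = (SELECT MIN([CreateTime]) FROM [MyTable]);
    WHILE @cur < @cutoff
    BEGIN
        SET @cur = DATEADD(MINUTE, 4, @cur);    -- ~4.32-minute slices, per step 3
        IF @cur > @cutoff SET @cur = @cutoff;
        -- each pass deletes one narrow time slice: small, fast, and index-friendly
        DELETE FROM [MyTable] WHERE [CreateTime] < @cur;
    END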
    Frank

  • Report using Data from different tables

    Hello,
    I am trying to convert a COBOL batch program to an Oracle Reports 6i tabular report.
    The data is fetched from many different tables, and a lot of processing (i.e., based on the value of a column from one table, additional processing is required against other tables) is needed to generate the desired columns in the final report.
    I would like to know the best strategy to follow in Oracle Reports 6i. I heard that CREATE GLOBAL TEMPORARY TABLE is an option (or a REF CURSOR?), but I do not know much about their usage. Can somebody guide me on this or any better way to achieve the result?
    Thank you in advance
    Priya

    Hello,
    There are many, many options available to you, each of which has advantages and disadvantages. This is why it is difficult to answer "what is best?" without a lot more detail about your specific circumstances.
    In general, you're going to be writing PL/SQL to do any conditional logic that cannot be expressed as pure SQL. It can be executed in the database, or it can be executed within Reports itself, and most report developers do some of both.
    As a general rule, you want to send only the data you need from the database to the report. This means you want to do as much filtering and aggregating of the data as is readily possible within the database. If this cannot be expressed as plain SQL queries, then you'll want to create stored procedures to help do this work.
    Generally, the PL/SQL you create for executing within the report should be focused on control of the formatting, such as controlling whether a field is visible, or controlling display attributes for conditional formatting.
    But these are not hard and fast rules. In some cases, it is difficult to get all the stored procedures you might like installed into the database. Perhaps the dba is reluctant to let you install that many stored procedures. Perhaps there are restrictions when and how often updates can be made to stored procedures in a production database, which makes it difficult to incrementally adjust your reports based on user feedback. Or perhaps there are restrictions for how long queries are allowed to run.
    So, Reports offers lots of options and features to let you do data manipulation operations from within the report data model.
    In any case, Oracle does offer temporary table capabilities. You can populate a temp table by running stored procedures that do queries, calculations and aggregations. And you can define and initiate a dynamic query statement within the database and pass a handle to this query off to the report to execute (ref cursor).
    From the reports side, you can have as many queries as you want in the data model, arranged in any hierarchy via links. You can parameterize and change the queries dynamically using bind variables and lexicals. And you can add calculations, aggregations, and filters.
    Again, most people do data manipulation both in the database and in Reports, using the database for what it excels at, and Reports for what it excels at.
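    For example, minimal sketches of both techniques mentioned above (made-up names, not Reports-specific code):
    -- a global temporary table: rows are private to the session and
    -- vanish on commit or session end, so no cleanup job is needed
    CREATE GLOBAL TEMPORARY TABLE report_work
    (
      emp_id  NUMBER,
      dept_id NUMBER,
      amount  NUMBER
    )
    ON COMMIT PRESERVE ROWS;
    -- a packaged function returning a ref cursor the report can execute
    CREATE OR REPLACE PACKAGE report_pkg AS
      TYPE rc_type IS REF CURSOR;
      FUNCTION report_data (p_dept IN NUMBER) RETURN rc_type;
    END report_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY report_pkg AS
      FUNCTION report_data (p_dept IN NUMBER) RETURN rc_type IS
        rc rc_type;
      BEGIN
        OPEN rc FOR
          SELECT emp_id, amount
          FROM   report_work
          WHERE  dept_id = p_dept;
        RETURN rc;
      END report_data;
    END report_pkg;
    /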
    Hope this helps.
    Regards,
    The Oracle Reports Team --skw

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept, we use the J2CA adapter which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid to do a lot of trial/error corrections to a file just to find that it's not actually been used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Deleting from multiple tables where several tables share the same column name

    Hi,
    I am new to PL/SQL and need some help. I need to delete data older than X years from some 35-odd tables in my schema. Of those, 25 tables share the same column name on which I can base my "where" clause, and the remaining 10 tables have different column names. I am doing something like this:
    declare
      table_list   UTL_FILE.FILE_TYPE;
      string_line  VARCHAR2(1000);
      tables_count NUMBER := 0;
      table_name   VARCHAR2(400);
      column_name  VARCHAR2(400);
    BEGIN
      table_list := UTL_FILE.FOPEN('ORALOAD','test7.txt','R');
      DBMS_OUTPUT.PUT_LINE(table_list);
      LOOP
        UTL_FILE.GET_LINE(table_list,string_line);
        table_name  := substr(string_line,1, instr(string_line,'|')-1);
        column_name := substr(string_line, instr(string_line,'|')+1);
        DBMS_OUTPUT.PUT_LINE(table_name);
        DBMS_OUTPUT.PUT_LINE(column_name);
        IF column_name = 'SUBMISSION_TIME' THEN
          delete from :table_name where to_date(:column_name)<(sysdate-(365*7));
        ELSE
          delete from || table_name || where ( || to_date(column_name) || ) <(sysdate-(365*7));
        END IF;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        UTL_FILE.FCLOSE(table_list);
        DBMS_OUTPUT.PUT_LINE('Number of Tables processed is : ' || tables_count);
        UTL_FILE.FCLOSE(table_list);
    END;
    where the text file "test7.txt" contains the list of table names and column names separated by a pipe. But when I execute the above block it gives the error "invalid table name".
    Can something like this be done, or is there another way to carry out this deletion from 35 tables?
    Thanks.

    Thanks for the replies. I don't know what I am doing wrong, but I still can't get this damn thing to work... This is the block I am running now:
    declare
      table_list   UTL_FILE.FILE_TYPE;
      string_line  VARCHAR2(1000);
      tables_count NUMBER := 0;
      table_name   VARCHAR2(4000);
      column_name  VARCHAR2(4000);
      code_text    VARCHAR2(2000);
    BEGIN
      table_list := UTL_FILE.FOPEN('ORALOAD','test7.txt','R');
      LOOP
        UTL_FILE.GET_LINE(table_list,string_line);
        table_name  := substr(string_line,1, instr(string_line,'|')-1);
        column_name := substr(string_line, instr(string_line,'|')+1);
        IF column_name = 'SUBMISSION_TIME' THEN
          DBMS_OUTPUT.PUT_LINE('do nothing');
        ELSE
          code_text := 'begin delete from'|| (table_name) ||'where to_date' || (column_name) || '<(sysdate-(365*7))';
          EXECUTE IMMEDIATE code_text;
        END IF;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        UTL_FILE.FCLOSE(table_list);
        DBMS_OUTPUT.PUT_LINE('Number of Tables processed is : ' || tables_count);
        UTL_FILE.FCLOSE(table_list);
    END;
    But it gives the following error:
    " ORA-06550: line 1, column 51:
    PL/SQL: ORA-00933: SQL command not properly ended
    ORA-06550: line 1, column 7:
    PL/SQL: SQL Statement ignored
    ORA-06550: line 1, column 68:
    PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following:
    ORA-06512: at line 22 "
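    (For reference: the ORA-00933 and PLS-00103 errors above are consistent with the missing spaces around the concatenated names - 'begin delete from'||table_name runs the words together - and with opening a begin block that has no matching end;. A corrected sketch of that branch, untested and assuming the column is a DATE, would be:)
    code_text := 'DELETE FROM ' || table_name ||
                 ' WHERE ' || column_name || ' < SYSDATE - (365 * 7)';
    EXECUTE IMMEDIATE code_text;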

  • WBS elements not deleted from PRPS table

    Hi Experts,
    In CJ20N transaction:
    When I delete a project with project profile (ZABC):
      - The project record is completely deleted from 'PROJ' table.
      - The WBS records are completely deleted from 'PRPS' table.
    When I delete another project with different project profile (ZXYZ):
      - The project record is completely deleted from 'PROJ' table.
      - The WBS records are NOT deleted from 'PRPS' table.
    What is the reason the WBS records are not deleted from the PRPS table?
    Thanks in advance for your valuable answers.

    The WBS elements probably have actuals posted against them; a WBS element with actual postings cannot be deleted, only flagged for deletion.

  • HT2480 email not deleting from server when deleted from ipad

    messages are not being deleted from server when deleted from ipad

    Hello,
    You need to tell us what the email service is. BES? BIS -- and, if so, what email provider? If BES, then see your BES admins. If BIS, then expect it to function in accordance with this KB.
    Otherwise, you can try the following steps, in order, even if they seem redundant to what you have already tried (steps 1 and 2 each should result in a message coming to your BB):
    1) Register HRT
    Homescreen > Options > Advanced Options > Host Routing Table > (it does not matter which line is current) > Register Now
    2) Resend Service Books
    KB02830 Send the service books for the BlackBerry Internet Service
    3) Batt Pull Reboot
    Anytime random strange behavior or sluggishness creeps in, the first thing to do is a battery pop reboot. With power ON, remove the back cover and pull out the battery. Wait about a minute then replace the battery and cover. Power up and wait patiently through the long reboot -- ~5 minutes. See if things have returned to good operation. Like all computing devices, BB's suffer from memory leaks and such...with a hard reboot being the best cure.
    Hopefully that will get things going again for you! If not, then you should try deleting and re-adding your BIS conduits for the affected email accounts.
    Occam's Razor nearly always applies when troubleshooting technology issues!

  • What's the best practice to partition an existing table?

    We are using Oracle 11g EE 11.2.0.3.0. We want to partition an existing table. I found some information from this link [http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php], but I am not sure whether that's the best way. Could anyone share the best practice of partitioning an existing table?
    Thanks.
    Jun

    >
    We want to partition a few related tables. A couple of tables have about 30 million rows, and a couple of other tables have about 10 million rows. The data for those tables is for ~2000 accounts, each of them is identified by a column named "account_id". We want to partition the tables by using the "account_id" as the partition key to gain better performance for queries, so we would probably use hash partition. Please advise.
    >
    1. What evidence do you have that your performance will be any better if you partition those tables?
    2. What evidence do you have that there is anything wrong with the performance of your current queries?
    3. Have you reviewed the actual execution plans for your most important queries to see if there is even any room for improvement?
    If your query uses account_id as a predicate but only pulls a few records from a 'master' table for that key value, your performance may actually be worse if you partition that table. That is because, using an index range scan, Oracle can easily find the ROWIDs of the rows that need to be selected and then easily retrieve them regardless of their physical location.
    If you have queries that do NOT use account_id but currently use indexes, the performance of those queries may also be worse if you partition the table, because those indexes will be GLOBAL.
    There are two major implications when you use hash partitioning:
    1. Any query that does NOT include the partitioning key (account_id for your case) will, of necessity, have to use a full table scan if an appropriate index is not available.
    2. You will get NO management benefits such as being able to add/remove partitions that include old data.
    Before you decide to partition a table you should conduct tests and examine the execution plans for your important queries.
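    If testing does show a benefit, here is a minimal sketch of hash partitioning by account_id (table and column names are made up; for an existing table, DBMS_REDEFINITION is the usual online route):
    CREATE TABLE transactions_part
    (
      txn_id     NUMBER NOT NULL,
      account_id NUMBER NOT NULL,
      txn_date   DATE,
      amount     NUMBER
    )
    PARTITION BY HASH (account_id)
    PARTITIONS 16;  -- hash partition counts work best as powers of two
    -- a LOCAL index stays aligned with the partitions
    CREATE INDEX ix_txn_part_account ON transactions_part (account_id) LOCAL;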

  • Fetch data from different tables and print it in assigned places

    Hi Friends,
    I have designed one SMART-Form (Receipt).
    I need to fetch data from different tables, and the data must be printed in assigned places. Please help me with how to achieve this requirement. Thanks for your help.
    Best regards,
    Manju.
    Edited by: Alvaro Tejada Galindo on Feb 12, 2008 10:20 AM

    You're right.
    When you create a Smart Form you have to decide where the main data is extracted:
    - in the driver program, or
    - in the Smart Form itself
    The difference is in performance: the driver program can select the data only once, whereas the Smart Form has to extract it every time it is called.
    I prefer to use a complex structure for the Smart Form interface, so that all the data I need is in a single structure; the drawback is that a complex structure is harder to manage.
    Max

  • Could not delete from specified table?

    hi all,
    I'm getting the error "could not delete from specified table" when I execute a delete on a DBF file via the JDBC-ODBC bridge. When I execute an update command I get "operation must use an updatable query", even though I'm not updating a result set returned from a select statement.
    Has anyone run into a similar problem before?

    Hi,
    my code is below:
    try {
        conn = DriverManager.getConnection("jdbc:odbc:sui", "", "");
        stmt = conn.createStatement();
        // set ACK to true on the alarm row matching the given date and time
        int r = stmt.executeUpdate("Update Alarms set ACK=True WHERE DT = { d '" + msgdate + "' } AND TM= '" + msgtime + "'");
        System.out.println(r + " records updated in Alarms ");
        stmt.close();
        conn.close();
        stmt = null;
        conn = null;
    }
    catch (NullPointerException e) { System.out.println(e.getMessage()); }
    catch (SQLException e) { System.out.println(e.getMessage()); }
    I have an ODBC data source called sui and a table called Alarms. I need to set the column ACK to true when DT equals msgdate and TM equals msgtime; the error returned is "Operation must use an updatable query".
    I believe the SQL itself is fine, because there is no syntax error and I'm not updating any result set returned from a select query.
    When I delete, I get "could not delete from specified table". Does anyone have any idea what's wrong?

  • How to display data from different tables in one table? Please help

    Hi
    I got Sun Java Studio Creator 2 (the separate installation, not the one in NetBeans)...
    My question is about displaying data taken from the database... I know how to display data in a table (just click on the table, "bind data"), but my question is this:
    when I want to use a SQL statement that takes data from different tables,
    how can I display that data in the table (that will be shown on the web)? When I click "bind data" on the table I can only select one table; I can't select more than one.
    Note:
    1) I'm using a rowset for displaying the data in the table, since the SQL statement depends on a condition (i.e. select a from b where c = ?)...
    2) by "different tables" I mean something like: select a from table1, table2...
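    For example, something like this (table and column names are just placeholders):
    select a.name, b.total
    from table1 a, table2 b
    where a.id = b.id
    and b.total > ?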
    thanks in advance...

    Hi,
    937440 wrote: "Hi everyone, this is my first post in this portal."
    Welcome to the forum! Be sure to read the forum FAQ {message:id=9360002}.
    937440 wrote: "I want to display the details of the emp table. For that I am using this SQL statement:
    select * from emp where mgr = nvl(:mgr, mgr);
    When I give the input 7698 it displays the corresponding records, and when I give no input it displays all the records except those where mgr is null.
    1) I want to display all the records, including nulls, when I give no input.
    2) I want to display all the records whose mgr is null.
    Is there any way to incorporate all of these in a single query?"
    It's a little unclear what you're asking.
    The following query always includes rows where mgr is NULL, and when the bind variable :mgr is NULL it displays all rows:
    SELECT *
    FROM   emp
    WHERE  LNNVL (mgr != :mgr);
    That is, when :mgr = 7698 it displays 6 rows, and when :mgr is NULL it displays 14 rows (assuming you're using the Oracle-supplied scott.emp table).
    The following query includes rows where mgr is NULL only when the bind variable :mgr is NULL, in which case it displays all rows:
    SELECT *
    FROM   emp
    WHERE  :mgr = mgr
    OR     :mgr IS NULL;
    When :mgr = 7698 this displays 5 rows, and when :mgr is NULL it displays 14 rows.
    The following query includes rows where mgr is NULL only when the bind variable :mgr is NULL, in which case it displays only the rows where mgr is NULL. That is, it treats NULL as a value:
    SELECT *
    FROM   emp
    WHERE  DECODE (mgr, :mgr, 'OK') = 'OK';
    When :mgr = 7698 this displays 5 rows, and when :mgr is NULL it displays 1 row.

  • Best practice for exporting from iMovie '08 to iDVD

    I am looking to find out what is the best practice for exporting from iMovie '08 to iDVD. I have read the other postings that give the basic howto (export to Media Browser then select the video in iDVD). However, my question is a little more technical. I have 1080i HD projects. I am interested in burning them to DVD in the best possible quality. What setting should I be using when I publish to Media Browser?
    I am wondering about quality loss due to more than one conversion/compression. I suspect that when I export to the Media Browser then this is occurring. If I am not mistaken iMovie is using something like H.264 for this. Then, when I run iDVD I suspect it will it do another conversion/compression, I think to get to MPEG2. Not only could this result in a loss of quality but also it will take extra time. I am interested to know what others think about this.
    Finally, I am looking to create DVDs for a lot of video. I am wondering if there are any USB or firewire hardware devices out there that could speed up the compression. I use the Elgato Turbo.264 when I want to encode to H.264 but I wonder if there is something similar for DVD creation.
    Thanks in advance.

    The standards for video DVD are 720x480, usually MPEG-2 encoded,
    so your HiDef project HAS to be 'downsampled' somehow.
    I would export with QuickTime/Apple Intermediate Codec => which is the 'format' your project is in already, so you avoid any useless in-between encoding.
    iDVD will 'swallow' this huge export file - don't mind: iDVD cares about length, not size.
    iDVD will then convert into DVD standards.
    You can 'raise' quality by keeping projects under 60 min - this automatically sets iDVD to the highest technically possible bitrate.
    Hint: judge picture quality on a DVD player + TV, not on your computer (DVDs are meant for TV delivery).

  • Best Practice in Upgrade from ECC 5.0 to ECC 6.0

    Dear All,
    Can someone help me find best practices for an upgrade from ECC 5.0 to ECC 6.0, from the functional FI and CO side?
    Thanks

    Moved to a different forum.
