Select max date from a table with multiple records

I need help writing a SQL query to select the max date from a table with multiple records.
Here's the scenario. There are multiple SA_IDs repeated with various EFFDT (dates). I want to retrieve the most recent effective date so that the SA_ID is unique. Looks simple, but I can't figure this out. Please help.
SA_ID CHAR_TYPE_CD EFFDT CHAR_VAL
0000651005 BASE 15-AUG-07 YES
0000651005 BASE 13-NOV-09 NO
0010973671 BASE 20-MAR-08 YES
0010973671 BASE 18-JUN-10 NO

Hi,
Welcome to the forum!
Whenever you have a question, post a little sample data in a form that people can use to re-create the problem and test their ideas.
For example:
CREATE TABLE table_x
(     sa_id          NUMBER (10)
,     char_type      VARCHAR2 (10)
,     effdt          DATE
,     char_val       VARCHAR2 (10)
);
INSERT INTO table_x (sa_id, char_type, effdt, char_val)
     VALUES (0000651005, 'BASE', TO_DATE ('15-AUG-2007', 'DD-MON-YYYY'), 'YES');
INSERT INTO table_x (sa_id, char_type, effdt, char_val)
     VALUES (0000651005, 'BASE', TO_DATE ('13-NOV-2009', 'DD-MON-YYYY'), 'NO');
INSERT INTO table_x (sa_id, char_type, effdt, char_val)
     VALUES (0010973671, 'BASE', TO_DATE ('20-MAR-2008', 'DD-MON-YYYY'), 'YES');
INSERT INTO table_x (sa_id, char_type, effdt, char_val)
     VALUES (0010973671, 'BASE', TO_DATE ('18-JUN-2010', 'DD-MON-YYYY'), 'NO');
COMMIT;
Also, post the results that you want from that data. I'm not certain, but I think you want these results:
     SA_ID LAST_EFFDT
    651005 13-NOV-09
  10973671 18-JUN-10
That is, the latest effdt for each distinct sa_id.
Here's how to get those results:
SELECT    sa_id
,         MAX (effdt)    AS last_effdt
FROM      table_x
GROUP BY  sa_id
;
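If you also need the other columns from the row that carries the latest effdt (for example char_val), one option is the analytic ROW_NUMBER function. This is only a sketch against the same table_x, assuming you want exactly one row per sa_id:
SELECT sa_id, char_type, effdt, char_val
FROM   (
         SELECT x.*
         ,      ROW_NUMBER () OVER (PARTITION BY sa_id ORDER BY effdt DESC) AS rn  -- rn = 1 marks the most recent effdt per sa_id
         FROM   table_x x
       )
WHERE  rn = 1
;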

Similar Messages

  • Deleting data from another table with multiple conditions

    Hi friends,
    I need to delete some data from a table based on multiple conditions. I tried the following SQL, but it is deleting some rows that do not meet the criteria, which is really dangerous. When I try the = operator it returns ORA-01427: single-row subquery returns more than one row.
    delete from GL_TXNS
    where TRN_DT in (Select trn_Dt from GL_MAT)
    and BR in (select ac_branch from GL_MAT)
    and CODE in (select CODE T from GL_MAT)
    and (lcy_amt in (select lcy_amt from GL_MAT) or
    fcy_amt in (select fcy_amt from GL_MAT));
    rgds
    ramya

    My answer is the same as Avinash's but I will explain a little bit more.
    ORA-01427: single-row subquery returns more than one row means that you have a subquery that Oracle expects one value from, but that is returning multiple values. In your case you need one value for the equijoin ("=") and you are getting more than one value back. The error happens even if all the values are the same: multiple values being returned will cause the error.
    The solution is to either allow multiple values to be returned (say, use the IN condition instead of "=") or only return one value if possible (say, force one value by using DISTINCT, GROUP BY, or a WHERE clause condition of ROWNUM = 1), but these workarounds must be checked carefully to make sure they work correctly.
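    A correlated EXISTS is one way to make all the conditions apply to the same GL_MAT row, instead of independent IN subqueries. This is only a sketch, assuming the column names shown in the post; check the matching rules against your data before running it:
    DELETE FROM gl_txns t
    WHERE  EXISTS ( SELECT 1
                    FROM   gl_mat m
                    WHERE  m.trn_dt    = t.trn_dt   -- all conditions checked on the same gl_mat row
                    AND    m.ac_branch = t.br
                    AND    m.code      = t.code
                    AND    ( m.lcy_amt = t.lcy_amt
                          OR m.fcy_amt = t.fcy_amt ) );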

  • Load data from a file with multiple record types to a single table-sqlldr

    We are using two datastores which refer to the same file. The file has two types of records: header and detail.
    h011234tyre
    d01rey5679jkj5679
    h011235tyrr
    d01rel5678jul5688
    d01reh5698jll5638
    Can someone help with loading these lines from one file, with only two datastores (not 2 separate files), using the File to Oracle (SQLLDR) Knowledge Module?

    Hi,
    Unfortunately the IKM SQLLDR doesn't support writing the "when" condition into the ctl file.
    If you wish a simple solution, just add an option (drop me an email if you want an LKM with this).
    The point is:
    With a single option you can control the "when" ctl clause and, for instance, define:
    1) create 2 datastores (1 for each file)
    2) the first position will be a column in each datastore
    3) write the when condition for this first column at the LKM in the interface.
    Does it help you?

  • To select data from two tables, one a transparent table and the other a cluster table

    Hi All,
    I want to select the data from two tables
    Here i am giving with an example.
    Fields: kunnr, belnr from table BSEG
    Fields: adrnr from table KNA1
    Now I want to put these into one internal table based on kunnr and belnr.
    Thanks in advance.
    Ramesh

    Hi,
    You can't use joins on cluster tables, and BSEG is a cluster table, so use FOR ALL ENTRIES for that.
    Refer to this code:
    *&      Form  sub_read_bsak
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM sub_read_bsak.
    *--Select data from BSAK Table
      SELECT lifnr
             augdt
             augbl
             gjahr
             belnr
             xblnr
             blart
             dmbtr
             mwskz
             mwsts
             sgtxt
             FROM bsak
             INTO CORRESPONDING FIELDS OF TABLE it_bsak
             WHERE belnr IN s_belnr
             AND   augdt IN s_augdt.
      IF sy-subrc EQ 0.
    *--Sort table by accounting document and vendor number
        SORT it_bsak BY belnr lifnr.
      ENDIF.
    ENDFORM.                    " sub_read_bsak
    *&      Form  sub_read_bseg
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM sub_read_bseg.
      IF NOT it_bsak[] IS INITIAL.
    *--Select data from BSEG table
        SELECT belnr
               gjahr
               shkzg
               kostl
               hkont
               ebeln
               ebelp
               FROM bseg
               INTO CORRESPONDING FIELDS OF TABLE it_bseg
               FOR ALL ENTRIES IN it_bsak
               WHERE belnr EQ it_bsak-belnr
               AND   gjahr EQ it_bsak-gjahr
               AND   shkzg EQ 'S'.
        IF sy-subrc EQ 0.
    *--Sort table by accounting document
          SORT it_bseg BY belnr.
        ENDIF.
      ENDIF.
    ENDFORM.                    " sub_read_bseg

  • Reading Data from a Table With an Expert

    Hi all - I'm trying to write an Expert in OWB that would read data from a table and create a new table based on that information. I have the table creation part down, but how can I read data from a table with an Expert? From what I've read, Experts only deal with metadata, and there is no mechanism in Experts to actually read data in the tables. Does OraTcl work in Experts? Has anyone ever come across this problem, and if so, how did you solve it? Thanks in advance for your time! - Don

    Hi Don
    You can also use Java and JDBC from within Tcl; see the routines in the file below, and take and modify whatever you need:
    http://blogs.oracle.com/warehousebuilder/ombora.tcl
    source <dir>\ombora.tcl
    set g_user scott
    set g_upwd zzzzzz
    set g_hostname localhost
    set g_port 1521
    set g_srvname ora111
    oraconnect $g_user $g_upwd $g_hostname:$g_port:$g_srvname
    oraselect "select * from emp" ""
    As well as dumping the results to stdout (you probably want to comment that part out and change any other part you don't like), g_res is a ResultSet object you can use to do whatever.
    Cheers
    David

  • Cartesian of data from two tables with no matching columns

    Hello,
    I was wondering: what's the best way to create a Cartesian product of data from two tables with no matching columns, in such a way that only a single SQL query is generated?
    I am thinking about something like:
    for $COUNTRY in ns0:COUNTRY()
    for $PROD in ns1:PROD()
    return <Results>
         <COUNTRY> {fn:data($COUNTRY/COUNTRY_NAME)} </COUNTRY>
         <PROD> {fn:data($PROD/PROD_NAME)} </PROD>
    </Results>
    And the expected result is combination of all COUNTRY_NAMEs with all PROD_NAMEs.
    What I’ve noticed when checking query plan is that DSP will execute two queries to have the results – one for COUNTRY_NAME and another one for PROD_NAME. Which in general results in not the best performance ;-)
    What I’ve noticed also is that when I add something like:
    where COUNTRY_NAME != PROD_NAME
    everything is ok and there is only one query created (it's red in the Query plan, but still it's ok from my pov). Still it looks to me more like a workaround, not a real best approach. I may be wrong though...
    So the question is – what’s the suggested approach for such queries?
    Thanks,
    Leszek

    "Which in general results in not the best performance" - I disagree. Only for two tables with very few rows would a single SQL statement give better performance.
    Suppose there are 10,000 rows in each table - the cross-product will result in 100 million rows. Sounds like a bad idea. For this reason, DSP will not push a cross-product to a database. It will get the rows from each table in separate sql statements (retrieving only 20,000 rows) and then produce the cross-product itself.
    If you want to execute sql with cross-products, you can create a sql-statement based dataservice. I recommend against doing so.
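    If you do go the sql-statement route anyway, the underlying SQL is just a cross join. A minimal sketch, assuming relational tables named COUNTRY and PROD sit behind the two datastores:
    SELECT c.country_name,        -- every COUNTRY_NAME ...
           p.prod_name            -- ... paired with every PROD_NAME
    FROM   country c
           CROSS JOIN prod p;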

  • Select max date from a group of columns of type date

    hi ,
    my requirement is i have 5 columns
    ID, date1,date2,date3,date4.
    first column is integer type and is primary key and remaining all columns are type DATE
    the values in the table are for example:
    ID date1 date2 date3 date4
    100 8/19/2007 4:29:44 PM 8/18/2007 3:50:34 PM 8/16/2007 3:20:55 PM 8/20/2007 5:19:24 PM
    101 8/21/2007 5:29:44 PM 8/19/2007 4:20:34 PM 8/19/2007 4:29:44 PM 8/18/2007 2:50:45 PM
    the output required is
    ID closeddate
    100 8/20/2007 5:19:24 PM
    101 8/21/2007 5:29:44 PM
    I want the max of all dates in all date columns in one single column for each row.
    The query which I tried is
    select ID, max(closed_date)
    from t1, TABLE(Date(date1,date2,date3,date4)) closeddate
    but it's not working.
    Can anybody help me out in solving this?
    thanks in advance

    "so use the NVL on each column, and set it to something ridiculous, so it will always be less than the other values. And then, if the ridiculous value shows up, it means all the columns were null for that row."
    Rather than relying on some ridiculously small date, use coalesce on each date column, with the date column of interest listed first. The order of the remaining terms in the coalesce statement is unimportant. For example, on a table with three date columns:
    with x as (
      select 1 id, sysdate dte1, sysdate-1 dte2, sysdate+1 dte3 from dual union
      select 2, sysdate, null, null from dual union
      select 3, null, sysdate-1, null from dual union
      select 4, null, sysdate-1, sysdate+1 from dual union
      select 5, null, null, null from dual
    )
    select id,
           greatest( coalesce(dte1,dte2,dte3),
                     coalesce(dte2,dte3,dte1),
                     coalesce(dte3,dte1,dte2)) Max_Date
    from x
    ID                     MAX_DATE                 
    1                      09-OCT-2007 12.15.44     
    2                      08-OCT-2007 12.15.44     
    3                      07-OCT-2007 12.15.44     
    4                      09-OCT-2007 12.15.44     
    5                      (null)
    5 rows selected.
    This way if they are all null, you get a null result, and don't need to handle your null-value date as a special case. It also avoids the unlikely situation where your special null-value date appears as actual data.

  • Reading data from BSEG table with Non-key fields in where clause

    Hi All,
    I have to read data from the BSEG table based on the WBS element field (PROJK). As I'm not passing key fields in the WHERE clause, the system couldn't run the select statement. Since BSEG is a cluster table, I can't even create a secondary index on the PROJK field.
    Could you please tell me how to improve its performance?
    Regards
    Jaker.

    SELECT bukrs
                 belnr
                 gjahr
                 shkzg
                 dmbtr
                 hkont
                 lifnr
                 matnr
                 werks
                 menge
                 meins
                 ebeln
            FROM bseg
            INTO TABLE it_bseg
            PACKAGE SIZE 10
            FOR ALL ENTRIES IN it_final
            WHERE  bukrs EQ it_final-bukrs
               AND belnr EQ it_final-belnr
               AND gjahr EQ it_final-gjahr
               AND buzei EQ it_final-buzei
               AND hkont EQ it_final-hkont
               AND werks IN s_werks.
    Use PACKAGE SIZE to fetch from the BSEG table, gathering all the other information into a final internal table. This will reduce the hits to the database. Also try to put that data into a hashed internal table (it_bseg); that will definitely improve the performance.
    Dara.

  • Selecting data from single table with different condition in single query

    Hi everybody...
    I have one table with col1, col2, col3, col4, col5... as columns.
    I want to select col1, col2, col3 with condition (x=y and a=b and c=d)
    I want to select col4, col5 with condition (x=y and a=b and m=n)
    in a single query...
    Thanks for your help.

    Given this data set...
    SQL> select * from oddity
      2  /
          COL1       COL2       COL3       COL4       COL5 A X C M
             1          2          3          4          5 B Y   M
             1          2          3          4          5 A Y C N
             1          2          3          4          5 A Y D M
             1          2          3          4          5 A Y D N
             1          2          3          4          5 B Y D N
             1          2          3          4          5 B Y D U
    6 rows selected.
    SQL>
    The following query meets the requirements. Of course, the requirements as stated are incomplete. I have assumed that we select all five columns if C=D and M=N.
    SQL> SELECT decode(c, 'D', col1, '0') AS col1
      2         , decode(c, 'D', col2, '0') AS col2
      3         , decode(c, 'D', col3, '0') AS col3
      4          , decode(m, 'N', col4, '-8') AS col4
      5           , decode(m, 'N', col5, '-8') AS col5
      6  FROM oddity
      7  WHERE a = 'B'
      8  AND  x = 'Y'
      9  /
          COL1       COL2       COL3       COL4       COL5
             0          0          0         -8         -8
             1          2          3          4          5
             1          2          3         -8         -8
    SQL>
    Cheers, APC

  • Select data from database tables with high performance

    hi all,
    how do I select data from different database tables with high performance?
    I'm using FOR ALL ENTRIES instead of inner joins, but even so the burden on the database tables is going very high (90% in SE30).
    How do I increase the performance?
    Kindly reply.
    Thanks

    Also check that you are not using Open SQL features like DISTINCT, ORDER BY, and GROUP BY too much; use ABAP techniques on internal tables to achieve the same.
    Also don't use SELECT ... ENDSELECT.
    If possible use the UP TO n ROWS clause...
    That will limit the database hits.
    Also don't run a SELECT inside any loops.
    I guess these are some of the tricks you can use to avoid frequent database hits and reduce the database load.

  • Issue in retrieving all the records from ADF Table with multiple row

    Hi,
    As per my requirement, I need to fill the table with multi selected LOV values and when user clicks on commit, I need to save them to database.
    I am using ADF 11g with a multi-select table. Using the below ADD method, I am able to add the records, but if the user clicks cancel, I need to remove those from the view and clear the table as well.
    But the issue I am facing is that in my cancel method I always get only half of the records. Let's assume the table contains 100 records; in my cancel method, I am getting only 50 records.
    Please let me know what is the issue in my source code.
    ADD Method:
    public void insertRecInCMProcessParamVal(String commType, String processType, Number seqNumber){       
    try{
    Row row = this.getCmProcessParamValueView1().createRow();
    row.setAttribute("ParamValue7", commType);
    row.setAttribute("ProcessType", processType);
    row.setAttribute("CreationDate", new Date());
    row.setAttribute("CreatedBy", uid);
    row.setAttribute("ParamValueSeqNum", seqNumber);
    row.setAttribute("ProcessedFlag", "N");
    this.getCmProcessParamValueView1().insertRow(row);
    }catch(Exception e){
    e.printStackTrace();
    }
    }
    Table Code:
    <af:table value="#{bindings.CmProcessParamValueView11.collectionModel}"
    var="row"
    rows="#{bindings.CmProcessParamValueView11.rangeSize}"
    emptyText="#{bindings.CmProcessParamValueView11.viewable ? 'No data to display.' : 'Access Denied.'}"
    fetchSize="#{bindings.CmProcessParamValueView11.rangeSize}"
    rowBandingInterval="1"
    selectedRowKeys="#{bindings.CmProcessParamValueView11.collectionModel.selectedRow}"
    selectionListener="#{bindings.CmProcessParamValueView11.collectionModel.makeCurrent}"
    rowSelection="multiple"
    binding="#{backingBeanScope.backing_app_RunCalcPage.t1}"
    id="t1" width="100%" inlineStyle="height:100px;" >
    <af:column sortProperty="ParamValue6"
    sortable="true"
    headerText="#{bindings.CmProcessParamValueView11.hints.ParamValue6.label}"
    id="c1" visible="false">
    <af:inputText value="#{row.bindings.ParamValue6.inputValue}"
    label="#{bindings.CmProcessParamValueView11.hints.ParamValue6.label}"
    required="#{bindings.CmProcessParamValueView11.hints.ParamValue6.mandatory}"
    columns="#{bindings.CmProcessParamValueView11.hints.ParamValue6.displayWidth}"
    maximumLength="#{bindings.CmProcessParamValueView11.hints.ParamValue6.precision}"
    shortDesc="#{bindings.CmProcessParamValueView11.hints.ParamValue6.tooltip}"
    id="it3">
    <f:validator binding="#{row.bindings.ParamValue6.validator}"/>
    </af:inputText>
    </af:column>
    <af:column sortProperty="ParamValue7"
    sortable="true"
    headerText="Comm Type"
    id="c2">
    <af:inputText value="#{row.bindings.ParamValue7.inputValue}"
    label="#{bindings.CmProcessParamValueView11.hints.ParamValue7.label}"
    required="#{bindings.CmProcessParamValueView11.hints.ParamValue7.mandatory}"
    columns="#{bindings.CmProcessParamValueView11.hints.ParamValue7.displayWidth}"
    maximumLength="#{bindings.CmProcessParamValueView11.hints.ParamValue7.precision}"
    shortDesc="#{bindings.CmProcessParamValueView11.hints.ParamValue7.tooltip}"
    id="it4">
    <f:validator binding="#{row.bindings.ParamValue7.validator}"/>
    </af:inputText>
    </af:column>
    <af:column sortProperty="ParamValue8"
    sortable="true"
    headerText="#{bindings.CmProcessParamValueView11.hints.ParamValue8.label}"
    id="c3" visible="false">
    <af:inputText value="#{row.bindings.ParamValue8.inputValue}"
    label="#{bindings.CmProcessParamValueView11.hints.ParamValue8.label}"
    required="#{bindings.CmProcessParamValueView11.hints.ParamValue8.mandatory}"
    columns="#{bindings.CmProcessParamValueView11.hints.ParamValue8.displayWidth}"
    maximumLength="#{bindings.CmProcessParamValueView11.hints.ParamValue8.precision}"
    shortDesc="#{bindings.CmProcessParamValueView11.hints.ParamValue8.tooltip}"
    id="it2">
    <f:validator binding="#{row.bindings.ParamValue8.validator}"/>
    </af:inputText>
    </af:column>
    </af:table>
    Backing Bean Code:
    DCBindingContainer dcBindings=(DCBindingContainer)getBindings();
    DCIteratorBinding dcIterator=dcBindings.findIteratorBinding("CmProcessParamValueView1Iterator");
    RowSetIterator rs = dcIterator.getRowSetIterator();
    System.out.println("In Cancel Row Count is : "+ rs.getRowCount());
    if (rs.getRowCount() > 0) {
    Row row = rs.first();
    row.refresh(Row.REFRESH_UNDO_CHANGES);
    row.remove();
    while (rs.hasNext()) {
    int count = rs.getRowCount();
    System.out.println("Count is : "+ count);
    Row row = rs.next();
    System.out.println("Row === "+ row);
    if(row != null){
    row.refresh(Row.REFRESH_UNDO_CHANGES);
    row.remove();
    }
    }
    }
    Thanks.

    Issue resolved.
    Remove the selectionListener and selectedRowKeys attributes, then use the following code to get all the selected rows:
    RowSetIterator rs = dcIterator.getRowSetIterator();
    RowKeySet rks = this.t1.getSelectedRowKeys();
    Iterator rksIter = rks.iterator();
    while (rksIter.hasNext()) {
    List l = (List) rksIter.next();
    Key key = (Key)l.get(0);
    Row row = rs.getRow(key);
    }
    Thanks.

  • Value Set whose Data come from customize table with distinct record

    Dear All,
    I am new to Oracle EBS. Currently I am creating a value set whose data comes from a custom table which has 40 records including duplicates; a distinct on the column returns 27 records.
    Table XX_ROUND_SET
    Columns (Transactions_id,set_record)
    Total Record (40)
    Distinct Record (Set_Record --> 27)
    I just want to show only those 27 records in it.
    Thanks
    Rehan

    Hi Rehan,
    Please ignore my earlier update and treat this update as your solution.
    Method 1
    Create a VIEW based on the DISTINCT values; use the VIEW to create the VALUESET.
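    A minimal sketch of such a view, using the table and columns described in your post (the view name here is just an example):
    CREATE OR REPLACE VIEW xx_round_set_v AS   -- hypothetical view name
    SELECT DISTINCT transactions_id, set_record
    FROM   xx_round_set;
    The value set can then be defined against the view instead of the base table.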
    Method 2
    Paste the QUERY in the TABLE field with an alias name, and give the column name (with the alias name).
    (in your case )
    TABLE NAME : ( select distinct transactions_id, set_record from XX_ROUND_SET ) Y
    VALUE : Y.transactions_id
    HTH
    Sanjay

  • Error for datawindow get data from temp table with SP

    I created a SP to produce results. In this SP, the results are recorded in a temp table, #mytemp. The final SQL in this SP is like:
    select * from #mytemp
    Then I created a datawindow to display data from this SP and got the following error:
    Number 277 Select Error: There was a transaction active when exiting the stored procedure 'myproc'. The temporary table '#mytemp' was dropped in this transaction either explicitly or implicitly. This transaction has been aborted to prevent database corruption.
    How to resolve this problem?

    Sorry but could not reproduce the problem with the following products:
    PB 12.5.2 build 5652
    ASE client 15.7 EBF22688
    ASE server 15.7 EBF 19496
    Tested with ASE and SYC database profiles
    // Profile ASE
    SQLCA.DBMS = "ASE Adaptive Server Enterprise"
    SQLCA.Database = "repro"
    SQLCA.LogPass = <**********>
    SQLCA.ServerName = ""
    SQLCA.LogId = "sa"
    SQLCA.AutoCommit = False
    SQLCA.DBParm = ""
    // Profile SYC
    SQLCA.DBMS = "SYC Adaptive Server Enterprise"
    SQLCA.Database = "repro"
    SQLCA.LogPass = <**********>
    SQLCA.ServerName = "xxxxxxxx"
    SQLCA.LogId = "sa"
    SQLCA.AutoCommit = False
    SQLCA.DBParm = "Release='15'"
    Below is the source of my stored procedure.
    Prior to running the script, execute a data pipeline to create the ASA tables (EAS Demo DB V12.5) department and employee in your ASE server.
    CREATE PROCEDURE mySP
    AS
        BEGIN
           create table #employee_working (
             emp_id  int not null,
             manager_id  int null,
             emp_fname   varchar(20)    not null,
             emp_lname   varchar(20)    not null,
             dept_id int not null,
             dept_name varchar(40) not null )
          insert into #employee_working
             select employee.emp_id, employee.manager_id, employee.emp_fname, employee.emp_lname, employee.dept_id, department.dept_name
             from employee , department
             where employee.dept_id = department.dept_id
        select #employee_working.emp_id, #employee_working.manager_id, #employee_working.emp_fname, #employee_working.emp_lname,
        #employee_working.dept_id, #employee_working.dept_name
            from #employee_working
        END
    Jacob

  • Unloading Data from a Table with 43000 Rows

    Hi all,
    I am moving APEX to staging and I need to unload all the tables in development. The problem is that I had a hard time unloading the tables with large amounts of data. I tried downloading the CSV files through Object Browser but it screws up the format.
    Does anyone know how to solve this problem?
    Thanks very much!

    Open up SQL Developer or TOAD, have it build insert statements from your existing data in tables...
    In SQL Developer, right click on a table, select export data, select insert and follow through..
    Thank you,
    Tony Miller
    Webster, TX

  • Populating table with data from another table with fewer columns

    Hi,
    I have 2 tables:
    Table 1:
    Column 1
    Column 2
    Column 3
    Table 2:
    Column 1
    Column 2
    I want to populate Table 1 with all the data from Table 2, and populate Column 3 of Table 1 with numbers from a sequence. Is there a SQL stmt to do this? If not, what is the best way to do it in PL/SQL?
    Thank you
    Shailan

    CREATE SEQUENCE t1_seq
    START WITH 1
    INCREMENT BY 1
    CACHE 100;
    INSERT INTO t1( col1, col2, col3 )
      SELECT col1, col2, t1_seq.nextval
        FROM t2;
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
