Need advice on pagination query

Hello,
I have the following example of a pagination query, but I heard that there might be some drawbacks to this query. Could anyone please let me know any problems that might happen with this? The user will provide inputs for p and q. Thanks.
Select *
From (select rownum as n, * from table order by x, y, z)
Where n between p and q

Hi,
arizona9952 wrote:
Hello,
I have the following example of a pagination query, but I heard that there might be some drawbacks to this query. Could anyone please let me know any problems that might happen with this? The user will provide inputs for p and q. Thanks.
Select *
From (select rownum as n, * from table order by x, y, z)
Where n between p and q
If there is anything in the SELECT clause along with *, then * has to be qualified by a table name or alias. (See the example below.)
ORDER BY is applied after ROWNUM is assigned. For example, the sub-query might fetch x values 5, 8 and 3, and assign ROWNUM in that order, so you'd have
x   ROWNUM
5   1
8   2
3   3
so if you were looking for the 2nd row in sorted order (x = 5), you would actually get the row with x = 8.
I would use the analytic ROW_NUMBER function instead of ROWNUM.
WITH  got_r_num  AS
(
     SELECT  x.*     -- not just *
     ,       ROW_NUMBER () OVER (ORDER BY x, y, z)  AS r_num
     FROM    table_x  x
)
SELECT    *
FROM      got_r_num
WHERE     r_num  BETWEEN  :p
                 AND      :q
ORDER BY  r_num
;
To use ROWNUM, you would need 2 sub-queries. In the first, you would use ORDER BY; in the second, apply ROWNUM; and finally, in the main query, pick the rows you wanted.
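For reference, a minimal sketch of that 3-level ROWNUM version (same hypothetical table_x and bind variables as above; the middle WHERE ROWNUM <= :q is the usual stopkey optimization, an assumption beyond what the reply spells out):
SELECT  *
FROM    (
        SELECT  s.*
        ,       ROWNUM  AS r_num   -- assigned only after the inner ORDER BY has run
        FROM    (
                SELECT  *
                FROM    table_x
                ORDER BY x, y, z
                ) s
        WHERE   ROWNUM <= :q       -- stopkey: stop fetching once the page is full
        )
WHERE   r_num >= :p
ORDER BY r_num
;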

Similar Messages

  • Need advice on a query

    I've received this kind of thing as a category structure from a client:
    Cheese/Cheddar/Slices of Cheddar/Cheddar slices for beef burgars
    And from this I need to break it down into categories based on the /'s used. So, 'Cheese' is the top category, 'Cheddar' is the second, 'Slices of Cheddar' is the third, blah blah blah. I have a query that allows me to select up to the first /, giving me just Cheese, or I can get it to select up to the second slash, which gives me Cheese/Cheddar and so on. What I want to do is select only the text between the first and second /, or between the second and third /.
    I have this so far, but I am not sure what to do with it to remove text before previous slashes:
    $query_SubCategoryList = "SELECT SUBSTRING_INDEX(ProductCategories, '/', 1) AS CategoryName FROM ps4_products WHERE SUBSTRING_INDEX(ProductCategories, '/', 1) <> '' AND ProductLive = 1 GROUP BY CategoryName ORDER BY CategoryName";
    I don't even know what it would be termed as so I know what to search for on the web!
    Any advice is going to be appreciated.
    Thanks
    Mat.

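    A common MySQL idiom for "text between the Nth and (N+1)th delimiter" is to nest SUBSTRING_INDEX calls; here is an untested sketch against the same table and columns as the query above:
    -- Second-level category: the text between the 1st and 2nd '/'
    SELECT DISTINCT
           SUBSTRING_INDEX(SUBSTRING_INDEX(ProductCategories, '/', 2), '/', -1) AS CategoryName
    FROM   ps4_products
    WHERE  ProductLive = 1
      AND  ProductCategories LIKE '%/%'   -- only rows that actually have a second level
    ORDER  BY CategoryName;
    For the text between the 2nd and 3rd slash, change the 2 to a 3 and require two slashes in the LIKE pattern.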

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        (SELECT RID, rownum rnum
        FROM
            (SELECT rowid as RID
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate)
        WHERE rownum <= 100)
    WHERE rnum >= 1
        and RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*      -- Select all data from members table
    FROM members,           -- members table added to FROM clause
        (SELECT RID, rownum rnum
        FROM
            (SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate)
        WHERE rownum <= 100)
    WHERE rnum >= 1
        and RID = members.rowid           -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!
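    One generic option worth trying (an untested sketch, not from this thread) is to put Oracle's FIRST_ROWS(n) goal hint on the outermost query block, since optimizer-goal hints there apply to the whole statement rather than a single query block:
    SELECT /*+ first_rows(100) */ members.*
    FROM members,
        (SELECT RID, rownum rnum
        FROM
            (SELECT rowid as RID
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate)
        WHERE rownum <= 100)
    WHERE rnum >= 1
        and RID = members.rowid
    Whether the optimizer then walks joindate_idx instead of sorting still depends on its costing, so verify with an explain plan.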

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query, where the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but it does not affect the other query.

  • Help needed to optimize the query

    Help needed to optimize the query:
    The requirement is to select the record with the max eff_date from HIST_TBL, and that max eff_date should be >= '01-Jan-2007'.
    This is having high cost and taking around 15mins to execute.
    Can anyone help to fine-tune this??
       SELECT c.H_SEC,
                    c.S_PAID,
                    c.H_PAID,
                    table_c.EFF_DATE
       FROM    MTCH_TBL c
                    LEFT OUTER JOIN
                       (SELECT b.SEC_ALIAS,
                               b.EFF_DATE,
                               b.INSTANCE
                          FROM HIST_TBL b
                         WHERE b.EFF_DATE =
                                  (SELECT MAX (b2.EFF_DATE)
                                     FROM HIST_TBL b2
                                    WHERE b.SEC_ALIAS = b2.SEC_ALIAS
                                          AND b.INSTANCE =
                                                 b2.INSTANCE
                                          AND b2.EFF_DATE >= '01-Jan-2007')
                               OR b.EFF_DATE IS NULL) table_c
                    ON  table_c.SEC_ALIAS=c.H_SEC
                       AND table_c.INSTANCE = 100;

    To start with, I would avoid scanning HIST_TBL twice.
    Try this
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , eff_date
                   , instance
                from (
                        select sec_alias
                             , eff_date
                             , instance
                             , max(eff_date) over(partition by sec_alias, instance) max_eff_date
                          from hist_tbl b
                         where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
                             or eff_date is null
                      )
                where eff_date = max_eff_date
                  or eff_date is null
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;
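    If the rows with a NULL eff_date were not actually needed, a plain GROUP BY would get the same single scan of HIST_TBL with a simpler shape (a sketch under that assumption, same tables and columns as above):
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , instance
                   , max(eff_date) eff_date
                from hist_tbl
               where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
               group by sec_alias, instance
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;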

  • Need advice about certification: do J2SE 1.4 or wait for 1.5 to go out?

    I need advice here! I am studying for the Java Programmer certification (310-035) and I know now that the certification does not have any expiration date; instead it's version based. So, if I get a J2SE 1.4 certification now, soon it will be outdated... I guess!
    Does anyone know or have any idea of WHEN the Java 1.5 SDK will be available, and can anyone tell me how long it will take for a new 1.5 programmer certification to be available to the general public?

    Do both. 1.5 is far enough away that you do not want to wait for it.
    And besides, 1.5 has enough new stuff in it that you'll want to recertify anyway.

  • Need help in MDX query

    Hi All 
    I am new to the MDX language and need an MDX function/query on the cube to get the required output. Given below is the scenario with the data.
    I am maintaining the data in a table in the data mart with the given structure. We have the data at day and week level in a single table, with a granularity indicator, and Count is the measure group column. While loading the data into the mart table we populate the week
    key from the week table and the month key from the month table, and they are joined in the cube.
    We need to calculate the inventory for a particular month. If a user selects a particular month, the output would be Count = 30 as a measure called Closed and Count = 16 as a measure called Open.
    I need an MDX query to get this output.
    Granularity  Count  WeekKey  MonthKey
    Weekly       16     W1       M1
    Weekly       17     W1       M1
    Weekly       18     W1       M1
    Weekly       19     W1       M1
    Weekly       20     W1       M1
    Weekly       21     W1       M1
    Weekly       22     W1       M1
    Weekly       23     W2       M1
    Weekly       24     W2       M1
    Weekly       25     W2       M1
    Weekly       26     W2       M1
    Weekly       27     W2       M1
    Weekly       28     W2       M1
    Weekly       29     W2       M1
    Weekly       30     W2       M1
    Weekly       16     W3       M1
    Weekly       17     W3       M1
    Weekly       18     W3       M1
    Weekly       19     W3       M1
    Weekly       20     W3       M1
    Weekly       21     W3       M1
    Weekly       22     W3       M1
    Weekly       23     W4       M1
    Weekly       24     W4       M1
    Weekly       25     W4       M1
    Weekly       26     W4       M1
    Weekly       27     W4       M1
    Weekly       28     W4       M1
    Weekly       29     W4       M1
    Weekly       30     W4       M1
    Thanks in advance

    Hi Venkatesh,
    According to your description, you need to count the members with conditions in a particular month, right?
    In MDX, we can achieve the requirement by using the Count and Filter functions. I have tested it on the AdventureWorks cube; the sample query below is for your reference.
    with member [Measures].[ConditionalCount]
    as
    count(filter([Date].[Calendar].[Month].&[2008]&[2].children,[Measures].[Internet Order Count]>50))
    select {[Measures].[Internet Order Count],[Measures].[ConditionalCount]} on 0,
    [Date].[Calendar].[Date].members on 1
    from
    (select [Date].[Calendar].[Month].&[2008]&[2] on 0 from
    [Adventure Works])
    Reference
    http://msdn.microsoft.com/en-us/library/ms144823.aspx
    http://msdn.microsoft.com/en-us/library/ms146037.aspx
    If this is not what you want, please elaborate on your requirement, such as the detailed structure of your cube, so that we can analyze further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Need advice on on a Mac Pro 1,1 Media Center

    I currently have a 2009 Mac Mini running as my home media center, but I recently came by a FREE Mac Pro 1,1 and have decided to repurpose it as my media center so I can migrate my Mac Mini to my bedroom TV where it will live an easy life doing nothing but run Plex Home Theater, Netflix, and EyeTV. This machine falling into my lap was also quite timely because my 4-bay Drobo is running low on available expansion and another Drobo isn't in the budget at the moment.
    This vintage Mac Pro is running Lion 10.7.5, has 1 old and crusty 500GB hard drive, dual X5160 processors, 4GB RAM (one stick I'm pretty sure is toast, judging by the red light and the kernel panics), and the standard NVIDIA GeForce 7300 GT 256MB graphics card. It will be used primarily for the following: network storage for iPhoto and iTunes libraries, streaming video, Plex Media Server & Plex Home Theater, and Handbrake encoding. I also have a goal of safety of data for my movies, photos and music, as this machine will supplement my current Drobo storage.
    My plans are for a 128GB SSD boot drive installed in one of the PCIe slots and then to load up all 4 of the 3.5" drive bays with WD Green hard drives. I have also ordered 4GB of replacement RAM, so upon removal of the faulty unit I will have 7GB.
    Here is where I need advice, because I am not very familiar with RAID and the differences between hardware and software RAID. Am I better off getting four drives of the same size and setting them up as RAID 5 (I think) using Apple's RAID utility, or should I throw in three 1TB drives and then install a fourth 3TB or 4TB drive as a Time Machine backup for the other three?
    Should I upgrade the OSX to the technically unsupported latest version? Or is it not worth the trouble for this application?
    Also, is there any benefit to upgrading the graphics card to the ATI Radeon 5770? Would this yield improved image quality? I am outputting to a Denon AV Receiver and subsequently to a 100" projection screen, if that makes a difference. I also noticed the 5770 has an HDMI port, which would be nice, but not necessary since I can use a DVI converter and would still need to use the optical audio out anyway.
    Much obliged for any input

    My plans are for a 128GB SSD boot drive installed in one of the PCIe slots and then to load up all 4 of the 3.5" drive bays with WD Green hard drives. I have also ordered 4GB of replacement RAM, so upon removal of the faulty unit I will have 7GB.
    PCIe cards that use or support SSD are not bootable until you get to 2008 (and that is limited too).
    Green drives are not suited for any form of array, except maybe a NAS with WD Red.  A better option would be 3 x 2TB WD Blacks in a mirror; too many people only use two drives, and 3 is much easier, safer, and works better. You might even want to invest in the www.SoftRAID.com product.
    Best price and quality, got my 1,1 with 8 x 2GB (ideal is 4 or 8 DIMMs)
    2x2GB FBDIMM DDR2 667MHz @ $25
    http://www.amazon.com/BUFFERED-PC2-5300-FB-DIMM-APPLE-Memory/dp/B002ORUUAC/
    With the price of a 250GB SSD at $155, I'd go with that, or stick with $89 for 128GB.

  • I need to pass a query in form of string to DBMS_XMLQUERY.GETXML package...the parameters to the query are date and varchar ..please help me..

    I need to pass a query as a string to the DBMS_XMLQUERY.GETXML package; the parameters to the query are a date and a varchar. Please help me build the string. Below is the query and the output. (The string is building fine except that the parameters come through without quotes.)
    here is the procedure
    create or replace
    procedure temp(
        P_MTR_ID VARCHAR2,
        P_FROM_DATE    IN DATE ,
        P_THROUGH_DATE IN DATE ) AS
        L_XML CLOB;
        l_query VARCHAR2(2000);
    BEGIN
    l_query:=  'SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
          ||'mi''), ''00:'
          ||'00'',''24:'
          ||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
          ||'mi'')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = '
        ||P_MTR_ID||
       ' AND s_mtrch   = ''1''
        AND s_datetime BETWEEN TO_DATE('
        ||P_FROM_DATE||
        ',''DD-MON-YY'') AND (TO_DATE('
        ||P_THROUGH_DATE||
        ',''DD-MON-YY'') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = '
        ||P_MTR_ID||
        ' AND s_mtrch   = ''2''
        AND s_datetime BETWEEN TO_DATE('
        ||P_FROM_DATE||
        ',''DD-MON-YY'') AND (TO_DATE('
        ||P_THROUGH_DATE||
        ','' DD-MON-YY'') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)';
    SELECT DBMS_XMLQUERY.GETXML(L_QUERY) INTO L_XML FROM DUAL;
    INSERT INTO NK VALUES (L_XML);
    DBMS_OUTPUT.PUT_LINE('L_QUERY IS :'||L_QUERY);
    END;
    OUTPUT (the parameters are in bold; the issue is they are coming through without single quotes, otherwise the query is fine):
    L_QUERY IS :SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),'9999999.000') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),'$9,999,999.00') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),'9999999.000') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,'mm/dd/yyyy') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,'hh24:mi'), '00:00','24:00', TO_CHAR(s_datetime+.000011574,'hh24:mi')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = N3165 AND s_mtrch   = '1'
        AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,'DD-MON-YY') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = N3165 AND s_mtrch   = '2'
        AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,' DD-MON-YY') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)

    The correct way to handle this is to use bind variables.
    And use DBMS_XMLGEN instead of DBMS_XMLQUERY:
    create or replace procedure temp (
      p_mtr_id       in varchar2
    , p_from_date    in date
    , p_through_date in date
    )
    is
      l_xml   CLOB;
      l_query VARCHAR2(2000);
      l_ctx   dbms_xmlgen.ctxHandle;
    begin
      l_query:=  'SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
          ||'mi''), ''00:'
          ||'00'',''24:'
          ||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
          ||'mi'')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = :P_MTR_ID
        AND s_mtrch   = ''1''
        AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
                           AND (TO_DATE(:P_THROUGH_DATE,''DD-MON-YY'') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = :P_MTR_ID
        AND s_mtrch   = ''2''
        AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
                           AND (TO_DATE(:P_THROUGH_DATE,'' DD-MON-YY'') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)';
      l_ctx := dbms_xmlgen.newContext(l_query);
      dbms_xmlgen.setBindValue(l_ctx, 'P_MTR_ID', p_mtr_id);
      dbms_xmlgen.setBindValue(l_ctx, 'P_FROM_DATE', to_char(p_from_date, 'DD-MON-YY'));
      dbms_xmlgen.setBindValue(l_ctx, 'P_THROUGH_DATE', to_char(p_through_date, 'DD-MON-YY'));
      l_xml := dbms_xmlgen.getXML(l_ctx);
      dbms_xmlgen.closeContext(l_ctx);
      insert into nk values (l_xml);
    end;
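    A hypothetical invocation, using the meter id and date range shown in the original output (and assuming the SUMMARY_QTRHOUR and NK tables exist as in the thread):
    begin
      temp('N3165', to_date('01-JAN-13', 'DD-MON-YY'), to_date('31-JAN-13', 'DD-MON-YY'));
      commit;
    end;
    /
    Besides fixing the quoting problem, the bind-variable version is parsed once regardless of the parameter values, and it removes the SQL injection risk of concatenating user input into the query string.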

  • Help needed in writing a query

    Hi all,
    Following is the structure of my table.
    Data:
    Key            Number
    Id             Number
    Value          varchar2(100)
    activity_name  varchar2(100)
    Creation_Date  Date
    Eval_Point     varchar2(100)
    In the above table Id is the primary key.
    The column eval_point holds only two types of entries 'activation' or 'completion'
    The activity_name column holds the name of the activity.
    The sample entries in the table are as follows
    Key     Value  activity_name  Creation_Date               Id   Eval_Point
    260002  XXX    assign_1       2007-09-21 16:58:41.920000  951  activation
    260002  XXX    assign_1       2007-09-21 16:58:43.392000  953  completion
    260002  XXX    assign_2       2007-09-21 16:59:03.732000  956  activation
    260002  XXX    assign_2       2007-09-21 16:59:04.112000  954  completion
    260002  XXX    assign_3       2007-09-21 16:59:24.331000  958  activation
    260002  XXX    assign_3       2007-09-21 16:59:24.421000  957  completion
    I need to write a query which gives me data in the following format:
    value id start_date end_date
    XXX YYY 2007-09-21 16:58:41.920000 2007-09-21 16:58:43.392000
    where start_date is the creation date of the 'activation' and end_date is the creation_date of 'completion'.
    Can somebody help?
    -thanks
    lavanya

    Hello all,
    I would like to reframe my question.
    This is the output of the base query:
    select id,instance_key,sensor_target,activity_sensor,creation_date,eval_point from bpel_variable_sensor_values where instance_key=260002;
    953     260002     Assign_1     952     2007-09-21 16:58:43.392000     completion
    951     260002     Assign_1     952     2007-09-21 16:58:41.920000     activation
    956     260002     Assign_2     955     2007-09-21 16:59:03.732000     activation
    954     260002     Assign_2     955     2007-09-21 16:59:04.112000     completion
    958     260002     Assign_3     959     2007-09-21 16:59:24.331000     activation
    957     260002     Assign_3     959     2007-09-21 16:59:24.421000     completion
    962     260002     Assign_4     960     2007-09-21 16:59:44.741000     completion
    961     260002     Assign_4     960     2007-09-21 16:59:44.640000     activation
    964     260002     Assign_5     965     2007-09-21 17:00:05.290000     completion
    963     260002     Assign_5     965     2007-09-21 17:00:04.950000     activation
    I am trying out this query
    select a.instance_key,a.creation_date,b.creation_date,a.id
    from bpel_variable_sensor_values a, bpel_variable_sensor_values b
    where a.instance_key=b.instance_key
    and a.instance_key=260002
    and a.eval_point='activation'
    and b.eval_point='completion'
    and I am getting 25 entries, i.e. a cartesian product of the activation and completion rows:
    260002     2007-09-21 16:58:41.920000     2007-09-21 16:58:43.392000     951
    260002     2007-09-21 16:58:41.920000     2007-09-21 16:59:04.112000     951
    260002     2007-09-21 16:58:41.920000     2007-09-21 16:59:24.421000     951
    260002     2007-09-21 16:58:41.920000     2007-09-21 16:59:44.741000     951
    260002     2007-09-21 16:58:41.920000     2007-09-21 17:00:05.290000     951
    260002     2007-09-21 16:59:03.732000     2007-09-21 16:58:43.392000     956
    260002     2007-09-21 16:59:03.732000     2007-09-21 16:59:04.112000     956
    260002     2007-09-21 16:59:03.732000     2007-09-21 16:59:24.421000     956
    260002     2007-09-21 16:59:03.732000     2007-09-21 16:59:44.741000     956
    260002     2007-09-21 16:59:03.732000     2007-09-21 17:00:05.290000     956
    260002     2007-09-21 16:59:24.331000     2007-09-21 16:58:43.392000     958
    260002     2007-09-21 16:59:24.331000     2007-09-21 16:59:04.112000     958
    260002     2007-09-21 16:59:24.331000     2007-09-21 16:59:24.421000     958
    260002     2007-09-21 16:59:24.331000     2007-09-21 16:59:44.741000     958
    260002     2007-09-21 16:59:24.331000     2007-09-21 17:00:05.290000     958
    260002     2007-09-21 16:59:44.640000     2007-09-21 16:58:43.392000     961
    260002     2007-09-21 16:59:44.640000     2007-09-21 16:59:04.112000     961
    260002     2007-09-21 16:59:44.640000     2007-09-21 16:59:24.421000     961
    260002     2007-09-21 16:59:44.640000     2007-09-21 16:59:44.741000     961
    260002     2007-09-21 16:59:44.640000     2007-09-21 17:00:05.290000     961
    260002     2007-09-21 17:00:04.950000     2007-09-21 16:58:43.392000     963
    260002     2007-09-21 17:00:04.950000     2007-09-21 16:59:04.112000     963
    260002     2007-09-21 17:00:04.950000     2007-09-21 16:59:24.421000     963
    260002     2007-09-21 17:00:04.950000     2007-09-21 16:59:44.741000     963
    260002     2007-09-21 17:00:04.950000     2007-09-21 17:00:05.290000     963
    Can somebody help me reduce these to 5 rows?
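    The cartesian product appears because every 'activation' row is joined to every 'completion' row. A sketch of the usual fix (assuming, as the data above suggests, exactly one activation and one completion per sensor_target within an instance) is to also equate the activity columns:
    select a.instance_key
         , a.sensor_target
         , a.creation_date  as start_date
         , b.creation_date  as end_date
         , a.id
      from bpel_variable_sensor_values a
         , bpel_variable_sensor_values b
     where a.instance_key  = b.instance_key
       and a.sensor_target = b.sensor_target   -- pair each activation with its own completion
       and a.instance_key  = 260002
       and a.eval_point    = 'activation'
       and b.eval_point    = 'completion';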

  • Need advice on video software.

    Need advice on video software.
    I currently use adobe elements 3 and have done so for a few years now with no problems. my os is XP and my system is a couple of years old, but we do have a brand new win7 machine in the house.
    I am currently looking at CyberLink PowerDirector 8 Ultra OR Adobe Premiere Elements 8. Reviews for both programs seem very good BUT, when I dig deeper into user reviews instead of editor reviews, I find problems.
    Major problems with the programs crashing all over the place, at start up etc., and still not getting along with Win7 machines? Major problems with drivers. Honestly, I do not want to have to jump through dozens of hoops to get any software to run. After I pay for it, it should run, period.
    Has anyone else here used both programs, and can you give an honest opinion of each?
    I am also asking these same questions on the cyberlink site.
    I would like to upgrade my video software to take advantage of the new features that are coming out but I really don't want a big headache trying to run one or the other. To be fair, when I bought adobe elements 3 I had also bought pinnacle, which has gathered dust since my first week with it, which is why elements was purchased. That was money wasted and I do not wish to repeat this. I would like to go with Premiere Elements 8 but remain very unsure.

    If your newer machine is Win7 64-bit, it might be worth waiting for SP-1 to be issued, and then hope that 64-bit drivers are fully included. The 64-bit drivers seem to be an issue at this point, and that will affect any NLE program.
    Also, and regardless of which particular program you choose, optimizing your computer, the OS, the hardware, and the resources management will go a very long way to ensuring success. Few programs can tax a computer as an NLE can - any NLE. Video and audio editing will stress your system like almost nothing else can, except for heavy-duty CAD or 3D work, though those are usually done only on specialized, optimized computers designed just for those applications.
    Not the specific advice that you seek, but it's the best that I can do.
    Good luck,
    Hunt

  • Need help on a query

    Hi gurus,
    I need to make one query for material sales. The query should contain the comparative sales (by date) made for the material group and distribution channel. I don't have any idea how to make this.
    I tried the following way: I used the 2LIS_11_VAHDR and 2LIS_11_VAITM DataSources (for the header and the items of the sales documents; in the header I put AUDAT, the document date of the sales order, used for the comparison). I made two cubes, each one having:
      first one:
                 - audat1 (characteristic)
                 - material group (characteristic)
                 - distribution channel (characteristic)
                 - net value1 (key)
       second one:
                 - audat2 (characteristic)
                 - material group (characteristic)
                 - distribution channel (characteristic)
                 - net value2 (key)
    After this I made a MultiProvider which contains the cubes. When I make the query using this MultiProvider,
    I put audat1 and audat2 on the selection; on the columns I put the distribution channel; and on the rows I put the material group, net value 1 and net value 2. (I need another row for the % = (net value1 - net value2) / 100, but I was not able to create the field.) Unfortunately, when I try to open the query using the Analyzer, it gives the error: "variable 0REDAY1 is used for two different characteristic".
    I need to know if there are other possibilities for this kind of query, or how to solve the error.
    Thank you

    Hi Gabriel,
    Just check in Query Designer whether there are any variables created in filters or key figure restrictions, or in Restricted Key Figures.
    If so, check whether more than one variable is defined on the same characteristic.
    Like 0comp_code --> variable 1 in filters
    0comp_code --> variable 2 in RKFs.
    This is the reason you get the error in the BEx Analyzer.
    Regards,
    Ravi Kanth

  • I need advice:  I love my apple TV.  But my laptop at home, only has a 750gig hard drive.  Is there a possibility of having all my media on an external hardrive to still connect to the Apple TV?

    I need advice:  I love my apple TV.  But my laptop at home, only has a 750gig hard drive.  Is there a possibility of having all my media on an external hardrive to still connect to the Apple TV?
    Is there a hard drive from which iTunes can read the media, so that I will not need to keep adding and deleting media in iTunes because of the lack of space on my laptop?

    Just get an external HD and point your iTunes library to that drive; then you'll just keep all your media on this external drive.

  • JMS to Synchronous Web Service Scenario (Need Advice to use BPM or Not)

    Hi Experts,
    We have a scenario where we fetch data from a JMS sender and send it to MDM web services.
    We want the files processed in such a way that only after we get a response back from the web service saying it was successful do we process the next file.
    We would like advice on whether we can use BPM for this.
    We are thinking of having a BPM like this:
    Receive Step (async) - Synchronous Send Step (with wait) - Stop
    The problem with this is that when processing huge files the processing time of the queue takes very long and sometimes we get SYSFAIL.
    Would anyone please advise how we can approach this scenario, either using BPM or without it.
    Also, can we use multiple queues or the multiple-instances concept for this scenario?
    Please Advice.
    Thanks in Advance
    Regards
    J

    Hi Prateek,
    Thank you very much for your quick reply.
    The response from Webservice does not need to be sent anywhere.
    We just want that after receiving the response back from the web service (SOAP), only then is the next file processed.
    Can we control something from the sender JMS adapter side as well, since it is picking up all the files and all the files wait at the BPE until each one gets processed?
    Is this possible without BPM or with BPM?
    Please advise what the possible steps would be in order to achieve it through BPM or without BPM.
    Thanks and Regards,
    J

  • Major Issues with installing 4tb internal. Really need advice please

    In the process of a long needed upgrade to my 2010 Mac Pro Quad core 2.8 and have run into a serious headache that I really need advice on so I can get back to work. I've already spent 2 days dealing with all of this...
    Just did a new SSD install and Migration which went fine, but I'm also updating the rest of the internal drives. My main (non-boot) drive was being upgraded from a Seagate Barracuda 3TB to a 4TB Deskstar NAS (it was showing as compatible with my desktop, see links below). My 3TB ran fine for years but now, due to it being heavily used, I want to switch to a new drive, and I can also use a bit more space.
    The issue I'm running into is that initially on boot, my system was telling me it couldn't recognize the disk and it wasn't even showing up in Disk Utility. I had to purchase a SATA USB 3 kit to attach it as an external to get it to show which worked without problem. With the USB kit I was then able to partition it (GUID, Mac OS extended journaled) and format it properly. After reinserting the drive into my tower as an internal it failed to show again. After a few attempts of restarts and trying various bays it popped up but showed as a non formatted drive and I was told again that the system didn't recognise it. I was then given the option to initialise and was then actually able to then format and partition it though Disk Utility while it was installed as an internal.
    Figured that was problem solved, but when I went to check the drive and was getting ready to transfer files over, I noticed that Disk Utility was only allowing the First Aid and Partition options but not Erase, RAID, or Restore, which I'd never seen before. I then also noticed that none of the drive connection info was in the same format, nor will it even provide drive bay info, connection info, or read/write status (see screen shots). This is what I can't figure out and really need to clarify before I put full trust into using this drive.
    Any info would be greatly appreciated...
    Deskstar 4tb internal info which is installed in Bay 2
    3tb Seagate which I trying to retire and transfer from.
    Here are the web links to the Deskstar 4TB drive and the compatibility list, but support isn't allowing me to add direct links, so add the www. beforehand.
    (Drive - eshop.macsales.com/item/HGST/0S03664/) (compatibility list - eshop.macsales.com/Descriptions/specs/Framework.cfm?page=macpromid2010.html).

    What OSX version?
    Disk Utility in later versions of ML, and I think Mavericks, has problems formatting 4 TB internal drives.
    http://forums.macrumors.com/showthread.php?t=1661024
    http://apple.stackexchange.com/questions/112266/why-doesnt-my-mac-pro-see-my-new-4tb-sata-drive

  • I'm desperately needing advice to a common question.  I use Quicken and love it.  But the Mac version is not as great as the PC.   Has anyone installed it by segmenting their Mac with Parallels or Fusion or Boot camp.  If so, which one do you recommend.

    I'm desperately needing advice.  New Mac.  Used Quicken on my PC.  Researched all software for financial programs and Quicken is still the most recommended.  I want to use Quicken on my Mac.  The Mac version is not highly rated, so I would need to partition my Mac.  Has anyone done this for their Quicken program and, if so, which partitioning program did you use - Parallels, VMware Fusion, or Boot Camp?
    Thx

    Lisa Ellies-Laye wrote:
    Thanks. Hadn't heard of it. Is there any concern installing this free program on my Mac? Have you used it? Apart from being free, is there any other advantage over Parallels and VMware Fusion?
    VirtualBox is safe and well developed; it offers similar or identical features to the paid competition. It may be a little less polished, but that's all.
    Download and try it out, nothing to lose (except time).
