Need help to understand clusters

Hi,
I am a SQL developer and need a little help understanding what clusters are. I have never had to create a cluster. Can someone please help me with a simple example of what a cluster is and in which situations we would create one?
Thanks
Shantanu

>
I am a SQL developer and need a little help understanding what clusters are. I have never had to create a cluster. Can someone please help me with a simple example of what a cluster is and in which situations we would create one?
>
A cluster is used to store data from more than one table together. The data would typically share the primary key. For example, the SCOTT schema's DEPT and EMP data is related based on the DEPTNO value, so DEPTNO could be the cluster key.
If you created a cluster and then created copies of those tables in the cluster (e.g. myDEPT and myEMP), Oracle would store the dept and emp data for the same DEPTNO value together, even in the same block.
Then when you query dept and emp data using DEPTNO, it is faster to retrieve the DEPT and EMP data for that department, since it is colocated in the same blocks.
The drawback is that when you only want data from one of the tables (e.g. EMP), Oracle has to skip over the DEPT data, since some of it will be in the same blocks as the EMP data.
So clusters and clustered tables are most useful when you always query multiple clustered tables together using the cluster key. For some other operations they can be very inefficient.
See the Admin Guide for an example using the DEPT and EMP tables:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/clustrs003.htm#ADMIN11747
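For a concrete picture, here is a minimal sketch based on the myDEPT/myEMP example above (the cluster name, column definitions and SIZE value are only illustrative):
-- Create the cluster keyed on the shared column
CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2))
  SIZE 600;

-- An indexed cluster needs a cluster index before rows can be inserted
CREATE INDEX idx_emp_dept_cluster ON CLUSTER emp_dept_cluster;

-- Create the tables in the cluster; rows sharing a DEPTNO are stored together
CREATE TABLE mydept (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14),
  loc    VARCHAR2(13))
  CLUSTER emp_dept_cluster (deptno);

CREATE TABLE myemp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2) REFERENCES mydept)
  CLUSTER emp_dept_cluster (deptno);

-- Queries that access both tables by the cluster key benefit the most
SELECT d.dname, e.ename
  FROM mydept d JOIN myemp e ON e.deptno = d.deptno
 WHERE d.deptno = 10;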

Similar Messages

  • Need help to understand INSTALL command based on GPspec 2.1.1

    Hi Friends..
    Sorry in advance, I couldn't understand some of the commands in GPSpec 2.1.1; I'm pretty new to this field.
    So I need your help to understand the commands.
    Firstly, I want to know what exactly the DATA is that is referred to by the INSTALL command.
    Here is what is outlined in GPSpec 2.1.1 for the INSTALL command (Chapter 9):
    CLA = '80' or '84'
    INS  = 'E6'              //INSTALL
    P1   = 'xx'               //Reference control parameter P1
    P2   = '00'               //Reference control parameter P2
    Lc   = 'xx'                //Length of data field
    Data 'xxxx…'           //Install data (and MAC if present)
    Le   = '00'
    What exactly is the data that is referred to?
    I thought the data referred to could be applets, packages, or CAP files.
    So how do I know the Lc and the sequence of data bytes for the applets/packages/CAP files?
    I found this LINK.
    On that website I read an example of the INSTALL command.
    Here's the example mentioned in the link above:
    INSTALL FOR LOAD
      84 E6 02 00 2B 10 A0 00 00 00 18 50 00 00 00 00 00 00 52 41 44 50 00 00 0E EF 0C C6 02 00 00 C8 02 00 00 C7 02 00 00 00 2A 8B 3A 01 3C 8E FD A4 (00)
      00, 90 00  [Normal ending of the command.]
    Sorry, I still don't understand it.
    Please help me regarding this..
    Sorry, perhaps this question sounds silly..
    Thanks in advance..

    Hi,
    As a hint, you want to use INSTALL for install and make selectable. You can have a look at the APDU that GPShell sends through for the final install command (hint: it will be a part of the load command). You can also execute install_for_load, load and install_for_install in GPShell to see these commands. The GP card spec is a little confusing for the INSTALL command so tracing GPShell may help you understand it.
    Cheers,
    Shane

  • Need help in understanding the error ORA-01843: not a valid month - ECX_ACT

    Hello All,
    We need help in understanding the Transaction Monitor -> Processing Message (error "ORA-01843: not a valid month - ECX_ACTIONS.GET_CONVERTED_DATE").
    We also need to know how to enable the log for Transaction Monitor -> Processing Logfile.
    Actually we are trying to import the Purchase Order XML (OAG) into eBusiness Suite via BPEL Process Manager using the Oracle Applications Adapter. The process is working fine with expected payload until it reaches the XML Gateway Transaction Monitor, where we are getting this error.
    thanks
    muthu.

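    As background on the error itself (this does not answer the logging question): ORA-01843 usually means a date string is being converted with a format mask or NLS date language that does not match the incoming value. A minimal SQL sketch of the difference, with made-up values:
    -- May fail with ORA-01843 if the session's NLS_DATE_LANGUAGE does not
    -- recognize the month name, or if the mask does not match the string
    SELECT TO_DATE('15-JAN-2011', 'dd-mon-yyyy') FROM dual;

    -- Safer: pass a format mask (and, if needed, an explicit date language)
    -- that matches the payload exactly
    SELECT TO_DATE('15-01-2011', 'dd-mm-yyyy') FROM dual;
    SELECT TO_DATE('15-JAN-2011', 'dd-mon-yyyy',
                   'NLS_DATE_LANGUAGE = AMERICAN') FROM dual;
    In the XML Gateway case, comparing the date values in the OAG payload with the date format the gateway is configured to expect is a reasonable first check.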

  • I need help in understanding the customization of Landscape in R/3.

    I need help in understanding the customization of the landscape in R/3, i.e. the setup of the SAP landscape from an SAP SD point of view. As an SAP SD consultant, what would my role be in customizing the landscape servers? Help needed. Thanks

    Hi,
    In a standard SAP project implementation, the 3 standard transport procedures are:
    Development System (DEV) --> QA System (QAS) --> Production System (PRD)
    In the above structure, the Training Client (TRN) could be made from a copy of PRD (once real-time master data has become available) or from the QA system (where configuration has been tested in the DEV client, and the master data is uploaded manually for training purposes).
    Sandbox (standalone): This can be refreshed with Golden Client to reflect the latest configuration performed to facilitate the development/testing purposes.
    -Development (DEV): Where all system configurations and development activities are carried out.
    -Quality Assurance (QAS): Where functional testing is carried out. The System Integration Testing (carried out by the Development Team) and the User Acceptance Testing (carried out by XXX appointed personnel) are carried out on this server.
    -Training (TRN): End Users are trained on this server.
    -Production (PRD): After the System is commissioned all data entry and administrative functions will be carried out in this server.
    This is by far the standard landscape architecture that is adopted and practiced in most implementations.
    Hope the above helps.
    Thanks.

  • Need help in understanding why so many gets and I/O

    Hi there,
    I have a sql file somewhat similar in structure to below:
    delete from emp;   -- changed to: truncate table emp;
    delete from dept;  -- changed to: truncate table dept;
    insert into emp select a, b, c from temp_emp, temp_dept where temp_emp.id = temp_dept.emp_id;
    update emp set emp_name = (select emp_name from dept where emp.id = dept.emp_id);
    commit;            -- only at the end
    The above file takes about 9-10 hrs to complete its operation, and
    the values from v$sql for the statement
    update emp set emp_name=(select emp_name from dept where emp.id=dept.emp_id);
    are as below:
    SHARABLE_MEM=18965  PERSISTENT_MEM=8760  RUNTIME_MEM=7880  SORTS=0  LOADED_VERSIONS=1
    OPEN_VERSIONS=0  USERS_OPENING=0  FETCHES=0  EXECUTIONS=2  PX_SERVERS_EXECUTIONS=0
    END_OF_FETCH_COUNT=2  USERS_EXECUTING=0  LOADS=2  FIRST_LOAD_TIME=2011-05-10/21:16:44  INVALIDATIONS=1
    PARSE_CALLS=2  DISK_READS=163270378  DIRECT_WRITES=0  BUFFER_GETS=164295929  APPLICATION_WAIT_TIME=0
    CONCURRENCY_WAIT_TIME=509739  CLUSTER_WAIT_TIME=0  USER_IO_WAIT_TIME=3215857850  PLSQL_EXEC_TIME=0  JAVA_EXEC_TIME=0
    ROWS_PROCESSED=20142  COMMAND_TYPE=6  OPTIMIZER_MODE=ALL_ROWS  OPTIMIZER_COST=656
    OPTIMIZER_ENV=E289FB89A4E49800CE001000AEF9E3E2CFFA331056414155519421105555551545555558591555449665851D5511058555155511152552455580588055A1454A8E0950402000002000000000010000100050000002002080007D000000000002C06566001010000080830F400000E032330000000001404A8E09504646262040262320030020003020A000A5A000
    OPTIMIZER_ENV_HASH_VALUE=4279923421  PARSING_USER_ID=50  PARSING_SCHEMA_ID=50  PARSING_SCHEMA_NAME=APPS  KEPT_VERSIONS=0
    ADDRESS=00000003CBE5EF50  TYPE_CHK_HEAP=00  HASH_VALUE=1866523305  OLD_HASH_VALUE=816672812  PLAN_HASH_VALUE=1937724149
    CHILD_NUMBER=0  SERVICE=SYS$USERS  SERVICE_HASH=0  MODULE=01@</my.sql  MODULE_HASH=-2038272289
    ACTION=  ACTION_HASH=-265190056  SERIALIZABLE_ABORTS=0  OUTLINE_CATEGORY=  CPU_TIME=9468268067
    ELAPSED_TIME=10420092918  OUTLINE_SID=  CHILD_ADDRESS=00000003E8593000  SQLTYPE=6  REMOTE=N
    OBJECT_STATUS=VALID  LITERAL_HASH_VALUE=0  LAST_LOAD_TIME=2011-05-11/10:23:45  IS_OBSOLETE=N  CHILD_LATCH=5
    SQL_PROFILE=  PROGRAM_ID=0  PROGRAM_LINE#=0  EXACT_MATCHING_SIGNATURE=1.57848E+19  FORCE_MATCHING_SIGNATURE=1.57848E+19
    LAST_ACTIVE_TIME=5/12/2011 4:39  BIND_DATA=  TYPECHECK_MEM=0
    1) How do I rewrite this legacy script, and what should be done to improve performance?
    2) Should I use PL/SQL to rewrite it?
    3) Also, help me understand why a simple update statement is doing so many buffer gets and reads. Is this the read-consistency trap, since I am not committing anywhere in between, or is it actually doing that much work?
    (Assume the dept table has columns emp_name and emp_id as well.)

    update emp set emp_name=(select emp_name from dept where emp.id=dept.emp_id);
    I guess that these are masked table names? Nobody would have emp_name in a dept table.
    Can you re-format the output, using "code" tags?
    Hemant K Chitale
    Edited by: Hemant K Chitale on May 12, 2011 12:44 PM
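    For question 1, and without knowing the real table definitions, one common direction is to replace the correlated UPDATE with a single MERGE. This is only a rough sketch using the masked emp/dept column names from the post; whether it applies depends on the real schema and on emp_id being unique in dept:
    -- One UPDATE per matched row, driven by a join instead of a correlated subquery
    MERGE INTO emp e
    USING dept d
       ON (e.id = d.emp_id)
    WHEN MATCHED THEN
      UPDATE SET e.emp_name = d.emp_name;

    COMMIT;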

  • Need help for understanding the behaviour of these 2 queries....

    Hi,
    I need your help in understanding the behaviour of the following two queries.
    The requirement is to repeat the values of a column in a table a random number of times.
    E.g. a table xyz is created like this:
    create table xyz as
    select 'A' || rownum my_col
    from all_objects
    where rownum < 6;
    my_col
    A1
    A2
    A3
    A4
    A5
    I want to repeat each of these values (A1, A2, ... A5) multiple times - randomly decided. I have written the following query:
    with x as (select my_col, trunc(dbms_random.value(1,6)) repeat from xyz),
    y as (select level lvl from dual connect by level < 6)
    select my_col, lvl
    from x, y
    where lvl <= repeat
    order by my_col, lvl
    It gives output like
    my_col lvl
    A1     1
    A1     3
    A1     5
    A2     1
    A2     3
    A2     5
    A3     1
    A3     3
    A3     5
    A4     1
    A4     3
    A4     5
    A5     1
    A5     3
    A5     5
    Here in the output, I am not getting rows like
    A1     2
    A1     4
    A2     2
    A2     4
    Also, it has generated the same set of records for all the values (A1, A2,...,A5).
    Now, if I store the randomly-decided value in the table like ---
    create table xyz as
    select 'A' || rownum my_col, trunc(dbms_random.value(1,6)) repeat
    from all_objects
    where rownum < 6;
    my_col repeat
    A1     4
    A2     1
    A3     5
    A4     2
    A5     2
    And then run the query,
    with x as (select my_col, repeat from xyz),
    y as (select level lvl from dual connect by level < 6)
    select my_col, lvl
    from x, y
    where lvl <= repeat
    order by my_col, lvl
    I get exactly the output I want:
    my_col lvl
    A1     1
    A1     2
    A1     3
    A1     4
    A2     1
    A3     1
    A3     2
    A3     3
    A3     4
    A3     5
    A4     1
    A4     2
    A5     1
    A5     2
    Why does the first approach not generate such output?
    How can I get such a result without storing the repeat values?

    If I've understood your requirement, the below will achieve it:
    SQL> create table test(test varchar2(10));
    Table created.
    SQL> insert into test values('&test');
    Enter value for test: bob
    old   1: insert into test values('&test')
    new   1: insert into test values('bob')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: terry
    old   1: insert into test values('&test')
    new   1: insert into test values('terry')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: steve
    old   1: insert into test values('&test')
    new   1: insert into test values('steve')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: roger
    old   1: insert into test values('&test')
    new   1: insert into test values('roger')
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select lpad(test,(ceil(dbms_random.value*10))*length(test),test) from test;
    LPAD(TEST,(CEIL(DBMS_RANDOM.VALUE*10))*LENGTH(TEST),TEST)
    bobbobbobbobbobbobbobbobbobbob
    terryterry
    stevestevesteve
    rogerrogerrogerrogerrogerrogerrogerrogerroger
    You can alter the value of 10 in the SQL if you want the potential for a higher number of repetitions.
    Andy
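    Coming back to the original question: the first query most likely behaves that way because the WITH subquery is merged into the main query, so dbms_random.value is re-evaluated every time the lvl <= repeat predicate is checked rather than once per row. One way to get the wanted output without storing the repeat values is a sketch like the following, which pins the random value per row (the materialize hint is undocumented but widely used) and restricts the CONNECT BY to the current row:
    with x as (select /*+ materialize */
                      my_col,
                      trunc(dbms_random.value(1,6)) repeat
                 from xyz)
    select my_col, level lvl
      from x
    connect by level <= repeat
           and prior my_col = my_col
           and prior sys_guid() is not null
     order by my_col, lvl;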

  • Need help in understanding FA acquisition via Internal Order

    Hi Gurus
    I need your help in understanding FA acquisition via an internal order. The process we are following is that we create an AUC using the AUC asset class and enter that AUC number in the settlement rule while creating the IO. Once the IO budget is approved, we release the IO. Once GR is completed, we do the invoice receipt for the PO, followed by settlement of the IO against the AUC. Afterwards, we create a fixed asset in AS01, enter this asset number in the settlement rule for the AUC in AIAB, and settle the AUC to the fixed asset for the costs.
    My question is that during all this process, I don't see the PO information in the AUC record. When I display the AUC, under the Environment tab I see the Purchase order link, but when I click it there is nothing there. The reason could be that we are creating the AUC separately and not from within the IO where it says create AUC. I am not sure what the best way is for the whole process.
    I would be thankful if you can guide me.
    Thanks,
    Shalu

    Hi
    Can someone please help me with this issue?
    Thanks,
    Shalu

  • Need help on understanding COLUMN_IID, COLUMN_ID and ROW_IID

    Dear All,
    I need some help in understanding the below three things.
    COLUMN_IID, COLUMN_ID and ROW_IID
    First let me write down the requirement :
    I need to keep track of the scores on various status changes.
    In the design of the template, we have something called 'Company Objectives' and 'Team Objectives' and 'Individual Objectives'.
    And under every heading, there are some objectives and a score beside it.
    When the document is with the employee, he/she decides the score (0 - not started, 5 - completed). When the employee submits the document, it goes to the manager. The manager may change the score against an objective.
    Now, the requirement is that with every change of status and substatus, I need to take note of the score. Is this value stored in any standard table? I checked the table HRHAP_FURTHER but I cannot
    When I check the function modules 'HRHAP_DOCUMENT_GET_DETAIL' and 'HRHAP_DOC_FURTHER_READ', I see those values, but against various ROW_IID, COLUMN_IID and COLUMN_ID values. I need to know how to catch the ROW_IID, COLUMN_IID and COLUMN_ID for a particular objective. And what is the concept of the ROW_IID, COLUMN_IID and COLUMN_ID?
    Please let me know if something is not very clear. I will try to give some more explanation.

    Hi,
    For context, in case it was not clear in the original message, we are talking about Performance Management.
    As you know, the documents are based on appraisal templates. On document creation this template is read and the different elements are generated. As we can have the same element type/ID multiple times in a template/document, we need something to uniquely identify them. This is done via the ROW_IID.
    Then for each element we can define which columns we use. A column is identified by COLUMN_ID, which is unique at the template configuration level. But on the document level this is not the case: because the Part Appraiser columns (PAPP/PFGT) are multiplied by the number of part appraisers in the document, the COLUMN_ID is not unique anymore. So we need to give them a unique ID as well, which is the COLUMN_IID.
    That's the short answer; I will write a longer document on it one of these days in my blog.
    Regards and Groetjes,
    Maurice Hagen

  • Error Posting IDOC: need help in understanding the following error

    Hi ALL
    Can you please help me understand the following error, encountered while the message was trying to post an IDoc?
    where SAP_050 is the RFC destination created to post IDOCs
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    <!--  Call Adapter  -->
    <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>XIAdapter</SAP:Category>
      <SAP:Code area="IDOC_ADAPTER">ATTRIBUTE_IDOC_RUNTIME</SAP:Code>
      <SAP:P1>FM NLS_GET_LANGU_CP_TAB: Could not determine code page with SAP_050 Operation successfully executed FM NLS_GET_LANGU_CP_TAB</SAP:P1>
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>Error: FM NLS_GET_LANGU_CP_TAB: Could not determine code page with SAP_050 Operation successfully executed FM NLS_GET_LANGU_CP_TAB</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Your help is greatly appreciated.............Thank you!

    Hi Patrick,
    Check the authorizations assigned to the user you used in the RFC destination. If there are not enough authorizations, it is not possible to post the IDocs.
    Also refer to Note 747322.
    Regards,
    Prakash

  • E-Rows = NULL and A-Rows=42M? Need help in understanding why.

    Hi,
    Oracle Standard Edition 11.2.0.3.0, CPU Oct 2012, running on Windows 2008 R2 x64. I am using the Oracle 10g syntax for the WITH clause as the query will also run on Oracle 10gR2. I do not have an Oracle 10gR2 environment at hand to check whether it behaves the same there.
    The following query is beyond me. It takes around 2 minutes to return the "computed" result set of 66 rows.
    SQL> WITH dat AS
      2          (SELECT 723677 vid,
      3                  243668 fid,
      4                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
      5                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
      6             FROM DUAL
      7           UNION ALL
      8           SELECT 721850,
      9                  243668,
    10                  TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
    11                  TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
    12             FROM DUAL
    13           UNION ALL
    14           SELECT 723738,
    15                  243668,
    16                  TO_DATE ('16.03.2013', 'dd.mm.yyyy'),
    17                  TO_DATE ('  04.04.2013', 'dd.mm.yyyy')
    18             FROM DUAL)
    19      SELECT /*+ GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
    20        FROM dat
    21  CONNECT BY LEVEL <= maxdt - mindt + 1
    22  order by fid, vid, dtshow;
    66 rows selected.
    SQL>
    SQL> SELECT * FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  9c4vma4mds6zk, child number 0
    WITH dat AS         (SELECT 723677 vid,                 243668 fid,
                TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
    TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt            FROM DUAL
    UNION ALL          SELECT 721850,                 243668,
       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),                 TO_DATE ('
    22.03.2013', 'dd.mm.yyyy')            FROM DUAL          UNION ALL
        SELECT 723738,                 243668,                 TO_DATE
    ('16.03.2013', 'dd.mm.yyyy'),                 TO_DATE ('  04.04.2013',
    'dd.mm.yyyy')            FROM DUAL)     SELECT /*+
    GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
        FROM dat CONNECT BY LEVEL <= maxdt - mindt + 1 order by fid, vid,
    dtshow
    Plan hash value: 1865145249
    | Id  | Operation                              | Name | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                       |      |      1 |        |     66 |00:01:54.64 |       |       |          |
    |   1 |  SORT UNIQUE                           |      |      1 |      3 |     66 |00:01:54.64 |  6144 |  6144 | 6144  (0)|
    |   2 |   CONNECT BY WITHOUT FILTERING (UNIQUE)|      |      1 |        |     42M|00:01:04.00 |       |       |          |
    |   3 |    VIEW                                |      |      1 |      3 |      3 |00:00:00.01 |       |       |          |
    |   4 |     UNION-ALL                          |      |      1 |        |      3 |00:00:00.01 |       |       |          |
    |   5 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   6 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   7 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    --------------------------------------------------------------------------------------------------------------------------
    If I take out one of the UNION queries, the query returns in under 1 second.
    SQL> WITH dat AS
      2          (SELECT 723677 vid,
      3                  243668 fid,
      4                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
      5                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
      6             FROM DUAL
      7           UNION ALL
      8           SELECT 721850,
      9                  243668,
    10                  TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
    11                  TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
    12             FROM DUAL)
    13      SELECT /*+ GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
    14        FROM dat
    15  CONNECT BY LEVEL <= maxdt - mindt + 1
    16  order by fid, vid, dtshow;
    46 rows selected.
    SQL>
    SQL> SELECT * FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  1d2f62uy0521p, child number 0
    WITH dat AS         (SELECT 723677 vid,                 243668 fid,
                TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
    TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt            FROM DUAL
    UNION ALL          SELECT 721850,                 243668,
       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),                 TO_DATE ('
    22.03.2013', 'dd.mm.yyyy')            FROM DUAL)     SELECT /*+
    GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
        FROM dat CONNECT BY LEVEL <= maxdt - mindt + 1 order by fid, vid,
    dtshow
    Plan hash value: 2232696677
    | Id  | Operation                              | Name | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                       |      |      1 |        |     46 |00:00:00.01 |       |       |          |
    |   1 |  SORT UNIQUE                           |      |      1 |      2 |     46 |00:00:00.01 |  4096 |  4096 | 4096  (0)|
    |   2 |   CONNECT BY WITHOUT FILTERING (UNIQUE)|      |      1 |        |     90 |00:00:00.01 |       |       |          |
    |   3 |    VIEW                                |      |      1 |      2 |      2 |00:00:00.01 |       |       |          |
    |   4 |     UNION-ALL                          |      |      1 |        |      2 |00:00:00.01 |       |       |          |
    |   5 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   6 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    26 rows selected.
    What I cannot understand is why E-Rows is NULL for the "CONNECT BY WITHOUT FILTERING (UNIQUE)" step and A-Rows shoots up to 42M in the first case. The behaviour is the same for any number of UNION branches above two.
    Can anyone please help me understand this and aid in tuning this accordingly? Also, I would be happy to know if there are better ways to generate the missing date range.
    Regards,
    Satish

    Maybe this? Without any PRIOR conditions, every row at each level connects to every row that still satisfies the LEVEL condition, so the intermediate result grows exponentially with the number of levels (that is where the 42M A-Rows come from) before the SORT UNIQUE collapses it back to 66 rows. Restricting the hierarchy to the current row avoids that:
    WITH dat AS
                (SELECT 723677 vid,
                        243668 fid,
                        TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
                        TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
                   FROM DUAL
                 UNION ALL
                 SELECT 721850,
                        243668,
                       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
                       TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
                  FROM DUAL
                UNION ALL
                SELECT 723738,
                       243668,
                       TO_DATE ('16.03.2013', 'dd.mm.yyyy'),
                       TO_DATE ('  04.04.2013', 'dd.mm.yyyy')
                  FROM DUAL)
           SELECT  vid, fid, mindt - 1 + LEVEL dtshow
             FROM dat
      CONNECT BY LEVEL <= maxdt - mindt + 1
          and prior vid = vid
          and prior fid = fid
          and prior sys_guid() is not null
      order by fid, vid, dtshow;
    66 rows selected.
    Elapsed: 00:00:00.03

  • Duplicate partitions, need help to understand and fix.

    So I was looking for a USB drive that I had plugged in, went into /media/usbhd-sdb4/miles, and realized all of its contents were from my home directory.
    So I created a random file in my home directory to see if it would also show up in /media/usbhd-sdb4/miles, and it did.
    Can someone help me understand what is happening?
    Also, can I fuse sdb4 and sdb2 into one, and partition it as my home directory without losing its contents?
    Below is some information that I think would be helpful.
    Thank you.
    [miles]> cd /media/usbhd-sdb4/
    [usbhd-sdb4]> ls -l
    total 20
    drwx------ 2 root root 16384 May 28 14:16 lost+found
    drwx------ 76 miles users 4096 Oct 15 00:42 miles
    [usbhd-sdb4]> cd miles
    [miles]> pwd
    /media/usbhd-sdb4/miles
    [miles]>
    lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 465.8G 0 disk
    ├─sda1 8:1 0 19.5G 0 part
    └─sda2 8:2 0 446.2G 0 part
    sdb 8:16 0 465.8G 0 disk
    ├─sdb1 8:17 0 102M 0 part /media/usbhd-sdb1
    ├─sdb2 8:18 0 258.9M 0 part [SWAP]
    ├─sdb3 8:19 0 14.7G 0 part /
    └─sdb4 8:20 0 450.8G 0 part /media/usbhd-sdb4
    sr0 11:0 1 1024M 0 rom

    Check your udev rules...

  • Need help with understanding HD

    Just picked up a new HD video camera and it seems technology has played a trick on me.
    My Sony HD camcorder imports the files as .m2ts files.
    When imported into Adobe Elements Organizer, the preview looks skinny and "squished".
    It seems like the camcorder is geared towards shooting a whole bunch of footage and then burning it to DVD in high def or normal resolution, whereas I like to record by clips, organize them, and then add them into Premiere Elements as needed. So I really don't understand what to do, or how to do it, to get the results I need.
    Do I need to manually convert each .m2ts file into an .avi?
    When I open an HD clip in Premiere, the size is right, but the preview is all blurry and jumpy.
    What is the best way for me to take one of my clips and have it ready to insert in a movie?
    Thanks - any help would be appreciated.

    This is aimed at Premiere Pro, but may help
    A link with many ideas about computer setup http://forums.adobe.com/thread/436215?tstart=0
    Work through all of the steps (ideas) listed at http://ppro.wikia.com/wiki/Troubleshooting
    If your problem isn't fixed after you follow all of the steps, report back with the DETAILS asked for in the FINALLY section, the questions at the end of the troubleshooting link... most especially the codec used... see Question 1

  • Need help in understanding the trace file

    Hi,
    I would need to understand:
    - the large number of fetches for the query
    - SQL*Net message to client and SQL*Net message from client being the same
    - latch: cache buffers chains
    The issue I am experiencing is a 6x delay for an unknown reason.
    Can somebody assist me?
    Thanks
    D
    SQL
    SELECT R_RO.*
    FROM
    (SELECT b.ID,b.ANSWERS,b.C_CLASSID,b.C_ID, b.FCI,b.FROM_ID,b.TCI,b.TO_ID,b.FFU, b.TFU,b.NOU,b.ISN,b.SC_NAME,
    b.DN_NAME,b.DN_ITEM_NUMBER,b.ISC, b.PRIORITY FROM VVP.RELATION b WHERE b.ID NOT IN (SELECT /*+ HASH_AJ */ SDE_DELETES_ROW_ID FROM VVP.D94 WHERE DELETED_AT IN (SELECT l.lineage_id FROM SDE.state_lineages l WHERE l.lineage_name = :source_lineage_name AND l.lineage_id <= :source_state_id) AND SDE_STATE_ID
    = :"SYS_B_0") UNION ALL SELECT a.ID,a.ANSWERS,a.C_CLASSID, a.C_ID,a.FCI,a.FROM_ID,a.TCI, a.TO_ID,a.FFU,a.TFU,a.NOU, a.ISN,a.SC_NAME,a.DN_NAME, a.DN_ITEM_NUMBER,a.ISC,a.PRIORITY FROM VVP.A94 a, SDE.state_lineages SL WHERE (a.ID, a.SDE_STATE_ID) NOT IN (SELECT /*+ HASH_AJ */ SDE_DELETES_ROW_ID,SDE_STATE_ID FROM VVP.D94 WHERE DELETED_AT IN (SELECT l.lineage_id FROM SDE.state_lineages l WHERE l.lineage_name = :source_lineage_name AND l.lineage_id <= :source_state_id) AND SDE_STATE_ID > :"SYS_B_1") AND a.SDE_STATE_ID = SL.lineage_id AND SL.lineage_name = :source_lineage_name AND SL.lineage_id <= :source_state_id ) R_RO WHERE (ID = :"SYS_B_2")
    call         count       cpu    elapsed       disk      query    current       rows
    Parse         3911      2.30       2.14          0          0          0          0
    Execute       3911      2.47       2.43          0          0          0          0
    Fetch         3911    268.96     270.60         28   15696558          0       3911
    total        11733    273.73     275.18         28   15696558          0       3911
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 1031
    Rows Row Source Operation
    1 VIEW (cr=3966 pr=0 pw=0 time=84973 us)
    1 UNION-ALL (cr=3966 pr=0 pw=0 time=84968 us)
    1 HASH JOIN ANTI (cr=3963 pr=0 pw=0 time=84707 us)
    1 TABLE ACCESS BY INDEX ROWID CONNECTION (cr=4 pr=0 pw=0 time=123 us)
    1 INDEX UNIQUE SCAN R94_SDE_ROWID_UK (cr=3 pr=0 pw=0 time=88 us)(object id 7404)
    8586 VIEW VW_NSO_2 (cr=3959 pr=0 pw=0 time=112274 us)
    8586 NESTED LOOPS (cr=3959 pr=0 pw=0 time=103686 us)
    3661 INDEX RANGE SCAN LINEAGES_PK (cr=7 pr=0 pw=0 time=67 us)(object id 307740)
    8586 INDEX RANGE SCAN D94_PK1 (cr=3952 pr=0 pw=0 time=70624 us)(object id 1637355)
    0 HASH JOIN ANTI (cr=3 pr=0 pw=0 time=248 us)
    0 NESTED LOOPS (cr=3 pr=0 pw=0 time=71 us)
    0 TABLE ACCESS BY INDEX ROWID A94 (cr=3 pr=0 pw=0 time=68 us)
    0 INDEX RANGE SCAN A94_ROWID_IX1 (cr=3 pr=0 pw=0 time=65 us)(object id 129281)
    0 INDEX UNIQUE SCAN LINEAGES_PK (cr=0 pr=0 pw=0 time=0 us)(object id 307740)
    0 VIEW VW_NSO_1 (cr=0 pr=0 pw=0 time=0 us)
    0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
    0 INDEX RANGE SCAN D94_PK1 (cr=0 pr=0 pw=0 time=0 us)(object id 1637355)
    0 INDEX UNIQUE SCAN LINEAGES_PK (cr=0 pr=0 pw=0 time=0 us)(object id 307740)
    Elapsed times include waiting on following events:
    Event waited on                     Times   Max. Wait  Total Waited
    SQL*Net message to client            3911        0.00          0.00
    SQL*Net message from client          3911        0.49         72.42
    latch: cache buffers chains          1434        0.00          0.07
    latch: shared pool                      1        0.00          0.00
    db file sequential read                28        0.14          0.31
    latch free                             15        0.02          0.04
    latch: row cache objects                1        0.00          0.00
    log file switch completion              4        0.98          1.64

    ssddgreg wrote:
    Hi Randolf,
    thank you for your excellent interpretation! I have Oracle DBMS 10.2.0.3 Ent Edition deployed on Sun Solaris (64-bit), FIRST_ROWS and CURSOR_SHARING = SIMILAR.
    I also checked the indexes on the D94 table's SDE_DELETES_ROW_ID column; it has 3 indexes on the same column:
    D94_IX1, NONUNIQUE
    D94_IX2, UNIQUE
    D94_PK1, UNIQUE
    This table is a system table from another middle-tier application and all the DXX tables are configured like that.
    Is this a third-party vendor application where you don't have any control over the schema? Because your description of the indexes looks like a potential case of massive over-indexing, increasing the workload required to maintain all these indexes. Very likely some of these indexes are redundant and could be covered by a smaller number of indexes.
    Besides that my comment regarding the execution plan was probably not clear enough - what I meant to say is that the HASH_AJ hint prevents the optimizer from doing the clever things with the predicates that I described.
    So in principle the question is: what execution plan do you get if you omit the HASH_AJ hints? And how many consistent gets does this new plan require at execution time? You might need to add a NL_AJ hint instead to achieve what I've described, but it would be interesting to see in the first place what execution plan is generated without any hints.
    Some other comments:
    FIRST_ROWS optimizer mode: Does this application require you to use the FIRST_ROWS optimizer mode? Because in principle, if you have an application that actually retrieves only the first few rows of a larger result set most of the time, then you should use the FIRST_ROWS(n) optimizer mode instead. The FIRST_ROWS optimizer mode has been deprecated since Oracle 9i, if I remember correctly, and has some odd side effects on execution plans, in particular if the SQL contains an ORDER BY clause.
    If your application usually processes all rows from a given result set, using the default optimizer mode ALL_ROWS is more appropriate - using FIRST_ROWS as a band-aid because with ALL_ROWS things are slower only shows that there is something wrong that should be addressed in a different way (by investigating why the ALL_ROWS mode doesn't arrive at a suitable execution plan as first activity).
    CURSOR_SHARING=SIMILAR: Note that CURSOR_SHARING = SIMILAR has some other side effects (and bugs). Oracle has recently announced on My Oracle Support (see document 1169017.1) that CURSOR_SHARING=SIMILAR will be deprecated (no longer supported) in Oracle 12. See this note also for a description why this setting can be problematic.
    Of course, if you don't have any control over a vendor application and it works fine and has been optimized for these settings (FIRST_ROWS, CURSOR_SHARING=SIMILAR) then there is not much you can do (or need to do) about that.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    Co-author of the "OakTable Expert Oracle Practices" book:
    http://www.apress.com/book/view/1430226684
    http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
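    As a small follow-up to Randolf's suggestion, this is roughly how one could compare the plans with and without the hints (a sketch only; the query text would be the traced statement above with the /*+ HASH_AJ */ hints removed, run in a test session):
    alter session set optimizer_mode = ALL_ROWS;

    explain plan for
    select ... ;   -- the traced query, hints removed

    select * from table(dbms_xplan.display);

    -- or execute it with statistics_level = ALL (or a gather_plan_statistics
    -- hint) and pull the actual row source statistics afterwards:
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));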

  • I need help with just about everything...........

    I realize that this is a site dedicated to all things Apple, but I am hoping I can get some honest and impartial help on building the best home network and entertainment system.
    I currently have an XP Media Center desktop computer in an office of my home, where all of the network gear is, connected via ethernet. I also have a Mac G4 MDD 1.0 that I am using in another room for Adobe Creative Suite to do my own promotional items, and I have about a 50' Cat 6 cable running back to the router in the office. I also have a Vista laptop that is connected wirelessly, but I believe it is only a "g" network. I don't really use this a whole lot, but I take it out to job sites and such, so there is information I need to send back and forth to my external drives hooked up via ethernet to my router. I am in the process of getting a new MacBook Pro as well.
    I have a PS3 hooked up to my home entertainment center but am unsure of how to make everything work the way that I want. Here is what I would like to be able to do: 1) networking of all computers and external drives, 2) operate my Mac G4 wirelessly, 3) addition of internal Bluetooth to the G4 so I can get a wireless keyboard and mouse, 4) ability to play iTunes from external hard drives on the home entertainment center, 5) ability to stream media from the internet and hard drives to the HDTV in the home entertainment center.
    I am looking to upgrade my router to dual band with either the Airport Extreme or the Netgear WNDR3700. I don't know which to get and have found about the same amount of pros and cons for each on the web. Any suggestions and why?
    Do I need an Apple TV, a Netgear WNHDEB111, or another digital media receiver?
    What else would you recommend to accomplish my goals, and why do you recommend the part and brand that you do?
    Thank you in advance for any and all assistance I can get.

    I am getting a message saying "A BlackBerry Identity update is needed. Would you like to install it now, yes or no?" When I click yes, I get this message: "BlackBerry Identity installation failed, please try again later." I keep getting this same message over and over.

  • Need help with info about the possibility of altering images when saved in a different format

    I am very new to the design world. I have a printing business and received an image to be printed in Adobe Acrobat (I think). I needed to resize the image to be printed, so I took it into Photoshop CS6, where I cropped the edges. I then resaved it as a PDF and took it into a printer software, Fiery. Fiery has an Impose function where you can repeat an image to lay it out on the correct paper size. It essentially just copies the original image several times depending on the layout. This particular image I repeated 3 up and 1 across. When printed, it came out with a magenta streak through the center image only. I have had the machine checked and they are adamant it's not a printer issue but a software issue - that once I resized the original image, I altered it and can't expect it to come out as seen on the screen. I've printed it all sorts of different ways and the problem only occurs with the 3 up, 1 across layout. I know that images can be changed when saved from program to program, but it does not make sense that this shows up in only the one area and layout. I would appreciate any help, input, or answers that anyone has. Again, I'm new to this and would like to understand if this is a potential problem for future printing jobs, so that I can do what I need to avoid this situation.
    Thank you for your time.
    Jessica

    Jessica,
    Without knowing how the original graphic was created, it is hard to tell you how to handle it. What is the name of the file? (Only the file suffix is important.) If the file's origin was a drawing program and/or InDesign, then the document should be handled differently than if it is a raster image best handled by a program like Photoshop.

Maybe you are looking for

  • Cell border when editing
  • Error In Asset Migration
  • Power Failure during 10.6.8 update
  • Using Business Connector to convert a file to XML
  • XML Validation with XSD in java mapping