Does moving data require a re-index?

I have just moved 0.5TB of data off my C: drive and onto a new drive I have installed, for the sole purpose of managing/storing photos. The new drive is drive F:
Do I need to tell Bridge of this change and do any form of re-indexing?
In addition, do I need to let Photoshop or Lightroom know?
Thanks for your help.

Re-indexing is necessary if you want to use Find. Re-indexing is done when you visit a folder, or you can re-index by running a search from the top of your index tree with the box checked to look in non-indexed folders as well.
You may run into some permission problems when trying to write metadata to an external drive.
Add drive F: to your Favorites list for easy access.

Similar Messages

  • Error 1074395241: The template descriptor does not contain data required for rotation-invariant matching.

    Hello all,
    I am using IMAQ Match Pattern 4 to detect the rotation angle of a template image. However, it shows the error: "Error 1074395241: The template descriptor does not contain data required for rotation-invariant matching." What exactly is the problem, and how can I solve it? The details are explained below.
    My project is a little bit complicated. Part of the block diagram containing the IMAQ Match Pattern 4 is shown below:
    The source image is a series of frames read from an AVI video (I used a for loop to process the images frame by frame). The template image is a selected region of the first frame. In other words, the user selects the object of interest in the first frame of the video, and in each of the following frames we need to find the matched object of interest and determine its rotation angle. When I run the block diagram shown above, it does not produce any error. However, it reports the rotation angle as zero no matter what it really is. Therefore, I changed the block diagram by adding the parameters, shown below:
    But in this case, when I run it, it shows the error that I have indicated in the subject line.
    If you need more details about my project to identify the problem, please let me know.
    Thanks in advance.
    Solved!
    Go to Solution.

    - Please go through the pattern matching example that ships with LabVIEW first. Go to LabVIEW Help >> Find Examples and search for the example.
    - You have to create the template with an angle range and decide which type of pattern matching you want to use.
    - For this you have to use IMAQ Learn Pattern before using IMAQ Match Pattern 4.
    Refer to: http://zone.ni.com/reference/en-XX/help/370281U-01/imaqvision/imaq_match_pattern_4/
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved, and help the author by clicking on kudos.
    Certified LabVIEW Associate Developer (CLAD) Using LV13

  • What are required Oracle products for moving data from IBM IMS/DB(mainframe) to Oracle environment?

    I am a z/OS system programmer; our company uses IMS as its main OLTP database. We are investigating moving data off the mainframe for data warehousing and online fraud detection. One option is using IBM InfoSphere CDC and DB2; another option is using IMS Connect and writing our own program. I am wondering what the Oracle solution is for this kind of requirement.
    I am not an Oracle technician, but I googled and found that Oracle has products like Oracle Legacy Adapters, OracleAS CDC Adapter, and Oracle Connect on z/OS. However, I didn't find them on the Oracle site (https://edelivery.oracle.com/), so I don't know whether these products are deprecated.
    I would very much appreciate any help or guidance you are able to give me.

    Thank you for responding.
    I've considered dumping the data into a flat file and using SQL*Loader to import it as you suggest, but this would require some scripting on a per-table basis. Again: all I want to do is copy the contents of a table from one database to another. I do not think I should have to resort to creating my own dump-and-load scripts in order to do that. However, I agree with you that this type of solution may end up being my final solution.
    I've tried the db link solution. It was just as slow as the 'imp' solution for some reason; I don't know why. The tables are rather large (3 tables of a few GB each) and therefore require intermediate commits when loaded; otherwise the rollback segment will run out of space. So the 'db link solution' is really a PL/SQL script with a commit every x records (sketched below).
    I think Oracle is making it a bit difficult for me to copy the contents of a table from one database to another and to do it efficiently. Perhaps I'm missing something here?
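    For reference, a minimal sketch of the batched db-link copy described above; my_table and remote_db are placeholder names, and the 10,000-row commit interval is illustrative:
    DECLARE
      l_rows PLS_INTEGER := 0;
    BEGIN
      -- Row-by-row copy over a database link, with intermediate commits
      -- so the rollback segment stays bounded.
      FOR r IN (SELECT * FROM my_table@remote_db) LOOP
        INSERT INTO my_table VALUES r;
        l_rows := l_rows + 1;
        IF MOD(l_rows, 10000) = 0 THEN
          COMMIT;
        END IF;
      END LOOP;
      COMMIT;
    END;
    /
    Row-by-row inserts keep the undo footprint small at the cost of speed; a set-based INSERT ... SELECT over the link would be faster but would need undo space for the whole table.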

  • Does moving SP code from DB to App Server help scale an application?

    We have an application developed using Oracle 10g Forms and a 10g DB. All our processing is done using SPs, so they all run in the DB server. Even our inserts/updates/deletes to a table are handled by SPs.
    The site with the maximum simultaneous users (i.e. concurrent users) is one with 100 concurrent users.
    We have a prospective customer whose requirement is 300 concurrent users. Our application won't be able to handle it, since the DB server is a single-processor server with limited memory.
    One suggestion was to move the SPs to the app server by moving them into the Form. Since OAS has a PL/SQL engine, they would run in the app server and hence remove the workload from the DB.
    I don't buy this. My point is: even if the SPs are moved to the app server, the SQL will still run in the DB server, right?
    So what is the advantage?

    christian, I just modified the original post thinking nobody would reply since it's very long. Thanks a lot for the reply. For others, and for my own record, here is my original question.
    I have a problem like this: Take this scenario. We have a TELCO app. It is an E-Business Web Application (i.e. Dynamic Web Site) developed using ASP.Net/C#. App. Server is IAS and DB is Oracle 10g. IAS and the DB reside in 2 servers. Both are single processor servers.
    The maximum simultaneous user load is 500. i.e. 500 users can be working in the system at one time.
    Now suppose 500 users log in at the same time and perform 500 different operations (querying, inserts, updates, deletes). All 500 operations will go to the app server. From there the C# code performs everything using Oracle stored procedures (SPs): we first make a connection to the DB, the SP is invoked by passing parameters, it performs the operation in the DB and sends the output back to the app server C# code, and we close the Oracle connection (in the app server C# code).
    Now, the 500 operations will obviously have to wait in a queue and the SQLs will be processed in the DB server.
    Now, question is how does CONNECTION POOLING help in this situation?
    I have been told that the above method of using DB SPs to perform processing will make the whole system very slow, since all the processing load has to be borne by the DB server, and since DB operations involve disk I/O it will be very slow. They say you cannot SCALE the application with this DB-processing mode, and that you have to move to an app-server-processing mode in order to scale your application. I.e., if the number of users increases to 1000 our application won’t be able to handle it and will become very slow.
    What they suggest is to move all the processing to the App. Server (i.e. App. Svr. Memory). They also say that CONNECTION POOLING can help even further to improve the performance.
    I have some issues with this. First of all, getting all the data into app server memory for each user process will not only require disk I/O, it will also involve a network trip, and that obviously takes time. The DB requests to fetch the data will still have to wait in the DB queue. Then we do the processing in app server memory, and then we have to write back to the DB server, which again requires a network trip and disk I/O. By my reasoning, won’t this take MORE TIME than doing it in the DB server?
    Also, how can CONNECTION POOLING help? In C# we OPEN a connection only just before running the SP or getting the data, and we close the connection just after the operation is over.
    I don’t see how CONNECTION POOLING can improve anything.
    I also don’t see how moving data from the DB server into the app server can improve performance. I think it will only decrease performance.
    Am I right or have I missed something?
    Edited by: user12240205 on Nov 17, 2010 2:04 AM

  • Database Copy Doesn't find tables for moving data

    I'm running SQL Developer 3.1.7 on a Win7-64 PC against Oracle 11g, using the Database Copy function to move and update tables from one instance to another. The copy seems to work fine, creating all of the tables, but when the script tries to move the data, it fails for some tables as follows:
    Moving Data for object MY_TABLE_NAME
    Unable to perform batch insert.
    MY_TABLE_NAME ORA-00942: table or view does not exist
    Earlier in the script it dropped and then recreated the table in the target instance. The table does exist in the source instance. This occurred for 24 tables out of 105.
    Any ideas what is going on?
    Edited by: user12200489 on May 25, 2012 9:47 AM

    According to the log, the table got created. Here are the log entries as they pertain to our APPOINTMENT table ...
    DROP TABLE "APPOINTMENT" cascade constraints;
    table "APPOINTMENT" dropped.
    DROP SEQUENCE "APPOINTMENT_SEQ";
    sequence "APPOINTMENT_SEQ" dropped.
    -- DDL for Sequence APPOINTMENT_SEQ
    CREATE SEQUENCE "APPOINTMENT_SEQ" MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 17721 CACHE 20 NOORDER NOCYCLE ;
    sequence "APPOINTMENT_SEQ" created.
    -- DDL for Table APPOINTMENT
    CREATE TABLE "APPOINTMENT" ("APPOINTMENT_ID" NUMBER(19,0), "PRACTICE_PATIENT_ID" NUMBER(19,0), "PRACTICE_PET_OWNER_ID" NUMBER(19,0), "STAFF_MEMBER_ID" NUMBER(19,0), "APPT_STATUS_ENUM_NAME" VARCHAR2(32 BYTE), "EXT_PRACTICE_APPOINTMENT_ID" VARCHAR2(255 BYTE), "CONFIRMATION_KEY" VARCHAR2(255 BYTE), "REASON" VARCHAR2(255 BYTE), "EXT_REASON" VARCHAR2(255 BYTE), "EXT_SECONDARY_REASON" VARCHAR2(255 BYTE), "NOTE" BLOB, "APPOINTMENT_DATE" TIMESTAMP (6), "CREATED_DATE" TIMESTAMP (6), "CREATED_BY" VARCHAR2(255 BYTE), "LAST_MODIFIED_DATE" TIMESTAMP (6), "LAST_MODIFIED_BY" VARCHAR2(255 BYTE), "CONFIRMATION_TIME" TIMESTAMP (6)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "VET2PET_T" LOB ("NOTE") STORE AS BASICFILE ( TABLESPACE "VET2PET_T" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
    table "APPOINTMENT" created.
    It even truncates the table before attempting to load the data ...
    TRUNCATE TABLE "APPOINTMENT";
    table "APPOINTMENT" truncated.
    But when it goes to move the data ...
    --- START --------------------------------------------------------------------
    Moving Data for object APPOINTMENT
    Unable to perform batch insert.
    APPOINTMENT ORA-00942: table or view does not exist
    --- END --------------------------------------------------------------------
    There is a lookup table involved but at the time of moving the data none of the foreign keys have been enabled.
    All the indices for the table get created with no issue and here is the log entry for when the constraints are created and enabled ...
    -- Constraints for Table APPOINTMENT
    ALTER TABLE "APPOINTMENT" ADD CONSTRAINT "PK_APPOINTMENT" PRIMARY KEY ("APPOINTMENT_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "VET2PET_I" ENABLE;
    ALTER TABLE "APPOINTMENT" MODIFY ("CREATED_BY" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("CREATED_DATE" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("CONFIRMATION_KEY" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("EXT_PRACTICE_APPOINTMENT_ID" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("APPT_STATUS_ENUM_NAME" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("PRACTICE_PET_OWNER_ID" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("PRACTICE_PATIENT_ID" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("APPOINTMENT_ID" NOT NULL ENABLE);
    ALTER TABLE "APPOINTMENT" MODIFY ("APPOINTMENT_DATE" NOT NULL ENABLE);
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    table "APPOINTMENT" altered.
    Any ideas?
    thanks.
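    One diagnostic worth trying (my suggestion, not from the thread): ORA-00942 on the insert, right after the same session created the table, often means the batch insert is running over a different connection/schema than the DDL. Running something like the following as the inserting user shows whether the name resolves for that session:
    SELECT owner, object_name, object_type
    FROM   all_objects
    WHERE  object_name = 'APPOINTMENT';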

  • Why doesn't my query use an index?

    I have a table with some processed rows (state: 9) and some unprocessed rows (states: 0,1,2,3,4).
    This table has over 120000 rows, but this number will grow.
    Most of the rows are processed and most of them also contain a group id. Number of groups is relatively small (let's assume 20).
    I would like to obtain the oldest some_date for every group. This value has to be outer joined to an online report (which contains one row for each group).
    Here is my set-up:
    Tested on: 10.2.0.4 (Solaris), 10.2.0.1 (WinXp)
    drop table t purge;
    create table t(
      id number not null primary key,
      grp_id number,
      state number,
      some_date date,
      pad char(200)
    );
    insert into t(id, grp_id, state, some_date, pad)
    select level,
         trunc(dbms_random.value(0,20)),
            9,
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 120000;
    insert into t(id, grp_id, state, some_date, pad)
    select level + 120000,
         trunc(dbms_random.value(0,20)),
            trunc(dbms_random.value(0,5)),
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 2000;
    commit;
    exec dbms_stats.gather_table_stats(user, 'T', estimate_percent=>100, method_opt=>'FOR ALL COLUMNS SIZE 1');
    Tom Kyte's printtab
    ==============================
    TABLE_NAME                    : T
    NUM_ROWS                      : 122000
    BLOCKS                        : 3834
    I know this could be easily solved by a fast-refresh-on-commit materialized view like this:
    select
      grp_id,
      min(some_date)
    from
      t
    where
      state in (0,1,2,3,4)
    group by
      grp_id;
    Plus I would have to create a materialized view log on (grp_id, some_date, state).
    The number of rows with an active state will always be relatively small; let's assume 1000-2000.
    So my other idea was to create a selective index: an index containing only the data for rows with an active state.
    Something like this:
    create index fidx_active on t ( 
      case state 
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end,
      case state
        when 0 then some_date
        when 1 then some_date
        when 2 then some_date
        when 3 then some_date
        when 4 then some_date
      end) compress 1;
    so a tuple (grp_id, some_date) is projected to the tuple (null, null) when the state is not an active state, and therefore it is not indexed.
    We can save even more space by compressing the 1st expression.
    analyze index fidx_active validate structure;
    select * from index_stats
    @pr
    Tom Kyte's printtab
    ==============================
    HEIGHT                        : 2
    BLOCKS                        : 16
    NAME                          : FIDX_ACTIV
    LF_ROWS                       : 2000 <-- we're indexing only active rows
    LF_BLKS                       : 6 <-- small index: 1 root block with 6 leaf blocks
    BR_ROWS                       : 5
    BR_BLKS                       : 1
    DISTINCT_KEYS                 : 2000
    PCT_USED                      : 69
    PRE_ROWS                      : 25
    PRE_ROWS_LEN                  : 224
    OPT_CMPR_COUNT                : 1
    OPT_CMPR_PCTSAVE              : 0
    Note: @pr is Tom Kyte's print-table script as adapted by Tanel Poder (I'm using Tanel's library).
    Then I created a query to be outer joined to the report (report contains a row for every group).
    I want to achieve a full scan of the index.
    select
      case state -- 1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end grp_id,
      min(case state --second expression
            when 0 then some_date
            when 1 then some_date
            when 2 then some_date
            when 3 then some_date
            when 4 then some_date
          end) as mintime
    from t 
    where
      case state --1st expression: at least one index column has to be not null
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end is not null
    group by
      case state --1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end;
    -------------
    Doc's snippet:
    13.5.3.6 Full Scans
    A full scan is available if a predicate references one of the columns in the index. The predicate does not need to be an index driver. A full scan is also available when there is no predicate, if both the following conditions are met:
    All of the columns in the table referenced in the query are included in the index.
    At least one of the index columns is not null.
    A full scan can be used to eliminate a sort operation, because the data is ordered by the index key. It reads the blocks singly.
    13.5.3.7 Fast Full Index Scans
    Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.
    You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes.
    A fast full scan is faster than a normal full index scan in that it can use multiblock I/O and can be parallelized just like a table scan.
    So the question is: why does Oracle do a full table scan?
    Everything needed is in the index and one expression is not null, but an index (fast) full scan is not even considered by the CBO (I did a 10053 trace).
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     85 |     20 |00:00:00.11 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |   2000 |00:00:00.10 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
                  THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL)Let's try some minimalistic examples. Firstly with no FBI.
    create index idx_grp_id on t(grp_id);
    select grp_id,
           min(grp_id) min
    from t
    where grp_id is not null
    group by grp_id;
    | Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |   1 |  HASH GROUP BY        |            |      1 |     20 |     20 |00:00:01.00 |     244 |    237 |
    |*  2 |   INDEX FAST FULL SCAN| IDX_GRP_ID |      1 |    122K|    122K|00:00:00.54 |     244 |    237 |
    Predicate Information (identified by operation id):
       2 - filter("GRP_ID" IS NOT NULL)This kind of output I was expected to see with FBI. Index FFS was used although grp_id has no NOT NULL constraint.
    Let's try a simple FBI.
    create index fidx_grp_id on t(trunc(grp_id));
    select trunc(grp_id),
           min(trunc(grp_id)) min
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id);
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.94 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    Again, an index (fast) full scan was not even considered by the CBO.
    I tried:
    alter table t modify grp_id not null;
    alter table t add constraint trunc_not_null check (trunc(grp_id) is not null);
    I even tried to set the table's hidden column (SYS_NC00008$) to NOT NULL.
    It had no effect; the FTS is still used.
    Let's try another query:
    select distinct trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    | Id  | Operation             | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH UNIQUE          |             |      1 |     20 |     20 |00:00:00.85 |     244 |
    |*  2 |   INDEX FAST FULL SCAN| FIDX_GRP_ID |      1 |    122K|    122K|00:00:00.49 |     244 |
    Predicate Information (identified by operation id):
       2 - filter("T"."SYS_NC00008$" IS NOT NULL)Here the index FFS is used..
    Let's try one more query, very similar to the above query:
    select trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id)
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.86 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |    122K|    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)And again no index full scan..
    So my next question is:
    What are the restrictions that prevent an index (fast) full scan from being used in these scenarios?
    Thank you very much for your answers.
    Edited by: user1175494 on 16.11.2010 15:23
    Edited by: user1175494 on 16.11.2010 15:25

    I'll start off with the caveat that I'm no Jonathan Lewis, so hopefully someone will be able to come along and give you a more coherent explanation than I'm going to attempt here.
    It looks like the application of the MIN function against the case expression is confusing the optimizer and disallowing the use of your FBI. I tested this against my 11.2.0.1 instance, and there your query chooses the fast full scan without being nudged in the right direction.
    That being said, I was able to get this to use a fast full scan on my 10g instance, but I had to jiggle the wires a bit. I modified your original query slightly, just to make it easier to do my fiddling.
    original (in the sense that it still takes the full table scan) query
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null
    )
    select--+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      2 |     33 |     40 |00:00:00.07 |    7646 |
    |*  2 |   TABLE ACCESS FULL| T    |      2 |     33 |   4000 |00:00:00.08 |    7646 |
    Predicate Information (identified by operation id):
       2 - filter((CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
               THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL AND
               CASE "STATE" WHEN 0 THEN "SOME_DATE" WHEN 1 THEN "SOME_DATE" WHEN 2 THEN
               "SOME_DATE" WHEN 3 THEN "SOME_DATE" WHEN 4 THEN "SOME_DATE" END  IS NOT
               NULL))
    modified version where we prevent the MIN function from being applied too early, by using ROWNUM
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null 
      and rownum > 0
    )
    select--+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation                | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY           |             |      2 |     20 |     40 |00:00:00.01 |      18 |
    |   2 |   VIEW                   |             |      2 |     33 |   4000 |00:00:00.07 |      18 |
    |   3 |    COUNT                 |             |      2 |        |   4000 |00:00:00.05 |      18 |
    |*  4 |     FILTER               |             |      2 |        |   4000 |00:00:00.03 |      18 |
    |*  5 |      INDEX FAST FULL SCAN| FIDX_ACTIVE |      2 |     33 |   4000 |00:00:00.01 |      18 |
    Predicate Information (identified by operation id):
       4 - filter(ROWNUM>0)
       5 - filter(("T"."SYS_NC00006$" IS NOT NULL AND "T"."SYS_NC00007$" IS NOT NULL))

  • When I migrate from my old Mac to a new Mac, does the data remain on the old Mac?

    When I migrate from my old Mac to a new Mac, does the data remain on my old Mac?

    Yes. You are only copying the data, not moving it.

  • Moving Data from Normal table to History tables

    Hi All,
    I'm in the process of moving data from normal tables to history tables.
    It should be some sort of procedure run as a cron job at night.
    My aim is to move data that is, say, 1.5 or 2 years old to the history tables.
    What aspects do I need to check when moving data, and how can I write a procedure for this requirement? (A sketch follows the schema listings below.)
    The schema is the same in both the normal table and the history table.
    It has to be a procedure based on a particular field, RCRE_DT.
    If RCRE_DT is more than 2 years old, the data needs to be moved to HIS_<table>.
    I have to insert the record into the HIS_ table and simultaneously delete the record from the normal table.
    This is in a production system and the tables are quite big.
    Please find enclosed the attached sample schema for the normal table and HIS_<table>.
    If I want to automate this script as a cron job, and similarly for the other history tables,
    how am I to do it in a single procedure, assuming the procedure for moving the data is the same?
    Thanks for your help in advance.
    SQL> DESC PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)
    SQL> DESC HIS_PXM_FLT;
    Name Null? Type
    RCRE_USER_ID NOT NULL VARCHAR2(15)
    RCRE_DT NOT NULL DATE
    LCHG_USER_ID VARCHAR2(15)
    LCHG_DT DATE
    AIRLINE_CD NOT NULL VARCHAR2(5)
    REF_ID NOT NULL VARCHAR2(12)
    BATCH_DT NOT NULL DATE
    CPY_NO NOT NULL NUMBER(2)
    ACCRUAL_STATUS NOT NULL VARCHAR2(1)
    FLT_DT NOT NULL DATE
    OPERATING_CARRIER_CD NOT NULL VARCHAR2(3)
    OPERATING_FLT_NO NOT NULL NUMBER(4)
    MKTING_CARRIER_CD VARCHAR2(3)
    MKTING_FLT_NO NUMBER(4)
    BOARD_PT NOT NULL VARCHAR2(5)
    OFF_PT NOT NULL VARCHAR2(5)
    AIR_CD_SHARE_IND VARCHAR2(1)
    UPLOAD_ERR_CD VARCHAR2(5)
    MID_PT1 VARCHAR2(5)
    MID_PT2 VARCHAR2(5)
    MID_PT3 VARCHAR2(5)
    MID_PT4 VARCHAR2(5)
    MID_PT5 VARCHAR2(5)
    PAX_TYPE VARCHAR2(3)
    PAY_PRINCIPLE VARCHAR2(1)
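    A minimal sketch of the kind of archival procedure being asked for, assuming the PXM_FLT / HIS_PXM_FLT pair above; the 24-month cutoff and 10,000-row batch size are illustrative, and ROWIDs are captured so that exactly the rows that are copied are the ones deleted:
    DECLARE
      TYPE t_rowids IS TABLE OF ROWID;
      l_rowids t_rowids;
    BEGIN
      LOOP
        -- Grab a bounded batch of rows older than 2 years.
        SELECT rowid BULK COLLECT INTO l_rowids
        FROM   pxm_flt
        WHERE  rcre_dt < ADD_MONTHS(SYSDATE, -24)
        AND    ROWNUM <= 10000;

        EXIT WHEN l_rowids.COUNT = 0;

        -- Copy that exact batch to the history table ...
        FORALL i IN 1 .. l_rowids.COUNT
          INSERT INTO his_pxm_flt
          SELECT * FROM pxm_flt WHERE rowid = l_rowids(i);

        -- ... then delete the same rows from the live table.
        FORALL i IN 1 .. l_rowids.COUNT
          DELETE FROM pxm_flt WHERE rowid = l_rowids(i);

        COMMIT;  -- commit per batch to keep undo small
      END LOOP;
    END;
    /
    The same logic could take the table name as a parameter and use dynamic SQL (EXECUTE IMMEDIATE) to serve the other HIS_ tables from a single cron entry.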

    Hi All,
    Thanks for your valuable suggestion, but can you explain it a bit more, as I'm still confused about switching between partitioned tables and a temporary table. Suppose I have a table called PXM_FLT and a corresponding similar table named HIS_PXM_FLT. Where should I do the partitioning: on the normal table or on HIS_PXM_FLT? I do have a date field to partition on by range. Can you please explain why I should create a temporary table at all; what is its purpose? The application is designed so that old records have to be moved to HIS_PXM_FLT; can you please elaborate on this? Your suggestions are greatly appreciated. As I'm relatively new to this partitioning technique I'm a bit confused about how it works, but I have come to understand that partitioning is a better operation than plain inserts and deletes for something this data-intensive, where millions of records need to be moved. Thanks for the feedback and your precious time.
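    To illustrate the suggestion being discussed (a sketch under the assumption that PXM_FLT is range-partitioned on RCRE_DT; p_2009 and tmp_pxm_flt are hypothetical names): the temporary table exists so that a whole partition of old rows can be swapped out in one dictionary operation instead of row-by-row DML.
    -- Swap the oldest partition's rows into a plain staging table
    -- with the same column layout.
    ALTER TABLE pxm_flt
      EXCHANGE PARTITION p_2009
      WITH TABLE tmp_pxm_flt;

    -- The live table no longer holds those rows; append them to history.
    INSERT /*+ APPEND */ INTO his_pxm_flt
    SELECT * FROM tmp_pxm_flt;
    This is why partition maintenance beats INSERT+DELETE for millions of rows: the exchange is a metadata operation, not data movement.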

  • Why does moving the cursor peg the cpu in numbers?

    Why does moving the cursor, even by one cell, peg the cpu in numbers?

    Hello
    Before posting yesterday, I tested with a table of 4435 x 8 cells on the iMac described below.
    I will be more precise than yesterday.
    Numbers recalculates the entire document after every change; a single new character is sufficient.
    Twenty years ago, the AppleWorks designers were kind enough to recalculate only what really needed it.
    It seems that the new developers are relying not upon intelligence but upon the processor's brute force.
    Their code will be efficient on the machines that will be available in at least five years.
    Alas, by that time it will no longer be compatible with the operating system.
    Given this awful coding, when Auto Save applies, it must write an entire new index.xml file to disk.
    Same thing when Versions applies.
    Under Snow Leopard these two features don't strike.
    If you wish, you may send your 'offending' document to my mailbox so I can check the way it behaves here. Don’t worry, I don't look at what is actually stored, only at the document's behavior.
    Click my blue name to get my address.
    Yvan KOENIG (VALLAURIS, France) jeudi 12 janvier 2012
    iMac 21”5, i7, 2.8 GHz, 12 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.2

  • HCM Time and Attendance - data requirements

    Can anyone help with what data is required from an external time and attendance collection application?
    E.g. First Name, Last Name, Employee Code, Location, Clock In, Clock Out; and the big one: is 'hours calculated' data required?
    Regards
    Ray

    You would need some JavaScript to refresh,
    but this works in standard without doing any extra coding.
    Or you can check this:
    The flag event.target.dirty should be set to true at an appropriate
    position (probably the form:ready event) for Adobe Reader versions < 9.
    Given below is a sample (JavaScript):
    if (xfa.host.version < 9) { // handles 8.x.x and previous
        event.target.dirty = true;
    }
    i.e.
    During the initialization of a PDF form, since 'dirtyState' is a
    property of Reader, when we open a PDF form, Reader sets that
    property to false automatically. That's why the dirty state cannot
    be changed to true in some events (like initialize or form:ready).
    In order to set the dirty state to true, we need to set it after the
    initialization of the PDF form. Otherwise, the dirty state will be set
    to false during the initialization by default.
    The solution would be that they could try this script:
    ContainerFoundation_JS.setGlobalValue(event.target, "saveDirtyState", true);
    saveDirtyState is a property used by the ZCI code to judge whether a PDF form is dirty or not.

  • Moving Calendar to a New Computer & Moving Data: Downloads and Add-ons

    Moving Calendar to a New Computer & Moving Data: Downloads and Add-ons
    The handle says it all. Totally lost here, and in a virtual house of cards. Please first see the screen-shot.
    I have the Mozilla calendar (Lightning?) program on my computer. Is this screen-shot Thunderbird, or is it an add-on called Lightning that was added on to Thunderbird? I don't even recall how I acquired it a few years ago. Whatever it is, I find it invaluable. Indispensable. Brilliantly conceived. A superb creation.
    Notice that I have several calendars in different colors for different purposes and a long task agenda on the right. Wonderful program!! I absolutely love it! I cannot afford to lose these calendars and this data. However, big caveat: entrusting my life (our lives) to it has become extremely dangerous, for it's a virtual house of cards upon the purchase of a new computer. The move-over is more like a career, and most of us already have one. Need anyone really wonder why PC sales are down? Res ipsa loquitur.
    FIRST ISSUE:
    I must move this/these Mozilla program/s and calendars over to the new computer, but I don't have a clue where to begin. What must I download? Thunderbird? What must I add on? Lightning? Is this a suite of programs? All I care about are my calendars and task agendas that you see in the screen-shot -- not any email programs etc. Just the personal management system that you see.
    SECOND ISSUE:
    After I download, install, and add on what I need, how do I then move (migrate) my profile (data) over to the new computer?
    Please know that I do know where that 'profile' and its .ics calendar files are stored. I am also well backed up. I just need to know how to get (download/install) what I need and move this over to the new computer. (See screen-shot.)
    First, what must I download, install, and add on? Second, how do I move this data/profile (a virtual life) over to the new computer without it all coming down like a house of cards?
    Please, unless it's all been documented in one location, a link to this and a link to that probably won't be very helpful. Anyone who can lay out a simple, easy-to-follow plan (1, 2, 3, 4, done) will be a hero and make a significant contribution to people who dread buying a new computer for precisely these reasons. It will certainly encourage more people to use this absolutely wonderful personal management system. It's the best I've ever seen and I must keep it. But it's soooo risky.

    Obviously the screen-shot is posted to show what I see now and want to see when I am done. Was the "really" not clear? Nothing could be more germane ("Germaine" is a person's name) than a visual that shows the multiple-calendar dynamic and what I actually want to see, nor better communicates the fact that all I want is that and not other programs in a suite -- a question, btw, asked but not answered.
    "I have no idea what the threshold issue is. Perhaps Zeno can assist you."
    I now see Zeno's reply, but I never got an email about my questions and just noticed it. I will respond to him separately. But obviously the threshold issue is ISSUE ONE (1): exactly what, and only what, must I download and install.
    Was that really not clear? It was not answered in your first reply. And since you here, later, state that Lightning does not need to be downloaded and installed, how on earth could you presume the first threshold issue (question 1) was a joke?
    Once again I wrote and you even quoted me “Please, unless it's ALL been documented in ONE LOCATION, a link to this and a link to that probably won't be very helpful. Anyone who can lay-out a simple, easy to follow plan (1, 2, 3, 4, done) ...”
    As this exchange demonstrates, that statement has certainly proven to be correct.
    You then wrote, “Read the question... the question is. How do I move my profile to a new computer? You are the one making much more of it than that. ... All covered in the link.”
    Which “the” (singular) of the several links (plural) you provided are you talking about? But no matter. No, it was not "all" covered in the multiple links you sent, which do not address the more important threshold question (1). The only issue you addressed was (2) data transfer, which is secondary to the threshold issue, question one (1). Threshold issue/question presented: before we can go to question (2) we must address question (1).
    As I stated “Please know that I do know where that 'profile' and its .ics calendar profiles are stored.”
    You replied “No you do not. Local calendars are not stored in ICS files.”
    First, please notice you make no attempt to clarify where they are; but, once again, please read carefully. I did not say that the calendars are “stored in the ICS files” but that the .ics files ARE the calendars, and I do know where they are stored.
    Please see the screen-shot you claim is not germane. Each of those different colored calendars is (not “stored” in, but is) an .ics file stored in the profile directory. In the directory c:\Users\myName\AppData\Roaming\Thunderbird is a directory or folder called \Profiles, and in that directory is a folder named 9w2ydrc4.default. That folder IS the profile. In that profile is a folder called “calendar-data”, and in that folder are the calendar .ics files that are the different colored calendars you see in the very germane screen-shot. So I do know where the calendars are. So I gather that all I copy into the Profiles directory is the folder (profile) called 9w2ydrc4.default.
    Again, as I wrote before, I do not want a full suite with email. So what do I download and install to get only what you see in the screen-shot? I don't think we ever installed a full suite before; I never saw an email program. So, again, asked but not answered.
    Then you stated that I do not need to download and install Lightning (please notice, this is a partial answer to my first, threshold, question, which you presumed was a joke) but only Thunderbird, since “Lightning is already in your profile so will move with it When you actually move the profile.”
    Well, since you like to use links, exactly what is your documented Mozilla authority for that hearsay? Since I posted the question I have spent much time at Mozilla. Literally everything I have seen indicates I must first download and install Thunderbird. But again, I do not want to, and never did, install a full suite of programs, and as far as I know there is no Mozilla mail or other suite program on this (my current old) computer and there never was. So, once again, asked and not answered.
    Everything at Mozilla states I must first download & install Thunderbird and then add on Lightning. Nothing I have seen at Mozilla suggests otherwise, or states that when we want to get Mozilla calendars over to a new computer we must first and *only* download and install Thunderbird and then just copy the profile (9w2ydrc4.default) into the 'Profiles' directory. Nothing I have seen at Mozilla suggests I do not have to download and install Lightning. So please show me where Mozilla documents that. Please notice that even Zeno states, “You download AND INSTALL LIGHTNING as an add-on.”
    Since this contradicts you, do tell me: what is your Mozilla authority for the claim that I do not need to download and install Lightning because it's already in my profile? I see nothing in my profile that suggests Lightning is there.
    Finally, I wrote to you, “...it is axiomatic that Windows transfer wizard does NOT transfer programs but only data. You did not know this?” I only wrote that because it appeared your reference to the Transfer Wizard was a response to the threshold question of (1) how do I get the program onto the new machine. But your response is very strange, for you wrote, “You are the first person I have encountered that thought it did more than transfer data.” Show me where I said that. I said the opposite. How on earth did you come to that interpretation of my simple and very clear statement that “...it is axiomatic that Windows transfer wizard does NOT transfer programs but only data”?

  • Moving data dynamically

    While moving data from one field to another, how do I make the destination field dynamic?

    You can use a field symbol:
    FIELD-SYMBOLS: <target> TYPE any.
    DATA: field1 TYPE type1,
          field2 TYPE type1,
          field3 TYPE type1,
          field4 TYPE type1,
          field5 TYPE type1,
          source TYPE type1 VALUE 'THE VALUE',
          field  TYPE fieldname,
          idx(1) TYPE n.
    DO 5 TIMES.
      " Build the target field name dynamically: FIELD1 .. FIELD5.
      idx = sy-index.
      CONCATENATE 'FIELD' idx INTO field.
      ASSIGN (field) TO <target>.
      MOVE source TO <target>.
    ENDDO.
    Here the value 'THE VALUE' is copied to all five data variables, i.e. field1 to field5.
    No Rewards Plz

  • No data found from start index 2 to end index Error

    I'm trying to import a .csv and I keep getting this error when I press the Next button:
    No data found from start index 2 to end index
    I know it is caused by the Date column in my CSV, because when I take it out I don't get the error.
    I have the date in YYYY-MM-DD format, so I don't understand why I am getting this.
    Can someone please advise me? I attached the error and a screenshot of my CSV.
    Thanks,
    Connor

    Ahh, now I understand what you mean.
    It seems that all of your error processing is done/implemented with DBMS_OUTPUT.PUT_LINE calls, also inside the procedure PROCEDURE_FOR_TRANSFER.
    And it is NOT throwing a NO_DATA_FOUND exception (which is what most of us here at the forum were thinking).
    It is outputting a message text that has the substring "ORA-01403: no data found" in it: something very different from an exception.
    And you want some solution to hide lines with that substring, right?
    What you could do (but this is getting silly if you ask me...) is use DBMS_OUTPUT.GET_LINE right after you invoke PROCEDURE_FOR_TRANSFER and get ("pop") all message lines yourself (that is, before SQL*Plus does it), filter out the line(s) with the "ORA-01403: no data found" substring, and put_line ("push") all remaining message lines back (see the sketch below).
    Edited by: Toon Koppelaars on Jul 28, 2009 9:56 AM
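    A minimal sketch of that pop-filter-push approach (PROCEDURE_FOR_TRANSFER is the procedure from the thread; everything else is stock DBMS_OUTPUT):
    DECLARE
      l_line   VARCHAR2(32767);
      l_status INTEGER;
      l_kept   DBMS_OUTPUT.CHARARR;  -- index-by table of VARCHAR2
      l_n      PLS_INTEGER := 0;
    BEGIN
      procedure_for_transfer;
      LOOP
        -- Pop each buffered line; status <> 0 means the buffer is empty.
        DBMS_OUTPUT.GET_LINE(l_line, l_status);
        EXIT WHEN l_status <> 0;
        IF INSTR(l_line, 'ORA-01403: no data found') = 0 THEN
          l_n := l_n + 1;
          l_kept(l_n) := l_line;
        END IF;
      END LOOP;
      -- Push the surviving lines back for SQL*Plus to display.
      FOR i IN 1 .. l_n LOOP
        DBMS_OUTPUT.PUT_LINE(l_kept(i));
      END LOOP;
    END;
    /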

  • Moving data from an array to a JTable

    Hi,
    I am facing some problems moving array data into a JTable.
    My objective is to move data from a 10x5 data array into a JTable.
    I am using the following code:
    String columnNames = { "Col 1", "Col 2", "Col 3", "Col 4", "Col 5" };
    for(int j=0; j<10; j++)
    data[ j ] = array1[ j ] + array2[ j ] + array3[ j ] + array4[ j ] + array5[ j ];
    JTable table = new JTable( data, coulmnNames);
    Can you please help me with the error in the above code? My array1 is a String array and the other four arrays are of Integer data type.
    Thanks!!!

    hi,
    The actual error message is:
    [root@localhost java]# javac GUI.java
    GUI.java:144: incompatible types
    found : int
    required: java.lang.Object
    data [j] [1] = array1[j];
    ^
    GUI.java:145: incompatible types
    found : int
    required: java.lang.Object
    data [j] [2] = array2[j];
    ^
    GUI.java:146: incompatible types
    found : int
    required: java.lang.Object
    data [j] [3] = array3[j];
    ^
    GUI.java:147: incompatible types
    found : int
    required: java.lang.Object
    data [j] [4] = array4[j];
    ^
    4 errors
    My code segment with line no goes as follows:
    private static JTable createTable() {
        String[] columnNames = { "Col 1", "Col 2", "Col 3", "Col 4", "Col 5" };
        Object data [ ] [ ] = new Object [10] [5];
        for( int j=0; j<10; j++) {
            data [j] [0] = array1[j]; ...................Line:144
            data [j] [1] = array2[j]; ..................Line 145
            data [j] [2] = array3[j]; ....................Line 146
            data [j] [3] = array4[j]; .................Line 147
            data [j] [4] = array5[j];
        }
        final JTable table = new JTable(data, columnNames);
        return table;
    }
    Array 1 is declared as a String array and arrays 2, 3, 4, 5 are declared as Integer arrays. The error is coming from the Integer arrays only.
    Java version running on my machine is: 1.4.2
    Please help!!!
    Thanks!!!

  • Moving data from flat file

    Hi,
    I am moving data from a flat file to an Oracle table. While populating the Oracle table, if I get any errors from the flat file, those errors should be captured in the ODI error table.
    Is this possible? If yes, could you please let me know the setup in ODI.

    The CKM is dedicated to checking constraints while doing the transformation. The constraints include PK, FK, conditions, etc.
    There are two types/ways of checking the constraints:
    Flow Control: the CKM is triggered after data is loaded into I$.
    Static Control: the CKM is triggered after data is loaded into the target table.
    If you opt for either of the above, ODI will create E$ and SNP_CHECK_TAB (a summary table for logging the errors) and load the error records into them.
    ODI also provides the option of loading corrected error records via the RECYCLE ERRORS feature. In this phase/run ODI will load ONLY the error records into the target table (assuming they have been corrected/cleaned).
    **How to set up flow control: could you please provide the steps?**
    **Appreciate your help**

Maybe you are looking for

  • How to override the default order UOM when using BAPI_PO_CREATE1...

    Hello Experts, How do I force BAPI BAPI_PO_CREATE1 to use my declared purchase order unit of measurement instead of getting it in the material master data? For example, material A has a default purchase UOM of CV, but when I create a PO using materia

  • How to change input with insignia

    Can anyone help me with changing the input on my Insignia TV with my Verizon remote? I am using the Philips remote. Thanks for any help you can offer.

  • My iphone5 has a yellow screen. Turned it off last night but its still there. What should I do?

    I'm at the beach right now, and when I was taking pics, the yellow screen appeared. I turned it off last night, but it's still there.

  • Best Practice for Global Constants in IRPT files?

    Hi, For Global constants that need to be accessed inside IRPT files and would only change as the code migrates from Dev->Test->Production, what is the preferred method of storing and accessing these constants in a reusable way so that we don't h

  • Music video app?

    Hi i plug my iphone into my car everytime i get in it and i get video on all my monitors like if i watch a movie, but my question is can someone make an app that would play continous music videos like mtv on tv but on the iphone. (yea i know its a li