How to check for unusable indexes

Hi all,
I am getting this error:
Index ORVETL.NU_1_761 or some [sub]partitions of the index have been marked unusable
How can I check which indexes are unusable (partitioned, non-partitioned, all of them)? Please help. A query sketch is included after the alert log below.

I don't know which query the user was running; when I find out I will update this thread.
Please post the alert log.
ORACLE Instance IDEARADB - Can not allocate log, archival required
Sun Jun 20 13:54:03 2010
Thread 1 cannot allocate new log, sequence 44150
All online logs needed archiving
Current log# 2 seq# 44149 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 13:54:08 2010
Thread 1 advanced to log sequence 44150 (LGWR switch)
Current log# 1 seq# 44150 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 13:56:47 2010
Thread 1 advanced to log sequence 44151 (LGWR switch)
Current log# 3 seq# 44151 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:00:34 2010
Thread 1 advanced to log sequence 44152 (LGWR switch)
Current log# 2 seq# 44152 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:06:55 2010
Thread 1 advanced to log sequence 44153 (LGWR switch)
Current log# 1 seq# 44153 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:09:31 2010
Thread 1 advanced to log sequence 44154 (LGWR switch)
Current log# 3 seq# 44154 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:12:07 2010
Thread 1 advanced to log sequence 44155 (LGWR switch)
Current log# 2 seq# 44155 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:14:30 2010
Thread 1 advanced to log sequence 44156 (LGWR switch)
Current log# 1 seq# 44156 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:17:09 2010
Thread 1 advanced to log sequence 44157 (LGWR switch)
Current log# 3 seq# 44157 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:19:42 2010
Thread 1 advanced to log sequence 44158 (LGWR switch)
Current log# 2 seq# 44158 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:22:19 2010
Thread 1 advanced to log sequence 44159 (LGWR switch)
Current log# 1 seq# 44159 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:24:45 2010
Thread 1 advanced to log sequence 44160 (LGWR switch)
Current log# 3 seq# 44160 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:27:15 2010
Thread 1 advanced to log sequence 44161 (LGWR switch)
Current log# 2 seq# 44161 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:29:45 2010
Thread 1 advanced to log sequence 44162 (LGWR switch)
Current log# 1 seq# 44162 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:32:21 2010
Thread 1 advanced to log sequence 44163 (LGWR switch)
Current log# 3 seq# 44163 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:34:58 2010
Thread 1 advanced to log sequence 44164 (LGWR switch)
Current log# 2 seq# 44164 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:37:35 2010
Thread 1 advanced to log sequence 44165 (LGWR switch)
Current log# 1 seq# 44165 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:40:08 2010
Thread 1 advanced to log sequence 44166 (LGWR switch)
Current log# 3 seq# 44166 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:43:52 2010
Thread 1 advanced to log sequence 44167 (LGWR switch)
Current log# 2 seq# 44167 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:50:46 2010
Thread 1 advanced to log sequence 44168 (LGWR switch)
Current log# 1 seq# 44168 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 14:51:47 2010
Thread 1 advanced to log sequence 44169 (LGWR switch)
Current log# 3 seq# 44169 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 14:53:05 2010
Thread 1 advanced to log sequence 44170 (LGWR switch)
Current log# 2 seq# 44170 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 14:56:59 2010
Thread 1 advanced to log sequence 44171 (LGWR switch)
Current log# 1 seq# 44171 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 15:07:42 2010
Thread 1 advanced to log sequence 44172 (LGWR switch)
Current log# 3 seq# 44172 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 15:17:31 2010
Thread 1 advanced to log sequence 44173 (LGWR switch)
Current log# 2 seq# 44173 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 15:24:32 2010
Thread 1 advanced to log sequence 44174 (LGWR switch)
Current log# 1 seq# 44174 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 15:32:49 2010
Thread 1 advanced to log sequence 44175 (LGWR switch)
Current log# 3 seq# 44175 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 15:41:28 2010
Thread 1 advanced to log sequence 44176 (LGWR switch)
Current log# 2 seq# 44176 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 15:45:28 2010
Index ORVETL.NU_1_761 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 15:45:49 2010
Index ORVETL.NU_2_761 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 15:48:24 2010
Index ORVETL.NU_1_762 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 15:49:03 2010
Index ORVETL.NU_2_762 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 15:51:11 2010
Thread 1 advanced to log sequence 44177 (LGWR switch)
Current log# 1 seq# 44177 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 16:01:10 2010
Thread 1 advanced to log sequence 44178 (LGWR switch)
Current log# 3 seq# 44178 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 16:06:20 2010
Index ORVETL.NU_1_751 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 16:11:04 2010
Thread 1 advanced to log sequence 44179 (LGWR switch)
Current log# 2 seq# 44179 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 16:16:40 2010
Index ORVETL.NU_1_753 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 16:17:54 2010
Index ORVETL.NU_2_753 or some [sub]partitions of the index have been marked unusable
Sun Jun 20 16:20:28 2010
Thread 1 advanced to log sequence 44180 (LGWR switch)
Current log# 1 seq# 44180 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 16:30:11 2010
Thread 1 advanced to log sequence 44181 (LGWR switch)
Current log# 3 seq# 44181 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 16:38:45 2010
Thread 1 advanced to log sequence 44182 (LGWR switch)
Current log# 2 seq# 44182 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 16:40:40 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54067K exceeds notification threshold (51200K)
Sun Jun 20 16:41:08 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54016K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1081756.trc
Sun Jun 20 16:41:17 2010
Thread 1 advanced to log sequence 44183 (LGWR switch)
Current log# 1 seq# 44183 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 16:41:32 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54066K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1081756.trc
Sun Jun 20 16:41:56 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54015K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1081756.trc
Sun Jun 20 16:49:45 2010
Thread 1 advanced to log sequence 44184 (LGWR switch)
Current log# 3 seq# 44184 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 16:58:01 2010
Thread 1 advanced to log sequence 44185 (LGWR switch)
Current log# 2 seq# 44185 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 17:00:15 2010
Thread 1 advanced to log sequence 44186 (LGWR switch)
Current log# 1 seq# 44186 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 17:02:37 2010
Thread 1 advanced to log sequence 44187 (LGWR switch)
Current log# 3 seq# 44187 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 17:05:13 2010
Thread 1 advanced to log sequence 44188 (LGWR switch)
Current log# 2 seq# 44188 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 17:07:37 2010
Thread 1 advanced to log sequence 44189 (LGWR switch)
Current log# 1 seq# 44189 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 17:13:36 2010
Thread 1 advanced to log sequence 44190 (LGWR switch)
Current log# 3 seq# 44190 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 17:19:16 2010
Thread 1 advanced to log sequence 44191 (LGWR switch)
Current log# 2 seq# 44191 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 17:25:15 2010
Thread 1 advanced to log sequence 44192 (LGWR switch)
Current log# 1 seq# 44192 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 17:32:18 2010
Thread 1 advanced to log sequence 44193 (LGWR switch)
Current log# 3 seq# 44193 mem# 0: +REDO_LOG/redo03.log
Sun Jun 20 17:37:41 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 53686K exceeds notification threshold (51200K)
Sun Jun 20 17:38:02 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 53898K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1241490.trc
Sun Jun 20 17:38:21 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54025K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1241490.trc
Sun Jun 20 17:38:40 2010
Memory Notification: Library Cache Object loaded into SGA
Heap size 54012K exceeds notification threshold (51200K)
Details in trace file /oracle/oracle/Oracle_10gr2_DB/admin/IDEARADB/udump/idearadb_ora_1241490.trc
Sun Jun 20 17:39:21 2010
Thread 1 advanced to log sequence 44194 (LGWR switch)
Current log# 2 seq# 44194 mem# 0: +REDO_LOG/redo02.log
Sun Jun 20 17:45:28 2010
Thread 1 advanced to log sequence 44195 (LGWR switch)
Current log# 1 seq# 44195 mem# 0: +REDO_LOG/redo01.log
Sun Jun 20 17:51:21 2010
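
To answer the original question directly: here is a minimal sketch of the data dictionary queries that list unusable indexes, index partitions, and index subpartitions (assuming you can query the DBA_ views; otherwise substitute the ALL_ or USER_ views):

-- Non-partitioned indexes marked UNUSABLE
SELECT owner, index_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE';

-- Unusable index partitions
SELECT index_owner, index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE status = 'UNUSABLE';

-- Unusable index subpartitions
SELECT index_owner, index_name, partition_name, subpartition_name, status
  FROM dba_ind_subpartitions
 WHERE status = 'UNUSABLE';

An unusable index (or index [sub]partition) can then be rebuilt with ALTER INDEX ... REBUILD, ALTER INDEX ... REBUILD PARTITION ..., or ALTER INDEX ... REBUILD SUBPARTITION ....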

Similar Messages

  • How to find unused indexes in oracle 10g r2

    Hi all,
    db:oracle 10.2.0.3
    os:solaris
    I want to rebuild some of the indexes (due to poor performance of the database).
    How do I find the unused indexes in an Oracle 10gR2 database?
    Can anyone help me out, please?

    kk001 wrote:
    Hi all,
    db:oracle 10.2.0.3
    os:solaris
    I want to rebuild some of the indexes (due to poor performance of the database).
    How do I find the unused indexes in an Oracle 10gR2 database?
    Can anyone help me out, please?

    You can use V$OBJECT_USAGE.
    But how did you decide that the indexes need rebuilding?
    How did you decide that the problem is related to the indexes?
    What exactly do you mean by "poor performance of the database"? Are some queries hanging or long-running, or is the whole system hanging or performing poorly?
    In general you do not need to rebuild indexes (except in special cases); first answer the questions above.
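
    For reference, V$OBJECT_USAGE only reports on indexes whose usage monitoring has been switched on, and it only shows indexes in the current schema. A minimal sketch of the workflow (the index name here is a placeholder):

    ALTER INDEX my_index MONITORING USAGE;
    -- ... let a representative workload run for a while ...
    SELECT index_name, table_name, monitoring, used,
           start_monitoring, end_monitoring
      FROM v$object_usage
     WHERE index_name = 'MY_INDEX';
    ALTER INDEX my_index NOMONITORING USAGE;

    USED = 'NO' after a representative monitoring period suggests the index was not used by any query during that time.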

  • How to check UNUSED ADMIN-SCRIPTS through SQL Query..??

    Can someone help me with an SQL query that would list the unused admin scripts for the past few months (not the routing scripts)?
    Thanks in advance for your help.

    You can use the queries below.
    Step 1: query to pull the scripts used between 17/12/2011 and 18/01/2012:
    select MasterScriptID, EnterpriseName from Master_Script where MasterScriptID in (
    select MasterScriptID from Script where ScriptID in (
    select distinct ScriptID from Route_Call_Detail
    where DateTime between '2011-12-17 00:00:00' and '2012-01-18 23:59:59'))
    Step 2: query to pull all the currently configured scripts:
    select MasterScriptID, EnterpriseName from Master_Script
    Step 3: once we have both result sets, we can use Excel's VLOOKUP to find the unused scripts (a combined query is sketched after this answer).
    Regards,
    Senthil
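
    As an alternative to the Excel step, a single query can return the unused scripts directly. This is only a sketch, assuming the same Master_Script, Script, and Route_Call_Detail tables and the same date range used above:

    select ms.MasterScriptID, ms.EnterpriseName
    from Master_Script ms
    where ms.MasterScriptID not in (
      select s.MasterScriptID from Script s
      where s.MasterScriptID is not null
        and s.ScriptID in (
          select distinct rcd.ScriptID from Route_Call_Detail rcd
          where rcd.DateTime between '2011-12-17 00:00:00' and '2012-01-18 23:59:59'))

    The "is not null" guard keeps NOT IN from returning an empty result if any Script row has a NULL MasterScriptID.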

  • Check for index existing

    I'm a DBA working with three application developers. They want to know how to check whether an index exists on a certain column of a table in their schema.
    A friend told me that to check for an index on a certain column you have to query the view DBA_IND_COLUMNS, which requires extra privileges, and I don't want to grant the application developers any extra privileges.
    He also suggested that the developers query the view USER_INDEXES in their own schema instead, but USER_INDEXES returns a lot of data and does not give the COLUMN NAME.
    How can I solve this problem: allow the application developers to check whether an index exists on a certain column in their schema without granting them any extra privileges?

    You can use a query like this (COLUMN and TABLE are reserved words in Oracle, so they cannot be used as column aliases; plain names are used instead):
    select ai.index_name,
           substr(ic.column_name, 1, 30) column_name,
           ic.column_position,
           ic.table_name,
           ai.tablespace_name
    from all_ind_columns ic, all_indexes ai
    where ai.owner = ic.index_owner
      and ai.index_name = ic.index_name
      and ai.table_name = 'TABLE_NAME'
      and ai.table_owner = 'OWNER';
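
    Since the developers should only be looking at their own schema, a sketch that requires no extra privileges at all is to join the USER_ views, which already include the column names (the table name below is a placeholder):

    select ui.index_name,
           uic.column_name,
           uic.column_position,
           ui.uniqueness,
           ui.status
    from user_indexes ui
    join user_ind_columns uic
      on uic.index_name = ui.index_name
    where ui.table_name = 'MY_TABLE'
    order by ui.index_name, uic.column_position;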

  • How to find out unused Indexes (regardless of restarts of server or database)?

    Hello colleagues,
    do you know how to get a list of unused indexes regardless of restarts of the SQL Server instance or of the database?
    Because (if I understand the information in
    BOL correctly) the entire content of the dynamic management view sys.dm_db_index_usage_stats is cleared when the SQL Server instance is restarted or the database is detached.
    As a result, when I run my script below (which generates a report listing unused indexes) immediately, or shortly, after a restart, it contains misleading/confusing information.
    USE [AdventureWorks]
    GO
    DECLARE @CurrDate as varchar(10), @CurrTime as varchar(5), @DatePartsSeparator as char(1), @EmptyDate as smalldatetime;
    SET @DatePartsSeparator = '/';
    SET @CurrDate = RIGHT ('0' + CONVERT(varchar,DATEPART (day,GETDATE())), 2);
    SET @CurrDate = @CurrDate + @DatePartsSeparator;
    SET @CurrDate = @CurrDate + RIGHT ('0' + CONVERT(varchar,DATEPART (month,GETDATE())), 2);
    SET @CurrDate = @CurrDate + @DatePartsSeparator;
    SET @CurrDate = @CurrDate + CONVERT(varchar,DATEPART (year,GETDATE()));
    SET @CurrTime = RIGHT ('0' + CONVERT(varchar,DATEPART (hour,GETDATE())), 2);
    SET @CurrTime = @CurrTime + ':';
    SET @CurrTime = @CurrTime + RIGHT ('0' + CONVERT(varchar,DATEPART (minute,GETDATE())), 2);
    SET @EmptyDate = CONVERT(smalldatetime, '01' + @DatePartsSeparator + '01' + @DatePartsSeparator + '2000');
    SELECT SERVERPROPERTY('servername') AS [Server]
    , DB_NAME(s.database_id) AS [Database]
    , o.type_desc [ObjectType]
    , o.name AS [Object]
    , i.name AS [Index]
    , (user_seeks + user_scans + user_lookups) AS Reads
    , user_updates AS Writes
    , ((SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) / 100) AS [Rows(1000)]
    , CASE
    WHEN s.user_updates < 1 THEN 100
    ELSE CAST(1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates AS decimal (12,5))
    END AS Reads_per_Write
    , (SELECT TOP 1 luu.[Last_User_Usage] FROM (
    SELECT COALESCE(s.last_user_lookup, @EmptyDate) AS [Last_User_Usage]
    UNION
    SELECT COALESCE(s.last_user_scan, @EmptyDate) AS [Last_User_Usage]
    UNION
    SELECT COALESCE(s.last_user_seek, @EmptyDate) AS [Last_User_Usage]
    ) AS luu ORDER BY luu.[Last_User_Usage] DESC) AS [Last_User_Usage]
    , COALESCE(s.last_user_update, @EmptyDate) AS [Last_User_Update]
    , CASE WHEN i.is_disabled = 0 THEN 'N' ELSE 'Y' END AS [Disabled]
    , i.index_id
    , o.create_date AS [Index_Created]
    , o.modify_date AS [Index_Modified]
    , u.name AS [Index_Owner]
    , @CurrDate AS [This_Report_Created]
    , @CurrTime AS [This_Report_Created_Time]
    , CONVERT(CHAR(40), SUSER_SNAME()) AS [This_Report_Created_by_User]
    , 'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(DB_NAME(s.database_id)) + '.' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) as 'drop statement'
    FROM sys.dm_db_index_usage_stats s
    INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
    INNER JOIN sys.objects o ON s.object_id = o.object_id
    INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
    LEFT OUTER JOIN sys.database_principals u ON OBJECTPROPERTY ( o.object_id , 'OwnerId' ) = u.principal_id
    WHERE OBJECTPROPERTY(s.object_id,'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND o.type IN ('U','V')
    AND o.is_ms_shipped = 0
    AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
    ORDER BY Reads;
    GO
    __________________________________________________________ Unless stated otherwise above, the following applies. Technical details: * OS: Windows Server 2008 R2, English, Enterprise Edition, x64, SP1 ** My user account is a member of the local 'Administrators'
    security group. * SQL Server: 2008 R2, English, Enterprise Edition, x64, SP1 ** My user account is a member of the 'sysadmin' server role.

    Hi,
    You are right. The sys.dm_db_index_usage_stats DMV tells how often and to what extent indexes are used.
    Please check the unused-indexes section in the article below to find the unused indexes:
    Uncover Hidden Data to Optimize Application Performance
    http://msdn.microsoft.com/en-us/magazine/cc135978.aspx#S5
    In addition, you can also use the Database Engine Tuning Advisor to verify whether you have too many indexes. DTA can easily find the bad indexes and provide recommendations.
    Reference:
    http://blogs.technet.com/b/sql_server_isv/archive/2011/04/08/fundamentals-running-database-engine-tuning-advisor-and-selecting-indexes.aspx
    Hope it helps.
    Tracy Cai
    TechNet Community Support
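
    To address the restart problem specifically, a common workaround (not mentioned in the thread, just a sketch with a made-up table name and job schedule) is to snapshot the DMV into a permanent table on a schedule and report over the accumulated history:

    -- One-time setup: history table for index usage snapshots
    CREATE TABLE dbo.IndexUsageHistory (
        capture_time  datetime NOT NULL DEFAULT GETDATE(),
        database_id   smallint NOT NULL,
        object_id     int      NOT NULL,
        index_id      int      NOT NULL,
        user_seeks    bigint   NOT NULL,
        user_scans    bigint   NOT NULL,
        user_lookups  bigint   NOT NULL,
        user_updates  bigint   NOT NULL
    );
    GO
    -- Run from a SQL Agent job (e.g. hourly): append the current DMV contents
    INSERT INTO dbo.IndexUsageHistory
        (database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates)
    SELECT database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID();
    GO
    -- Reporting: indexes with no reads recorded in any snapshot
    SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
    FROM sys.indexes i
    LEFT JOIN dbo.IndexUsageHistory h
        ON h.object_id = i.object_id AND h.index_id = i.index_id
    WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    GROUP BY i.object_id, i.index_id, i.name
    HAVING COALESCE(SUM(h.user_seeks + h.user_scans + h.user_lookups), 0) = 0;

    Because the history accumulates across restarts, the report no longer depends on how long the instance has been up.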

  • How to check whether a small table gets a full table scan when an indexed column is used in the WHERE clause

    How can I check whether a small table still gets a full table scan when an indexed column is used in the WHERE clause?
    Is there an example, or a link, where I can test whether a small table is read with a full table scan or via the index when the indexed column appears in the WHERE clause?

    Use explain plan on your statement or set autotrace traceonly in your SQL*Plus session followed by the SQL you are testing.
    For example
    SQL> set autotrace traceonly
    SQL> select *
      2  from XXX
      3  where id='fga';
    no rows selected
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=13 Card=1 Bytes=165)
       1    0   PARTITION RANGE (ALL) (Cost=13 Card=1 Bytes=165)
       2    1     TABLE ACCESS (FULL) OF 'XXX' (TABLE) (Cost=13 Card=1 Bytes=165)
    Statistics
              1  recursive calls
              0  db block gets
           1561  consistent gets
            540  physical reads
              0  redo size
           1864  bytes sent via SQL*Net to client
            333  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
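
    A small self-contained experiment along the same lines (all object names here are made up) is to build a little table with an index and compare the plan with and without a predicate on the indexed column:

    create table t_small (id varchar2(10), padding varchar2(100));
    insert into t_small
      select rownum, rpad('x', 100, 'x') from dual connect by level <= 1000;
    create index t_small_idx on t_small (id);
    exec dbms_stats.gather_table_stats(user, 'T_SMALL', cascade => true)

    set autotrace traceonly explain
    select * from t_small where id = '42';   -- typically INDEX RANGE SCAN + TABLE ACCESS BY INDEX ROWID
    select * from t_small;                   -- no predicate: TABLE ACCESS (FULL)
    set autotrace off

    For a very small table the optimizer may still prefer a full scan even with the predicate, which is exactly the behaviour the original question is about.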

  • How to check the unused users in portal.

    Hi Guru,
    We are auditing the Portal server. Can anyone tell us
    how to check for unused users in the Portal?
    Regards,
    Vivek

    Use Portal Activity Reporting to monitor the users; it shows which users have logged on to the portal.
    Below are the things you can monitor from the Portal Activity Report iView:
    1) the number of users who logged on during a period of time
    2) details of the users who logged on
    3) a particular iView or page
    Check the threads below for more help:
    http://help.sap.com/saphelp_nw04s/helpdata/en/47/87329cc84a199ce10000000a42189d/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/47/87346dc84a199ce10000000a42189d/frameset.htm
    Raghu

  • How to Check whether a table is indexed or not?

    Hi all,
    I am writing a C++ program in which I have to create an index if it does not already exist.
    The Oracle version is 10.2 and I am using Oracle Text for indexing.
    table -- create table xmltable (versionnumber number, instance xmltype);
    index -- create index configindex on xmltable (instance) indextype is ctxsys.context;
    drop -- drop index configindex force;
    My algorithm should be something like:
    if (configindex exists) ----------> What do I have to write here to check whether the index is present?
    { drop index configindex force; }
    create index configindex on xmltable (instance) indextype is ctxsys.context;
    Thanks.

    Hi,
    You could check the system view ALL_INDEXES .
    HTH,
    Chris
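
    A minimal PL/SQL sketch of that check, assuming the index lives in the current schema (the data dictionary stores the name in upper case):

    declare
      l_count pls_integer;
    begin
      select count(*)
        into l_count
        from user_indexes
       where index_name = 'CONFIGINDEX';

      if l_count > 0 then
        execute immediate 'drop index configindex force';
      end if;

      execute immediate
        'create index configindex on xmltable (instance) indextype is ctxsys.context';
    end;
    /

    The same block can be sent from the C++ program as a single anonymous block, avoiding a separate round trip just to check for the index.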

  • How to know if Index is used by a program or job

    Hi,
    I need to check whether a specific index, such as /BIC/B0000**, is used by any job or program. I am working on updating the Oracle cost-based optimizer statistics on a weekly schedule, but before changing the schedule, while checking the indexes, I found an index in an UNUSABLE state.
    SQL> select INDEX_NAME,INDEX_TYPE,STATUS from dba_indexes where index_name like '/BIC/B0000881001KE';
    INDEX_NAME                     INDEX_TYPE                  STATUS
    /BIC/B0000881001KE             NORMAL                      UNUSABLE
    So I cannot proceed, because I need to know whether this index is used by a program. Can you help me with this?

    Hi
    Take a look at SAP Note 184905 - Collective note on performance Several notes related to indexing can be found in the same

  • CIN: How to check the material document posted without excise invoice

    Hi Guru,
    Please advise how to find material documents that were posted without an excise invoice.
    I have tried transaction J1I7, but it seems to start from the excise invoices and then collect the material documents.
    My case is the opposite: I need to find the material documents WITHOUT an excise invoice, for internal tracking purposes.
    At the moment we start from transaction MB51 to get the list of material documents and then check them against J_1IEXCHDR / J_1IEXCDTL.
    Best regards,
    Pakorn

    Hi,
    Try creating a query in transaction SQVI, combining tables MKPF and J_1IEXCHDR/J_1IEXCDTL, for your requirement.
    Check these threads on how to create such a query:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6018c1ae-8c44-2d10-6ea9-c3fad2c82880?QuickLink=index&…
    http://ptgmedia.pearsoncmg.com/images/9780672329029/samplechapter/0672329026_CH03.pdf
    Regards
    Binoy

  • How to check the verity version in our PeopleSoft Installation?

    How do I check the Verity version in our PeopleSoft installation? I am not sure whether Verity is installed at all, and if it is, which version.

    Yes, it says the version is 5.0.1.
    Is there any difference in installation or configuration when the app server and web server are on the same machine versus installed on different servers?
    ============================================
    D:\fs840\webserv\peoplesoft>mkvdk
    mkvdk - Verity, Inc. Version 5.0.1 (_nti40, Jul 23 2004)
    Usage: mkvdk [<option>...] <filespec>...
    Where <option> can be a VDK switch, or any of:
    -about Show the collection's about resources
    -autodel Delete bulk insert file when no longer needed
    -backup <dir> Specify collection backup location
    -bulk Submit bulk insert file(s)
    -charmap <name> Specify the character map to VDK
    -collection <path> Specify the collection (required)
    -create Create the collection
    -credentials <user> Specify user[:passwd][:domain][:mailbox]
    -datapath <path> Specify VDK datapath
    -datefmt <fmt> Specify date format to VDK
    -debug Enable debugging output
    -delete Delete documents
    -description <desc> Set the collection's description
    -diskcache <num> Set VDK's disk cache size (kbytes)
    -extract Extract field values from text
    -help Print this usage information
    -insert Insert documents (default)
    -locale <locale> Specify the locale to VDK
    -logfile <file> Save output in a log file
    -loglevel <num> Set the VDK output level for the log
    -mailboxes This option is depracated. Use the credentials option instead
    -maxfiles <num> Set VDK's maximum number of open files
    -maxmemory <num> Set VDK's maximum memory usage (kbytes)
    -mode <mode> Set the indexing mode
    -modify Modify fields using field/value pairs from a bulkfile
    -nohousekeep Disable housekeeping
    -noindex Disable indexing
    -nolock Turns off locking (dangerous)
    -nooptimize Disable optimizations
    -nosave Don't save collection work list
    -noservice Prevents servicing of submitted work
    -nosubmit Don't submit work to VDK
    -numdocs <num> Number of documents to insert from bulk insert file(s)
    -numpages <num> Synonym for diskcache for backward compatibility
    -offset <num> Specify offset into bulk insert file(s)
    -online Flag for online Bulk Modify
    -optimize <spec> Optimize the collection
    -outlevel <num> Set the VDK output level
    -persist Service the collection forever
    -purge Remove all documents from collection
    -purgeback Purge in the background
    -purgewait <secs> Specify delay before purge
    -quiet Suppress all non-error messages
    -repair Repair the collection
    -servlev <spec> Advanced option for overriding service level
    -sleeptime <secs> Interval between service calls for persist
    -style <dir> Specify style directory for create
    -submit Synonym for noservice for backward compatibility
    -synch Perform work synchronously
    -topicset <path> Specify VDK topic set
    -update Update documents
    -vdkhome <path> Specify VDK home
    -verbose Output more information
    -words Build word assist list
    -wordindex Build word assist index
    The <spec> for -optimize is a hyphenated string of:
    maxmerge Perform maximal merging of partitions
    squeeze Recover space from deleted documents
    vdbopt Build optimized VDB's
    spanword Create word list spanning all partitions
    ngramindex Create ngram index into spanning word list
    maxclean Really clean (not for read-write)
    readonly Make the collection read-only
    tuneup Fully optimize for read-write use
    publish Fully optimize for read-only use
    The <spec> for -servlev is a hyphenated string of:
    search Enable search and retrieval
    insert Enable adding and updating documents
    optimize Enable opportunistic collection optimization
    assist Enable building of word list
    housekeep Enable housekeeping of unneeded files
    delete Enable document deletion
    backup Enable backup
    purge Enable background purging
    repair Enable collection repair
    dataprep Same as search-index-optimize-assist-housekeep
    index Same as insert-delete
    Error: must specify collection
    mkvdk done
    D:\fs840\webserv\peoplesoft>

  • HOW TO Delete Unused Media from FINAL CUT PRO

    *HOW TO DELETE UNUSED MEDIA FROM HARD DRIVE IN FINAL CUT PRO.* Keywords: disk disc drive space remove compressor
    SUMMARY:
    Say your original clip is 10 GB; you can keep only what you need and delete the rest from your hard drive. I know people on the forums say to just buy another hard drive, but no! It took me two days to figure this out, but here it is. Email me if needed at [email protected]
    QUESTION:
    I captured 1 hour of video at a time and now want to delete from my hard drive all the parts I don’t want to use. I have 300GB of junk on my hard drive and I only want to use a few scenes using a few megabytes, so that I'll get most of my 300GB back!!!!
    ANSWER:
    In final cut on Mac, click on one small clip (to get familiar with process), go to “file”, “export”, then “using compressor”. This will open compressor software. Your file will open in the left top window. Then you need to batch the job or jobs. IMPORTANT: I’ve also solved the Compressor problem when you press “submit” to a batch and it states “Cluster: None”. Search forum for “In compressor you experience what I have”.
    <Edited by Moderator>

    Part of the problem could lie in the fact that it's an iMovie project coming into FCP; I wouldn't rule that out. Second may be the terms you use and what you expect to happen. For example, a subclip is just a smaller clip subbed from a larger clip; it has nothing to do with media management. But you can media-manage a subclip and delete the unused media.
    Try this (be sure to back up the original clip first): take a long clip into the Viewer, set an IN and OUT duration of about 30 frames, and then subclip that (Modify > Make Subclip). A new subclip will appear in the Browser with a torn-clip icon and the name Subclip. Right-click on the subclip and choose Media Manager.... Check "Delete unused media from duplicated clip". As you toggle that, the green Modified bar at the top changes from the full clip size to a tiny modified size. If it does not, then one of two things is wrong: user error, or the system is screwed up. That's the easiest way to test Media Manager.

  • How to check with table for cursor..?

    How do I check against a table from inside a cursor loop?
    I have a table temp_final_plan, and I want to update a row if it already exists (and insert it otherwise). Below is the procedure.
    CREATE OR REPLACE PROCEDURE spu_final_profit_plan
    AS
    -- Constant declarations
      ln_errnum number := 0;
    -- Variable declarations
       ls_errmsg app_errors.err_msg%TYPE;
       ls_appmsg app_errors.app_msg%TYPE;
       ls_appid  app_errors.app_id%TYPE;
    -- Cursor declaration for final_update_el
    CURSOR cur_final_update_el IS
        select '910' ent,
               '9127316' center,
               post_acct,
               sum(avg_mtd_01) sum_avg_mtd_01,
               sum(avg_mtd_02) sum_avg_mtd_02,
               sum(avg_ytd_01) sum_avg_ytd_01,
               sum(avg_ytd_02) sum_avg_ytd_02
          from mon_act_cypy
         where rec_type = 'A'
           and sum_flag = 'D'
           and yr = '2008'
           and substr(ctr_or_hier, 1, 2) = 'el'
           and ent || sub_ent in
               (select ent || sub_ent
                  from ent_ref
                 where roll_ent || roll_sub_ent = '999100')
         group by post_acct
        having sum(avg_mtd_01) <> 0
            or sum(avg_mtd_02) <> 0
            or sum(avg_ytd_01) <> 0
            or sum(avg_ytd_02) <> 0;
    -- Cursor declaration for final_update
    CURSOR cur_final_update IS
        select b.plan_ent b_plan_ent,
               b.plan_ctr b_plan_ctr,
               a.post_acct a_post_acct,
               sum(a.avg_mtd_01) sum_avg_mtd_01,
               sum(a.avg_mtd_02) sum_avg_mtd_02,
               sum(a.avg_ytd_01) sum_ytd_mtd_01,
               sum(a.avg_ytd_02) sum_ytd_mtd_02
          from mon_act_cypy a,
               plan_unit_tbl b
         where a.ent || a.ctr_or_hier = b.ent || b.ctr_or_hier
           and a.rec_type = 'A'
           and a.sum_flag = 'D'
           and a.yr = '2008'
           and b.hier_tbl_num = '001'
           and a.ent || a.sub_ent in
               (select ent || sub_ent
                  from ent_ref
                 where roll_ent || roll_sub_ent = '999100')
         group by b.plan_ent, b.plan_ctr, a.post_acct
        having sum(a.avg_mtd_01) <> 0
            or sum(a.avg_mtd_02) <> 0
            or sum(a.avg_ytd_01) <> 0
            or sum(a.avg_ytd_02) <> 0;
    -- Begin the procedure body
       BEGIN
    -- Insert / Update final profit plan for final_update query using cursor
       FOR rec_final_update_el IN cur_final_update_el
       LOOP
       EXIT WHEN rec_final_update_el%NOTFOUND;
       IF rec_final_update_el. THEN
          UPDATE temp_final_plan
             SET sum_avg_mtd_01 = rec_final_update_el.sum_avg_mtd_01,
                 sum_avg_mtd_02 = rec_final_update_el.sum_avg_mtd_02,       
                 sum_avg_ytd_01 = rec_final_update_el.sum_avg_ytd_01,       
                 sum_avg_ytd_02 = rec_final_update_el.sum_avg_ytd_02,       
           WHERE ent = rec_final_update_el.ent
             AND center = rec_final_update_el.center
             AND post_acct = rec_final_update_el.post_acct;
       ELSE
          INSERT INTO temp_final_plan VALUES(rec_final_update_el.ent,
                                             rec_final_update_el.center,
                                             rec_final_update_el.post_acct,
                                             rec_final_update_el.sum_avg_mtd_01,
                                             rec_final_update_el.sum_avg_mtd_02,
                                             rec_final_update_el.sum_avg_ytd_01,
                                             rec_final_update_el.sum_avg_ytd_02);
       END IF;
       END LOOP;
    -- Insert / Update final profit plan for final_update query using cursor
       FOR rec_final_update IN cur_final_update
       LOOP
       EXIT WHEN rec_final_update%NOTFOUND;
       IF rec_final_update. THEN
          UPDATE temp_final_plan
             SET sum_avg_mtd_01 = rec_final_update.sum_avg_mtd_01,
                 sum_avg_mtd_02 = rec_final_update.sum_avg_mtd_02,       
                 sum_avg_ytd_01 = rec_final_update.sum_avg_ytd_01,       
                 sum_avg_ytd_02 = rec_final_update.sum_avg_ytd_02,       
           WHERE ent = rec_final_update.b_plan_ent
             AND center = rec_final_update.b_plan_ctr
             AND post_acct = rec_final_update.a_post_acct;
       ELSE
          INSERT INTO temp_final_plan VALUES(rec_final_update.b_plan_ent,
                                             rec_final_update.b_plan_ctr,
                                             rec_final_update.a_post_acct,
                                             rec_final_update.sum_avg_mtd_01,
                                             rec_final_update.sum_avg_mtd_02,
                                             rec_final_update.sum_avg_ytd_01,
                                             rec_final_update.sum_avg_ytd_02);
       END IF;
       END LOOP;
    -- EXCEPTION handling section
       EXCEPTION
    -- Fire OTHERS Exception case by default
       WHEN OTHERS THEN
    -- ROLL BACK Transaction, if any failure
       ROLLBACK;
       ln_errnum := SQLCODE;
       ls_errmsg := SUBSTR(SQLERRM, 1, 100);
    -- Log the ERRORS into APP_ERRORS table using SPU_LOG_ERRORS procedure
       spu_log_errors(ln_errnum, ls_errmsg, ls_appid, ls_appmsg);
    -- End of the stored procedure
    END spu_final_profit_plan;

    I'm not sure what you mean by "How to check with table for cursor..?", but I'll offer a comment on your code snippet. I think you want to know how to check whether a record already exists so you know whether to perform an INSERT or an UPDATE.
    Here is a snippet of your code. I'll put my comments in "Comment" style in your code.
    -- Insert / Update final profit plan for final_update query using cursor
       FOR rec_final_update_el IN cur_final_update_el
       LOOP
    /* There is no need to test for %NOTFOUND since you are using Cursor FOR Loop! 
    ** This construct automatically exits when the last record is processed. */
       EXIT WHEN rec_final_update_el%NOTFOUND;
    /* Is this where you would like to know how to Check if the record already exist??
    ** I asked this because, 'rec_final_update_el.' is not valid syntax.  Are you looking for
    ** an Cursor Attribute or Method you can check here? 
    ** I would suggest a Primary Key or Unique Index on ENT, CENTER, and POST_ACCT
    ** on the TEMP_FINAL_PLAN table. Then simply perform an INSERT and code an
    ** Exception to UPDATE when you get a DUP_VAL_ON_INDEX exception.  Otherwise,
    ** you will need to simply run an Implicit or Explicit Cursor to test if the row exists and
    ** use this return value to determine if you should INSERT or UPDATE.  */
       IF rec_final_update_el. THEN
          UPDATE temp_final_plan
             SET sum_avg_mtd_01 = rec_final_update_el.sum_avg_mtd_01,
                 sum_avg_mtd_02 = rec_final_update_el.sum_avg_mtd_02,       
                 sum_avg_ytd_01 = rec_final_update_el.sum_avg_ytd_01,       
                 sum_avg_ytd_02 = rec_final_update_el.sum_avg_ytd_02,       
           WHERE ent = rec_final_update_el.ent
             AND center = rec_final_update_el.center
             AND post_acct = rec_final_update_el.post_acct;
       ELSE
          INSERT INTO temp_final_plan VALUES(rec_final_update_el.ent,
                                             rec_final_update_el.center,
                                             rec_final_update_el.post_acct,
                                             rec_final_update_el.sum_avg_mtd_01,
                                             rec_final_update_el.sum_avg_mtd_02,
                                             rec_final_update_el.sum_avg_ytd_01,
                                             rec_final_update_el.sum_avg_ytd_02);
       END IF;
       END LOOP;
    I hope I've answered your question, but if I haven't, please provide more details so we can better understand your request.
    Craig...
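
    A minimal sketch of the insert-then-update pattern described above, assuming a unique index on temp_final_plan(ent, center, post_acct); the column list follows the posted code and the literal values are placeholders:

    begin
      insert into temp_final_plan (ent, center, post_acct,
                                   sum_avg_mtd_01, sum_avg_mtd_02,
                                   sum_avg_ytd_01, sum_avg_ytd_02)
      values ('910', '9127316', 'POST1', 0, 0, 0, 0);
    exception
      when dup_val_on_index then
        update temp_final_plan
           set sum_avg_mtd_01 = 0,
               sum_avg_mtd_02 = 0,
               sum_avg_ytd_01 = 0,
               sum_avg_ytd_02 = 0
         where ent = '910'
           and center = '9127316'
           and post_acct = 'POST1';
    end;
    /

    Inside the cursor FOR loops the literals would be replaced by the rec_final_update_el fields; alternatively, a single MERGE statement driven by the cursor query would avoid the row-by-row loop altogether.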

  • How to maintain bitmap index on a large table in DW?

    Hi all,
    We have many tables that are constantly loaded in either FULL or INCREMENTAL mode,
    and we have created many BITMAP indexes and several B*Tree indexes (from PRIMARY KEY or UNIQUE constraints) on those tables.
    So, what I want to know is: how should those BITMAP (and B*Tree) indexes be maintained for the different loading modes?
    For example, should I drop the indexes before a full load and re-create them afterwards,
    and do nothing for an incremental load? I am aware that loading with the indexes in place takes more time.
    Any links, books, articles or opinions would be highly appreciated.
    Thanks

    Just to reiterate and add to what Adam said, from the Oracle documentation:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/indexes002.htm#CIHJIDJG
    Unusable indexes
    An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is to improve bulk load performance. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later re-creating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then rebuild it.
    You can create an index in the unusable state, or you can mark an existing index or index partition unusable. In some cases the database may mark an index unusable, such as when a failure occurs while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.
    An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.
    Beginning with Oracle Database 11g Release 2, when you make an existing index unusable, its index segment is dropped.
    The functionality of unusable indexes depends on the setting of the SKIP_UNUSABLE_INDEXES initialization parameter. When SKIP_UNUSABLE_INDEXES is TRUE (the default), then:
    •DML statements against the table proceed, but unusable indexes are not maintained.
    •DML statements terminate with an error if there are any unusable indexes that are used to enforce the UNIQUE constraint.
    •For nonpartitioned indexes, the optimizer does not consider any unusable indexes when creating an access plan for SELECT statements. The only exception is when an index is explicitly specified with the INDEX() hint.
    •For a partitioned index where one or more of the partitions are unusable, the optimizer does not consider the index if it cannot determine at query compilation time if any of the index partitions can be pruned. This is true for both partitioned and nonpartitioned tables. The only exception is when an index is explicitly specified with the INDEX() hint.
    When SKIP_UNUSABLE_INDEXES is FALSE, then:
    •If any unusable indexes or index partitions are present, any DML statements that would cause those indexes or index partitions to be updated are terminated with an error.
    •For SELECT statements, if an unusable index or unusable index partition is present but the optimizer does not choose to use it for the access plan, the statement proceeds. However, if the optimizer does choose to use the unusable index or unusable index partition, the statement terminates with an error.
    For incremental loads it really depends on the volume and on whether the new data only goes into new partitions or subpartitions. If the incremental data goes all over the place and/or only touches a few thousand rows, you may want to keep the indexes valid and let Oracle maintain them. If millions of rows are added, or the incremental data only goes into new partitions/subpartitions, marking the indexes unusable for those partitions/subpartitions and rebuilding them afterwards may yield better results, as sketched below.
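
    A minimal sketch of that partition-level approach (the index and partition names are made up):

    -- Before bulk loading into partition P_2010_06:
    alter index sales_cust_bix modify partition p_2010_06 unusable;

    -- ... bulk load the partition ...

    -- After the load, rebuild just that index partition:
    alter index sales_cust_bix rebuild partition p_2010_06;

    -- For a whole (non-partitioned) index the equivalent is:
    alter index sales_pk unusable;
    alter index sales_pk rebuild;

    With SKIP_UNUSABLE_INDEXES left at its default of TRUE, queries and DML continue to work during the load, as described in the quoted documentation (note the UNIQUE-constraint exception above).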

  • How to tell which Indexes are not being used?

    We are a large development shop and have many customers. Our database design is very generic so that it works for all of our customers. Each night we use an SSIS ETL process to bring down large amounts of data from the iSeries into SQL Server. One
    particularly large customer takes a very long time, and we are looking for ways to speed up their data import and transformation. I would like to see which indexes this customer does not use and possibly remove them. Each night we fully repopulate hundreds of staging
    and ODS tables and incrementally delete and repopulate the day's work for a handful of history-type tables. Removing some indexes from the large tables could make a big impact.
    How can I tell which indexes the customer does not use?

    > IDENTIFYING UNUSED INDEXES IN A SQL SERVER DATABASE 
       Just because an index is not being used does not necessarily mean it should be removed.
    > Index This: All About SQL Server Indexes
    sp_BlitzIndex
    José Diz     Belo Horizonte, MG - Brasil
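
    For a quick first pass, a short sketch of the usual DMV check (the counters reset on every instance restart, so interpret the result with the caveats from the linked articles; nothing customer-specific is assumed here):

    -- Nonclustered indexes in the current database with no recorded reads since the last restart
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes i
    LEFT JOIN sys.dm_db_index_usage_stats s
           ON s.object_id = i.object_id
          AND s.index_id  = i.index_id
          AND s.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND COALESCE(s.user_seeks + s.user_scans + s.user_lookups, 0) = 0
    ORDER BY table_name, index_name;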
