Question about "ANALYZE TABLE VALIDATE STRUCTURE" in 9i

Hi,
I have a 9.2.0.8 database on AIX. After a storage copy of my production datafiles from one site to another, I ran DBV to ensure that all the copies were correct, and then started my database without problems.
Now, would it be a good idea to run an "ANALYZE TABLE tabname VALIDATE STRUCTURE;" statement on all the tables of my database to make sure there is no corruption? Are there any drawbacks to performing this operation?

Insaponata wrote:
Hi,
I have a 9.2.0.8 database on AIX. After a storage copy of my production datafiles from one site to another, I ran DBV to ensure that all the copies were correct, and then started my database without problems.
Now, would it be a good idea to run an "ANALYZE TABLE tabname VALIDATE STRUCTURE;" statement on all the tables of my database to make sure there is no corruption? Are there any drawbacks to performing this operation?
IMO it's needless.
Just take a full backup through RMAN and validate that backup.
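For reference, if you do decide to run the check anyway, ANALYZE ... VALIDATE STRUCTURE works one table at a time, so you would have to generate the statement for every table. A minimal SQL*Plus sketch follows (the schema name SCOTT is only a placeholder; CASCADE also cross-checks the indexes and is correspondingly slower, and the validation blocks concurrent DML unless you add ONLINE):
-- Generate the validation statements for one schema and spool them to a script
SET PAGESIZE 0 FEEDBACK OFF
SPOOL validate_structure.sql
SELECT 'ANALYZE TABLE ' || owner || '.' || table_name ||
       ' VALIDATE STRUCTURE CASCADE;'
  FROM dba_tables
 WHERE owner = 'SCOTT';
SPOOL OFF
-- Then run the spooled file at a quiet time:
-- @validate_structure.sql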
regards

Similar Messages

  • Question about Analyze Table

    Hi Guys,
    I am creating an Oracle Job that performs two tasks:
    1.) Delete records from Table A that are older than 30 days.
    2.) Perform ANALYZE TABLE on Table A after deleting the records.
    The thing that makes me think is that, aside from the job I'm creating, there are other jobs/processes that access Table A.
    Will any problem occur when my job performs the ANALYZE TABLE on Table A while other jobs/processes are accessing Table A concurrently (e.g. inserting, querying)?
    I'll surely appreciate any idea from you guys!
    Thanks,
    Jay

    Other than the I/O load that it causes, the ANALYZE statement has no effect on other sessions while it is running.
    However, on completion, it will invalidate all SQL referencing that table. All new invocations of those SQL statements will need re-parsing and may get a new execution plan, different from the plans used for the same statements before the ANALYZE. Statements already running will continue until they complete.
    If you have a job / session / application that frequently queries the table, you might suddenly see a change in execution plan for those queries after the ANALYZE completes.
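    For what it's worth, a minimal sketch of such a job body is shown below, assuming a hypothetical table TABLE_A with a DATE column CREATED_DT (both names are placeholders). DBMS_STATS is generally preferred over ANALYZE for gathering statistics, and it can also invalidate dependent cursors as described above:
    -- Hedged sketch of the job body; TABLE_A and CREATED_DT are placeholder names
    BEGIN
      -- 1) Purge records older than 30 days
      DELETE FROM table_a
       WHERE created_dt < SYSDATE - 30;
      COMMIT;
      -- 2) Refresh optimizer statistics for the table and its indexes
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                    tabname => 'TABLE_A',
                                    cascade => TRUE);
    END;
    /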

  • Question about Update Tables

    Hello Gurus,
    I have a question about "update table" entries. I read somewhere that an entry in update table  is done at the time of the OLTP transaction. Is this correct? If so, does this happen on a V1 update or V2 update? Please clarify. Similarly, an entry in the "extraction queue" (incase you are using "queued delta" will happen on V1 update. I just want to get a clarification on both these methods. Any help in this matter is highly appreciated.
    Thanks,
    Sreekanth

    Hi
    Update tables are temporary tables that are refreshed by the V3 job. An update table is like a buffer: it gets filled after the application tables and statistics tables have been updated through the V1/V2 update. We can bypass this process with the other delta methods.
    M Kalpana

  • Conversion Table Questions: problems converting tables to structured

    I'm in the process of converting unstructured FM8 docs to structured docs using FM10 and the DITA structure. Right now I'm trying to build a conversion table to structure the docs, but I'm having a lot of trouble with the tables. The main problem seems to be with the tgroup element, since I can't figure out how to get the thead and tbody to wrap in the tgroup element.
    Here is an example of one of the tables I'm trying to structure:
    This is the Table section of the conversion table I've had the most success with:
    And this is the result:
    As you can see, the thead and tbody elements are not wrapped in tgroup and the structure is invalid. I've been playing around with the conversion table for several days now and can't seem to figure out how to fix this. If anyone has any suggestions, they would be greatly appreciated!
    Thanks,
    D'Arcy

    D'Arcy,
      Here are a few observations that might help:
    1) The structure you use within FM (which is what the conversion table helps create) need not be exactly the same as the structure you use in XML.
    2) Do not confuse the element named table with a FrameMaker table.
    3) As Michael has pointed out, the CALS table model used by DITA is a five-level structure:
    table
      tgroup
        tbody
           row
             entry
    while the FrameMaker table model has 4 levels:
    table
      tbody
        row
          cell
    4) FrameMaker can map between the 5-level structure and the 4-level structure in one of two ways, controlled by the element definitions (usually specified in an EDD), the read/write rules, and the DTD:
    a) The element named table can be a container while the element named tgroup is the actual table. This model has a closer correspondence between the FrameMaker and the DITA table structure. Michael has pointed out one of its biggest weaknesses. If a FrameMaker table (regardless of its element name) has a title, then FrameMaker repeats the title on each page of a multipage table. However, if the element named table is a container and the actual table is an element named tgroup, the title is a container preceding the table and this automatic repetition does not happen.
    b) If you are sure that your tables will always consist of a single tgroup, you don't have to use the tgroup element within FrameMaker. Make the element named table a FrameMaker table and include the read/write rule:
    element "tgroup" unwrap;
    FrameMaker will recreate the tgroup element whenever you save a document as XML. This model allows you to make the table title a FrameMaker table title. However, you cannot use the element named title as both a FrameMaker table title and a container. Typically, therefore, people use the element named title as a container that is the title of things like chapters and sections and use an element named something like tabletitle for the title of tables.
    Read/write rules like:
    element "title" is fm element:
    element "title" is fm table title element "tabletitle"'
    let you save your documents as XML with both title and tabletitle represented as <title> in XML. You'll need to use a custom client or XSLT to map the XML element named title to either title or tabletitle depending on whether the XML element occurs within a table or not.
    5) Your conversion table had one row with T: in column 1 and another with T:Format A in column 1; one mapped to table and the other to tgroup. The row with T:Format A would apply to all tables with the specified format, the other row would apply to all tables with other formats. These conversion table rows do not define a relationship between table and tgroup. Map T: to table if you want the model in 4b) above and to tgroup if you want the model in 4a). In the latter case, use an additional row with:
    title?, tgroup
    in column 1 and table in column 2.
         --Lynne

  • Question about Sender and Receiver Structure for JDBC

    Dear All,
    I want to know why there is a fixed format for the sender and receiver structures for JDBC. Why can't we use whatever structure we want? Please explain in detail.
    Thanks

    Good question:
    We have to create our data structure based on the existing database table structure. While reading or writing, the JDBC adapter converts our data structure into SQL statements that match the table structure.
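    As a rough illustration (the table and field names below are made up for this example), a receiver message whose structure specifies the action INSERT for a table CUSTOMERS is effectively turned by the adapter into a statement like:
    -- What the adapter effectively executes for an INSERT action
    -- (CUSTOMERS, ID and NAME are hypothetical names)
    INSERT INTO customers (id, name) VALUES ('1001', 'ACME');
    That is why the message structure has to mirror the table: the adapter needs the table name, the action, and the field/value pairs to build the statement.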

  • SSMA (Access - SQL 2012) newbie question about new tables

    I have successfully run SSMA and moved the back end tables of an Access project to SQL Express 2012.
    Some questions:
    - if I add a new table to the SQL backend, how do I get it to appear in the Access project?
    - if I want to move the whole thing to another computer, how do I go about it?
    - do I always have to have the SSMA app installed?
    - I can't see an ODBC datasource on the machine
    Thanks for some help
    CarolChi

    Thanks, I was not clear in my questions:
    1. Given that from now on the tables should all be in SQL, I think it would be better to create new tables there rather than create them in Access and keep moving them to SQL with SSMA. So if I create a new table in SQL, how can I see it in Access? Or can I easily create a linked table in Access?
    The tables created by SSMA have this property:
    If I create a new table in SQL and link it I get a more classic ODBC connection:
    2. I am happy with SQL backup/restore, attach/detach. But how does Access know where the SQL Database has gone?
    3. I thought the migration was a one-time task, but from what you say it is an ongoing "linking" app. So would I be better off exporting my tables and re-linking them to get something that can be moved to another network or computer?
    4. The SSMA looks very much like an ODBC connection.
    This is a new project so I will have to make lots of new tables and also move the whole thing around to different computers, on different networks.
    CarolChi

  • Question about Unique Table Columns - How to Handle Alerts

    I have a table with a unique column, and I want to show an error alert when a user tries to insert a record whose value in that column already exists in another record.
    I tried just relying on the table constraint in the DB, but then I just get a "cannot insert record" message at form run time.
    I put a query in the PRE-INSERT trigger to check whether that value already exists in another record and display the alert there, but my concern is that a record could be inserted by another user after my check query runs. In that case, I will still just get the "cannot insert record" message.
    What is a good way to give an error alert AND ensure that another record is not being inserted in the meantime?
    Thanks,
    Kurz

    There is NO way you can absolutely trap all situations by checking in the when-validate triggers.
    Your user could even try to insert the same duplicate value in two rows at the same time.
    Or another user could commit at the same time as the first.
    The ONLY way to handle those errors is to trap for the error in the On-Error trigger, which fires
    when Forms returns the FRM-40508 or FRM-40509 code. Here is the standard code we have settled on here:
    -- On-Error form-level trigger                                       --*
    DECLARE
      Err_Code NUMBER(5)     := ERROR_CODE;
      Msg      VARCHAR2(150)
              := SUBSTR('   '||ERROR_TYPE||'-'||TO_CHAR(ERR_Code)||': '
                                         ||ERROR_TEXT,1,150);
      DBMS_TXT VARCHAR2(200);
    BEGIN
      -- 40508=Unable to insert rec.  40509=Unable to update rec.
      IF Err_Code IN(40508,40509) THEN
        DBMS_TXT := SUBSTR(DBMS_ERROR_TEXT,1,200);
        IF  DBMS_TXT like '%unique constraint%'
        AND DBMS_TXT like '%I_PK_ASGTABLE%'  THEN
          U73_ERR_TIMER('ASG.RESIDENCE_CODE');
          Message('   5011  DUPLICATE RESIDENCE CODE FOUND');
          Raise Form_trigger_failure;
        END IF;
      END IF;
      -- This handles all other on-error messages:
      Message(Msg);
      Raise Form_trigger_failure;
    END;
    U73_ERR_TIMER just assures that focus gets placed on the correct item and sets its color red.

  • Question about multiple tables and formulas in numbers

    I was wondering about having a formula that affects one table based on data in another table. I have a spreadsheet that I created for work. Column A is a list of addresses, and columns B-ZZ are the "billing codes" that get applied to the service work done for that address. It's kind of difficult to look at this large spreadsheet on my iPhone and edit it on the go. Is there a way to have another column, possibly on sheet 2, where I type in these billing codes and they fill an x or some other character in the cell that corresponds with that billing code on that address line? To be as plain as possible, I want to enter data in one cell (on another sheet if possible) and have it "mark" another cell in response to the specific data in that entry cell. I'm thinking this way because, instead of scrolling back and forth on the sheet on my iPhone to mark the boxes, I could just navigate to sheet 2, enter the billing codes, and they would put an "X" in the cell that matches up with the code column and address row. Each address row would have a separate entry field in sheet 2, to make the formula easier. Thanks for any help.

    Tom,
    That's a lot of columns for just a table of marks that reflect an input. Sure, you could have each cell in that giant table check to see if some corresponding cell contains certain text, but that document is going to be slower than you might be able to tolerate.
    Jerry

  • Question about selection table analysis

    Dear All,
    My last post was closed by the moderator, so I would like to ask a different question. Please forgive me if my question is trivial.
    1. There is a selection table seltab with one field of type I, with the following contents (columns are SIGN, OPTION, LOW, HIGH):
    I     GT     3     0
    I     NE     8     0
    The literature, e.g. ABAP Objects, says that control statements like IF or CASE can be used with the logical expression IN.
    Now I am checking if the variable test fulfills the conditions defined by the selection table.
    DO 10 TIMES.
      IF test IN seltab.
        write test.
      ENDIF.
      test = test + 1.
    ENDDO.
    As a result I get a full list of values.
    1 2 3 4 5 6 7 8 9 10
    2. Now, if the table contains
    I     GT     3     0
    I get
    4 5 6 7 8 9 10, as expected
    3. If the table seltab contains
    I     NE     3     0
    the result also seems to be ok
    1 2 4 5 6 7 8 9 10
    The final question is: is it true that the logical expression IN cannot be used with IF for more complex data comparisons?
    Thanks,
    Grzegorz

    Well,  the contents
    I GT 3 0
    I NE 8 0
    I read as:
    the variable fulfills the condition if it is greater than 3 and not equal to 8
    test > 3 and
    test <> 8
    which should give, for the example above, the list
    4 5 6 7 9 10
    What is wrong with my analysis?
    Regards,
    Grzegorz

  • Question about GLPCA table

    Hi gurus,
    I need to find a relation between profit center and G/L account to do some reporting on charges by profit center.
    In table GLPCA I have the fields I need, but I have a question: are all the charges traced in this table (GLPCA)? In other words, are all the vendor invoices present in this table?
    Thank you.

    Only for those entries in which profit center is assigned.
    E.g., say expenses or revenue for which a profit center is assigned directly: only the expense line item (i.e. Dr) or revenue line item (i.e. Cr) will be there in GLPCA.
    But in BSEG you can find both entries (Dr & Cr) for the same document, which you can't find in GLPCA.
    Hope you understood.
    else revert
    Reg
    Sujai
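    If you also have direct SQL access to the database, a hedged sketch of the kind of aggregation this reporting needs is shown below. The field names RACCT (account number), RPRCTR (profit center) and HSL (amount in company code currency) are what I would expect in GLPCA, but verify them in SE11 before relying on this:
    -- Hedged sketch: verify the GLPCA field names (RPRCTR, RACCT, HSL) in SE11 first
    SELECT rprctr, racct, SUM(hsl) AS amount
      FROM glpca
     GROUP BY rprctr, racct;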

  • Question about temporary tables and imports.

    Hello everybody.
    Interesting one, this. We have an application that uses global temporary tables. We refresh live to test using an import of the application schema. I noticed when doing this yesterday that the temporary tables were being recreated in test as permanent rather than temporary.
    Is there a reason for this? Has anyone come across this before? Is there a way around it (apart from manual checking)?
    Many thanks
    Rup

    Could you specify how you found out that it is coming in as permanent?
    I believe exp/imp will export and import it as a temporary table. I have just done a simple test to check and it works fine:
    Here is my test log:
    SQL> connect scott
    Enter password:
    Connected.
    SQL> CREATE GLOBAL TEMPORARY TABLE test_global
      2          (startdate DATE,
      3           enddate DATE,
      4           class CHAR(20))
      5        ON COMMIT DELETE ROWS;
    Table created.
    SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
    TABLE_NAME                     T
    TEST_GLOBAL                    Y
    SQL> select table_name,temporary from user_tables;
    TABLE_NAME                     T
    DEPT                           N
    EMP                            N
    BONUS                          N
    SALGRADE                       N
    TEST_GLOBAL                    Y
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    C:\Documents\dbmsdirect\Testing\global>exp
    Export: Release 10.2.0.2.0 - Production on Wed Feb 20 12:18:45 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Username: scott
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    Enter array fetch buffer size: 4096 >
    Export file: EXPDAT.DMP > test_global
    (2)U(sers), or (3)T(ables): (2)U > t
    Export table data (yes/no): yes >
    Compress extents (yes/no): yes >
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    Table(T) or Partition(T:P) to be exported: (RETURN to quit) > test_global
    . . exporting table                    TEST_GLOBAL
    Table(T) or Partition(T:P) to be exported: (RETURN to quit) >
    Export terminated successfully without warnings.
    C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
    SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:19:50 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    SQL> connect scott
    Enter password:
    Connected.
    SQL> drop table test_global purge;
    Table dropped.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    C:\Documents\dbmsdirect\Testing\global>imp
    Import: Release 10.2.0.2.0 - Production on Wed Feb 20 12:20:19 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Username: scott
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
    Import file: EXPDAT.DMP > test_global.DMP
    Enter insert buffer size (minimum is 8192) 30720>
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    List contents of import file only (yes/no): no >
    Ignore create error due to object existence (yes/no): no >
    Import grants (yes/no): yes >
    Import table data (yes/no): yes >
    Import entire export file (yes/no): no > yes
    . importing SCOTT's objects into SCOTT
    . importing SCOTT's objects into SCOTT
    Import terminated successfully without warnings.
    C:\Documents\dbmsdirect\Testing\global>sqlplus /nolog
    SQL*Plus: Release 10.2.0.2.0 - Production on Wed Feb 20 12:22:44 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    SQL> connect scott
    Enter password:
    Connected.
    SQL>
    SQL> select table_name,temporary from user_tables;
    TABLE_NAME                     T
    DEPT                           N
    EMP                            N
    BONUS                          N
    SALGRADE                       N
    TEST_GLOBAL                    Y
    SQL> insert into TEST_GLOBAL values (sysdate,sysdate,'test1');
    1 row created.
    SQL> select * from TEST_GLOBAL;
    STARTDATE ENDDATE   CLASS
    20-FEB-08 20-FEB-08 test1
    SQL> commit;
    Commit complete.
    SQL> select * from TEST_GLOBAL;
    no rows selected
    SQL>  select table_name,temporary from user_tables where table_name='TEST_GLOBAL';
    TABLE_NAME                     T
    TEST_GLOBAL                    Y
    SQL>

  • Question about dual table

    Hi,
    Can anyone clarify this for me?
    Can I insert rows into the dual table (as a programmer)? I am using Oracle 9i. If not, can a DBA do that? Suppose I have inserted 100 records into the dual table: will my applications be hampered in any way after inserting those records?
    Regards

    Or perhaps this:
    SQL> SELECT * FROM
      2  ( select rownum from dual connect by rownum <= 10 );
        ROWNUM
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
    10 rows selected.
    In 9i at least you most certainly can insert into sys.dual, which is a regular table in the SYS schema, although you deserve all you get if you do so:
    /Users/williamr: su - oracle
    Password:
    /Users/oracle: sqlplus '/ as sysdba'
    SQL*Plus: Release 9.2.0.1.0 - Developer's Release on Fri Mar 10 19:21:45 2006
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Developer's Release
    With the Partitioning and Oracle Data Mining options
    SQL> SELECT COUNT(*) FROM sys.dual;
      COUNT(*)
             1
    1 row selected.
    SQL> INSERT INTO dual VALUES ('Z');
    1 row created.
    SQL> SELECT COUNT(*) FROM sys.dual;
      COUNT(*)
             2
    1 row selected.
    SQL> set serverout on
    SQL> EXEC DBMS_OUTPUT.PUT_LINE(user);
    BEGIN DBMS_OUTPUT.PUT_LINE(user); END;
    ERROR at line 1:
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "SYS.STANDARD", line 264
    ORA-06512: at line 1
    SQL> roll
    Rollback complete.
    SQL> EXEC DBMS_OUTPUT.PUT_LINE(user);
    SYS
    PL/SQL procedure successfully completed.
    SQL>
    So it is technically possible, but monumentally inadvisable to mess about with sys.dual.
    For a rather surreal suggestion related to this scenario, see:
    oracle-wtf.blogspot.com/2006/03/create-your-own-dual-table.html

  • Question about af:table

    Hi all, can you show me the way to turn off the "show all" option in af:table (when we use paging)? Thanks a lot!
    Harry.

    hi Harry
    I'm not sure what your question is, but the af:table component in the SRList.jspx page of the SRDemoSample application uses "paging".
      <af:table value="#{bindings.findServiceRequests1.collectionModel}"
             binding="#{backing_SRList.srTable}" var="row"
             rows="#{bindings.findServiceRequests1.rangeSize}"
             first="#{bindings.findServiceRequests1.rangeStart}"
             emptyText="#{bindings.findServiceRequests1.viewable ? 'No rows yet.' : 'Access Denied.'}"
             selectionState="#{bindings.findServiceRequests1.collectionModel.selectedRow}"
             selectionListener="#{bindings.findServiceRequests1.collectionModel.makeCurrent}">
      <!-- ... -->
      </af:table>
    (Tip: you can use "Your Control Panel" to make your name visible in forum posts.)
    success
    Jan Vervecken

  • Simple beginner question about default tables (LOGMNR, etc.)

    Hi all, I just installed SQL Developer and the Express server. I have worked with other SQL databases and with remote Oracle databases. But never been a DBA for an Oracle DB. I tried searching on LOGMNR and AQ$_, but couldn't find anything, hence this question.
    If there is some online documentation that covers these particular tables and my question, then a pointer to that would be appreciated.
    Anyway, I created a connection per the instructions and it seemed to work. But when I look at the tables list view, I see a whole slew of tables. I assume these are all housekeeping/metadata tables (like information_schema in MySQL). Is this correct?
    Also, short of manually creating a filter for each of the name patterns ("%$%", "HELP%", "LOGMNR%", "SQLPLUS%", etc.), is there an easy way to ignore these tables and just see the tables I want to create for testing purposes?
    And I am curious why these all show up by default. Could it be that I am expecting something because of experience with other databases and am actually doing things the "wrong" way? Or is this just because I am using the Express server? Or is there another (very good) reason for these tables to exist?
    Thanks,
    justme

    The complete Oracle documentation is at http://tahiti.oracle.com. There is a lot of it, but LogMiner and Advanced Queuing have their own manuals. You should also consider starting with the Concepts manual, followed by one of the "2-day" tutorials such as "2-Day DBA".
    What user are you connecting as? It sounds as if you are connecting as SYS or SYSTEM if you can see the dictionary tables.
    You should create a separate user to connect as.
    For example:
    CREATE USER myuser IDENTIFIED BY mypassword;
    GRANT CONNECT, RESOURCE TO myuser;
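    A quick check from SQL*Plus with the user created above (myuser/mypassword are of course just example credentials) shows that the new connection lists only your own tables, without the LOGMNR/AQ$ clutter:
    CONNECT myuser/mypassword
    SELECT table_name FROM user_tables;  -- empty until you create your own test tables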

  • A question about sorting tables.

    Hello All!
    Just a quick question: performance-wise, which is better, declaring an itab with 'sorted table of' or using 'sort itab' later?
    Thanks in advance!
    Moderator message: please try yourself and search for available information and previous, similar discussions.
    Edited by: Thomas Zloch on Feb 23, 2012

    Hi Kevin,
    follow, for example, more or less the code given in this thread: Functionality to dynamically sort tableview columns, and implement a compare method corresponding to your needs (for example using the compareToIgnoreCase method of String).
    Hope it helps
    Detlev
