SAPR3.FILESTAT # Table has no index

Hi,
I am getting a "MISSING_INDEX" error for Table: SAPR3.FILESTAT # Table has no index in the DB16 database check report.
How can I fix it? Please advise.
Thanks!

Hello Petrie,
as far as I know:
the "activate and adjust database" function is used for solving inconsistencies between the DDIC and the database. If you activate and adjust a table, it is dropped on the database system and then created fresh from the DDIC definition, so no inconsistencies remain. You can choose whether or not to keep the data in the table. If you choose to keep the data, a second table with the prefix QCM (QCM<originaltablename>, I think) is created, where the original content is stored temporarily. After the original table is dropped and recreated, the content is copied back from the QCM<originaltablename> table into the new one.
Hope that helps,
Michael
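Before adjusting anything, it can also help to confirm on the database side which indexes (if any) the table really has, and then compare that with what the DDIC defines for it. A minimal check, assuming you can query the Oracle dictionary directly:

select owner, index_name, uniqueness, status
  from dba_indexes
 where table_owner = 'SAPR3'
   and table_name  = 'FILESTAT';

If the DDIC defines an index that this query does not return, SE14 can recreate it on the database; if neither side defines one, the DB16 message is simply telling you the table genuinely has no index.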

Similar Messages

  • Table: USR02BKP Table has no index

    Hello,
    During the weekly checkdb jobs scheduled from transaction DB13 we are getting the below-mentioned warning/error again and again:
    MISSING_INDEX :: Table: SAPDEV.USR02BKP # Table has no index
    But in SE11 or SM30 I am unable to find any table with the name USR02BKP.
    However, this table is present in the database in the PSAPDEVUSR tablespace.
    How can I resolve this issue?
    Regards,
    RR

    Hello,
    my crystal ball is showing me this:
    Your Oracle database administrator wanted to keep a backup of the current state of table USR02 and therefore created table USR02BKP as a copy of it. He did this with Oracle tools, so SAP does not know about this new table.
    Ask him whether this table is still needed.
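    If you want some evidence before asking, the dictionary will show when the copy was created and that it is an ordinary table unknown to the SAP DDIC. A small check, assuming an Oracle database:
        select object_type, created, last_ddl_time
          from dba_objects
         where owner       = 'SAPDEV'
           and object_name = 'USR02BKP';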

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. Central to our data monitoring system we have an Oracle database running on Oracle Standard Edition One licensing (please don't laugh... I understand it is comical).
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) about 90% of the time (though sometimes a very old timestamp arrives and catches up to current). The other 10% of the time the data can be out of order with respect to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now, what we are observing is that for inserts into this table:
    - Inserts are much slower with a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me: as I understand it, Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes in the buffer cache permanently. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10 GB of extra RAM per quarter to six months; we're at about 50 GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of the data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" purge at a partition level, greatly reducing table and index fragmentation (a rough sketch of what we have in mind is at the end of this post).
    Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with Partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back the 10x performance difference we see between a fresh, empty system and our current production system. I also want to limit Oracle's 10 GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level product has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
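    For illustration, one possible shape for the partitioning idea mentioned in this post - purely a sketch: column names, types, partition boundaries and the subpartition count are made up, and it requires Enterprise Edition plus the Partitioning option:
        create table raw_data (
          source_id    number      not null,
          ts           timestamp   not null,
          create_time  timestamp   not null,
          note         varchar2(50),
          val          number
        )
        partition by range (ts)
        subpartition by hash (source_id) subpartitions 16
        (
          partition p_2014_01 values less than (timestamp '2014-02-01 00:00:00'),
          partition p_2014_02 values less than (timestamp '2014-03-01 00:00:00'),
          partition p_max     values less than (maxvalue)
        );
        -- both indexes can be local: the unique key (source_id, ts) contains both partitioning columns
        create unique index raw_data_uk on raw_data (source_id, ts) local;
        create        index raw_data_ct on raw_data (create_time) local;
        -- the "older than 3 months" purge then becomes a partition operation instead of a huge delete:
        -- alter table raw_data drop partition p_2014_01 update global indexes;
    Whether this wins back the 10x seen on an empty system would still have to be tested with the production data volumes.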

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ suggested by one of the contributors to this thread. A direct path load will not re-use any of the free space created by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index (create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; it means you will end up with what we call a right-hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
               ALTER INDEX indexname COALESCE;
    This coalesce should be investigated as something to do on a regular basis (maybe after each 80% delete). You seem to have several sourceid values for one timestamp. If the answer is yes, you should think about compressing this index:
        create index indexname on tablename (sourceid, timestamp) compress;
    or
        alter index indexname rebuild compress;
    You will do it only once. Your index will have a smaller size and may be more efficient than it actually is. The index compression will add some extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
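    If you want to measure how much the deletes have hollowed out an index before and after such a coalesce, one classic (if intrusive) check is VALIDATE STRUCTURE - treat this as a sketch for a quiet period, since it blocks DML on the table while it runs, and the index name is a placeholder:
        analyze index indexname validate structure;
        select height, lf_rows, del_lf_rows, pct_used
          from index_stats;    -- one row, describing the index just validated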

  • ROWID of the parent table in the child table's FK index?

    This is a question about heap table structure.
    The parent table has a PK.
    The child table has an FK.
    If the child table's FK also stored the ROWID of the parent row, I think that would give the best performance in terms of speed.
    What do you think?
    Edited by: user5647868 on 2011. 9. 29, 10:46 PM

    Of course, if the parent table is rebuilt, the stored ROWIDs become invalid and the FK index on the child table would have to be rebuilt as well.
    In reality, however, the parent table may never be rebuilt at all.
    But if you do have to rebuild a lot, you need downtime.
    In some cases an ALTER TABLE option may also be required.
    Do you rebuild the child table's FK index?
    Do you disable the FK that keeps the parent's ROWID in its index?
    And so on.

  • How to find table which has no index?

    Hi,
    How to find a table which has no index?
    Please provide the query to find the above.
    Thanks
    Jafar

    select owner,
           table_name
      from dba_tables dt
    where not exists(select 'x'
                        from dba_indexes di
                       where dt.owner=di.table_owner
                         and dt.table_name = di.table_name)
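    A possible refinement of the same query (the owner filter is only an example - adjust or drop it): skip the dictionary schemas and recycle-bin objects so that only application tables are listed.
    select dt.owner, dt.table_name
      from dba_tables dt
     where dt.owner not in ('SYS', 'SYSTEM')
       and dt.table_name not like 'BIN$%'
       and not exists (select 'x'
                         from dba_indexes di
                        where dt.owner = di.table_owner
                          and dt.table_name = di.table_name);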

  • Error: "The search cannot be executed because the table has pending changes that would be lost."

    Hello,
    I'm working on developing an OA page that displays the contents of an Oracle table and allows the user to update records in the table as needed.
    When I hit the Submit button to save the changes on the update page, control goes back to the main page (where all the table records are displayed). It shows the updated record with the new information. However, when I hit the "Go" button on the main page, I get the error "The search cannot be executed because the table has pending changes that would be lost." and the changes are not committed.
    ANy suggestions on where I should look will be greatly appreciated.
    Posting code for my controller
    =======================
              if ( pageContext.getParameter("saveRate") != null )
              personam.invokeMethod("saveRateToDatabase");
    Code from my AM
    =============
    public void saveRateToDatabase() {
        getOADBTransaction().commit();
        System.out.println("40--After commit has been executed");
    }
    Code from my VORowImpl
    ===================
    package cggv.oracle.apps.gl.server;
    import oracle.apps.fnd.framework.server.OAViewRowImpl;
    import oracle.jbo.domain.Date;
    import oracle.jbo.domain.Number;
    import oracle.jbo.server.AttributeDefImpl;
    // ---    File generated by Oracle ADF Business Components Design Time.
    // ---    Custom code may be added to this class.
    // ---    Warning: Do not modify method signatures of generated methods.
    public class xxCggGlRatesVORowImpl extends OAViewRowImpl {
        public static final int RATEID = 0;
        public static final int FROMCURRENCY = 1;
        public static final int TOCURRENCY = 2;
        public static final int FROMCONVERSIONDATE = 3;
        public static final int TOCONVERSIONDATE = 4;
        public static final int USERCONVERSIONTYPE = 5;
        public static final int CONVERSIONRATE = 6;
        public static final int MODEFLAG = 7;
        /**This is the default constructor (do not remove)*/
        public xxCggGlRatesVORowImpl() {
        }
        /**Gets the attribute value for the calculated attribute RateId*/
        public Number getRateId() {
            return (Number) getAttributeInternal(RATEID);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute RateId*/
        public void setRateId(Number value) {
            setAttributeInternal(RATEID, value);
            //populateAttribute(RATEID, value);
        }
        /**Gets the attribute value for the calculated attribute FromCurrency*/
        public String getFromCurrency() {
            return (String) getAttributeInternal(FROMCURRENCY);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute FromCurrency*/
        public void setFromCurrency(String value) {
            setAttributeInternal(FROMCURRENCY, value);
        }
        /**Gets the attribute value for the calculated attribute ToCurrency*/
        public String getToCurrency() {
            return (String) getAttributeInternal(TOCURRENCY);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute ToCurrency*/
        public void setToCurrency(String value) {
            setAttributeInternal(TOCURRENCY, value);
        }
        /**Gets the attribute value for the calculated attribute FromConversionDate*/
        public Date getFromConversionDate() {
            return (Date) getAttributeInternal(FROMCONVERSIONDATE);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute FromConversionDate*/
        public void setFromConversionDate(Date value) {
            setAttributeInternal(FROMCONVERSIONDATE, value);
        }
        /**Gets the attribute value for the calculated attribute ToConversionDate*/
        public Date getToConversionDate() {
            return (Date) getAttributeInternal(TOCONVERSIONDATE);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute ToConversionDate*/
        public void setToConversionDate(Date value) {
            setAttributeInternal(TOCONVERSIONDATE, value);
        }
        /**Gets the attribute value for the calculated attribute UserConversionType*/
        public String getUserConversionType() {
            return (String) getAttributeInternal(USERCONVERSIONTYPE);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute UserConversionType*/
        public void setUserConversionType(String value) {
            setAttributeInternal(USERCONVERSIONTYPE, value);
        }
        /**Gets the attribute value for the calculated attribute ConversionRate*/
        public Number getConversionRate() {
            return (Number) getAttributeInternal(CONVERSIONRATE);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute ConversionRate*/
        public void setConversionRate(Number value) {
            setAttributeInternal(CONVERSIONRATE, value);
        }
        /**Gets the attribute value for the calculated attribute ModeFlag*/
        public String getModeFlag() {
            return (String) getAttributeInternal(MODEFLAG);
        }
        /**Sets <code>value</code> as the attribute value for the calculated attribute ModeFlag*/
        public void setModeFlag(String value) {
            setAttributeInternal(MODEFLAG, value);
        }
        /**getAttrInvokeAccessor: generated method. Do not modify.*/
        protected Object getAttrInvokeAccessor(int index,
                                               AttributeDefImpl attrDef) throws Exception {
            switch (index) {
            case RATEID:
                return getRateId();
            case FROMCURRENCY:
                return getFromCurrency();
            case TOCURRENCY:
                return getToCurrency();
            case FROMCONVERSIONDATE:
                return getFromConversionDate();
            case TOCONVERSIONDATE:
                return getToConversionDate();
            case USERCONVERSIONTYPE:
                return getUserConversionType();
            case CONVERSIONRATE:
                return getConversionRate();
            case MODEFLAG:
                return getModeFlag();
            default:
                return super.getAttrInvokeAccessor(index, attrDef);
            }
        }
        /**setAttrInvokeAccessor: generated method. Do not modify.*/
        protected void setAttrInvokeAccessor(int index, Object value,
                                             AttributeDefImpl attrDef) throws Exception {
            switch (index) {
            case RATEID:
                setRateId((Number)value);
                return;
            case FROMCURRENCY:
                setFromCurrency((String)value);
                return;
            case TOCURRENCY:
                setToCurrency((String)value);
                return;
            case FROMCONVERSIONDATE:
                setFromConversionDate((Date)value);
                return;
            case TOCONVERSIONDATE:
                setToConversionDate((Date)value);
                return;
            case USERCONVERSIONTYPE:
                setUserConversionType((String)value);
                return;
            case CONVERSIONRATE:
                setConversionRate((Number)value);
                return;
            case MODEFLAG:
                setModeFlag((String)value);
                return;
            default:
                super.setAttrInvokeAccessor(index, value, attrDef);
                return;
            }
        }
        /**Gets xxCggGlRatesEO entity object.*/
        public xxCggGlRatesEOImpl getxxCggGlRatesEO() {
            return (xxCggGlRatesEOImpl)getEntity(0);
        }
    }

    Hi,
    Check these links:
    Oracle Apps: Search cannot be executed because the table has pending changes that would be lost
    Re: Getting error in search page search cannot be executed
    http://jneelmani.blogspot.in/2009/11/oaf-search-cannot-be-executed-because.html
    --Sushant

  • Table has 85 GB data space, zero rows

    This table has only one column. I ran a transaction that inserted more than a billion rows into this table but then rolled it back before completion.
    This table currently has zero rows but a select statement takes about two minutes to complete, and waits on I/O.
    The interesting thing here is that the usual explanation for this is ghost records left behind by deletes,
    but there are none: m_ghostRecCnt is zero for all data pages.
    This is obviously not a situation in which the pages were placed in a deferred-drop queue either, or else the page count would be decreasing over time, and it is not.
    This is the output of DBCC PAGE for one of the pages:
    PAGE: (3:88910)
    BUFFER:
    BUF @0x0000000A713AD740
    bpage = 0x0000000601542000          bhash = 0x0000000000000000          bpageno = (3:88910)
    bdbid = 35                          breferences = 0                     bcputicks = 0
    bsampleCount = 0                    bUse1 = 61857                       bstat = 0x9
    blog = 0x15ab215a                   bnext = 0x0000000000000000
    PAGE HEADER:
    Page @0x0000000601542000
    m_pageId = (3:88910)                m_headerVersion = 1                 m_type = 1
    m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8208
    m_objId (AllocUnitId.idObj) = 99    m_indexId (AllocUnitId.idInd) = 256
    Metadata: AllocUnitId = 72057594044416000
    Metadata: PartitionId = 72057594039697408                               Metadata: IndexId = 0
    Metadata: ObjectId = 645577338      m_prevPage = (0:0)                  m_nextPage = (0:0)
    pminlen = 4                         m_slotCnt = 0                       m_freeCnt = 8096
    m_freeData = 7981                   m_reservedCnt = 0                   m_lsn = (1010:2418271:29)
    m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
    m_tornBits = -249660773             DB Frag ID = 1
    Allocation Status
    GAM (3:2) = ALLOCATED               SGAM (3:3) = NOT ALLOCATED
    PFS (3:80880) = 0x40 ALLOCATED   0_PCT_FULL                             DIFF (3:6) = CHANGED
    ML (3:7) = NOT MIN_LOGGED
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    Querying the allocation units system catalog shows that all pages are counted as "used".
    I saw some articles, such as the ones listed below, which address similar situations where pages aren't deleted from a HEAP after a delete operation. It turns out pages are only deallocated from a heap when a table-level lock is issued.
    http://blog.idera.com/sql-server/howbigisanemptytableinsqlserver/
    http://www.sqlservercentral.com/Forums/Topic1182140-392-1.aspx
    https://support.microsoft.com/kb/913399/en-us
    To rule this out, I inserted another 100k rows which caused no change on page counts, and then deleted all entries with a TABLOCK query hint. Only one page was deleted.
    So, it appears we have a problem with pages that were created during a transaction that was rolled back, huh? I guess rolling back a transaction doesn't take certain physical factors into consideration.
    I've looked everywhere but couldn't find a satisfactory answer to this. Does anybody have any ideas?
    Just because there are clouds in the sky it doesn't mean it isn't blue. Some people would disagree.

    And this is the reason why you should not have heaps (unless your name is Thomas Kejser :-).
    Try TRUNCATE TABLE. Or ALTER TABLE tbl REBUILD.
    Erland Sommarskog, SQL Server MVP, [email protected]
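    For reference, a quick way to see how many pages the heap still owns for how few rows, plus the two fixes mentioned above (the table name is hypothetical; a sketch only):
    SELECT row_count, reserved_page_count, used_page_count
    FROM   sys.dm_db_partition_stats
    WHERE  object_id = OBJECT_ID('dbo.BigHeap')
    AND    index_id  = 0;              -- 0 = the heap itself
    ALTER TABLE dbo.BigHeap REBUILD;   -- reclaims the empty pages in place
    -- TRUNCATE TABLE dbo.BigHeap;     -- alternative, if the content is disposable anyway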
    I rebuilt the heap a while ago, and then all pages were gone. I don't know if TRUNCATE would have the same result; I would have to repeat the test to find out. There are many ways to fix the problem itself, including creating a clustered index as Satish suggested.
    I'd like to focus on the interesting fact I wanted to bring to the table for discussion: you open a transaction, insert a huge load of records and then roll back. Why would the engine leave the pages created during the transaction behind? More specifically, why would they not be marked as "free pages" if they are all empty? Why are they not marked as free so scans would skip them and not generate a lot of I/O throughput and long response times just to query a zero-row table? Isn't this like a design flaw or a bug?
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this is an undocumented behavior and should not be relied upon.

  • How to load a color table in an indexed-mode file?

    How do I load a color table in an indexed-mode file?
    Actually, if I open an indexed file and want to edit its color table by loading another color table from the desktop (or any other location), is there any way to do this through JavaScript scripting?

    continuing...
    I wrote a script to read a color table from a GIF file and save it to an ACT color table file. I think it might be useful for someone.
    It goes a little deeper than the code I posted before, as it now identifies the table size. That is important because it tells how much data to read.
    Some GIF files, even if they are saved with a reduced palette (fewer than 256 colors), still carry all the bytes for the full color palette inside the file (sometimes filled with 0x000000). But some GIF files, for example those exported from PS via "Save for Web", have the color table reduced to optimize file size.
    The script stores all colors in an array, allowing some kind of sorting or processing at will.
    It uses xlib/Stream.js from Xbytor's xtools.
    Here is the code:
    // reads the color table from a GIF image
    // saves to an ACT color table file format
    #include "xtools/xlib/Stream.js"

    // read the 0xA byte in hex format from the gif file
    // this byte has the color table size info at its 3 last bits
    Stream.readByteHex = function(s) {
      function hexDigit(d) {
        if (d < 10) return d.toString();
        d -= 10;
        return String.fromCharCode('A'.charCodeAt(0) + d);
      }
      var str = '';
      s = s.toString();
      var ch = s.charCodeAt(0xA);
      str += hexDigit(ch >> 4) + hexDigit(ch & 0xF);
      return str;
    };

    // hex to bin conversion
    Math.base = function(n, to, from) {
      return parseInt(n, from || 10).toString(to);
    };

    // load test image
    var img = Stream.readFromFile("~/file.gif");
    hex = Stream.readByteHex(img);      // hex string of the 0xA byte
    bin = Math.base(hex, 2, 16);        // binary string of the 0xA byte
    tableSize = bin.slice(5, 8);        // get the 3 bit info that defines size of the ct

    switch (tableSize) {
    case '000': // 6 bytes table
      tablSize = 2;
      break;
    case '001': // 12 bytes table
      tablSize = 4;
      break;
    case '010': // 24 bytes table
      tablSize = 8;
      break;
    case '011': // 48 bytes table
      tablSize = 16;
      break;
    case '100': // 96 bytes table
      tablSize = 32;
      break;
    case '101': // 192 bytes table
      tablSize = 64;
      break;
    case '110': // 384 bytes table
      tablSize = 128;
      break;
    case '111': // 768 bytes table
      tablSize = 256;
      break;
    }

    //========================================================
    // read a color (triplet) from the color lookup table
    // of a GIF image file | return 3 Bytes Hex String
    Stream.getTbColor = function(s, color) {
      function hexDigit(d) {
        if (d < 10) return d.toString();
        d -= 10;
        return String.fromCharCode('A'.charCodeAt(0) + d);
      }
      var tbStart = 0xD;  // Start of the color table byte location
      var colStrSz = 3;   // Constant -> RGB
      var str = '';
      s = s.toString();
      for (var i = tbStart + (colStrSz * color); i < tbStart + (colStrSz * color) + colStrSz; i++) {
        var ch = s.charCodeAt(i);
        str += hexDigit(ch >> 4) + hexDigit(ch & 0xF);
      }
      return str;
    };

    var colorHex = [];
    importColors = function () {
      for (i = 0; i < tablSize; i++) { // number of colors
        colorHex[i] = Stream.getTbColor(img, i);
      }
    };
    importColors();

    // remove redundant colors
    // important to determine exact color number
    function unique(arrayName) {
      var newArray = new Array();
      label: for (var i = 0; i < arrayName.length; i++) {
        for (var j = 0; j < newArray.length; j++) {
          if (newArray[j] == arrayName[i])
            continue label;
        }
        newArray[newArray.length] = arrayName[i];
      }
      return newArray;
    }
    colorHex = unique(colorHex);

    // we have now an array with all colors from the table in hex format
    // it can be sorted if you want to have some ordering to the exported file
    // in case, add code here.
    var colorStr = colorHex.join('');

    //=================================================================
    // Output to ACT => color triplets in hex format until 256 (Adr. dec 767)
    // if palette has less than 256 colors, is necessary to add the
    // number of colors info in decimal format to the the byte 768.
    ColorNum = colorStr.length / 6;
    lstclr = colorStr.slice(-6); // get last color
    if (ColorNum < 10) {
      ColorNum = '0' + ColorNum;
    }

    cConv = function (s) {
      var opt = '';
      var str = '';
      for (i = 0; i < s.length; i++) {
        for (j = 0; j < 2; j++) {
          var ch = s.charAt(i + j);
          str += ch;
          i++;
        }
        opt += String.fromCharCode(parseInt(str, 16));
        str = '';
      }
      return opt;
    };
    output = cConv(colorStr);

    // add ending file info for tables with less than 256 colors
    if (ColorNum < 256) {
      emptyColors = ((768 - (colorStr.length / 2)) / 3);
      lstclr = cConv(lstclr);
      for (i = 0; i < emptyColors; i++) {
        output += lstclr; // fill 256 colors
      }
    }
    output += String.fromCharCode(ColorNum) + '\xFF\xFF'; // add ending bytes
    Stream.writeToFile("~/file.act", output);
    PeterGun

  • How can I get the number of distinct records that each field of a DB table has?

    Hi everyone,
    I would like to know how to get the number of distinct records that each field of a DB table has. When tracing an SQL statement in ST12 or ST05, in the execution plan, if the statement made use of an index, I can click on the index name and see this kind of information (number of distinct values for each field of that index).
    Can I do something like this, but for all the fields of a table?
    What I have found until now is transaction ST10: I search for any kind of table statistics and then use the "Analyze table" function (which takes me to transaction DB05). There I can enter a table and up to 5 fields in order to get the information that I want.
    Is there any other way to do this?
    Regards,
    David Reza

    Hi David,
    You can export the same to Excel and sort it as per your requirement.
    Sorry, is that what you are looking for?
    Regards,
    Deepanshu Sharma
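    If figures from the optimizer statistics are good enough (an assumption: this is an Oracle database with reasonably fresh statistics - unlike DB05, the dictionary only reflects the last analyze, not live data), the per-column distinct counts are already stored and can simply be read:
    select column_name, num_distinct, num_nulls, last_analyzed
      from dba_tab_col_statistics
     where owner      = 'SAPR3'        -- adjust to the owning schema
       and table_name = 'YOURTABLE';   -- placeholder table name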

  • Table files and Index files 2GB on Windows 2003 Server SP2 32-bit

    I'm new to Oracle and I've run into a problem where my table files and index files are > 2 GB. I have an Oracle instance running version 10.2.0.3.0. I have a number of table files and index files with a current file size of 1.99 GB. My Oracle crashes about three times a week because of a write fault/failure. I've determined that the RDBMS is trying to write an index or table file > 2 GB. When this occurs it crashes.
    I've been reading the Oracle knowledge base and it suggests that there is a fix or release of Oracle 10g to resolve this problem. However, I've been unable to locate any such fix or release. Does it exist? How do I address this issue? I'm from the world of MS SQL and IBM DB2 and we don't have this issue there. I am running an NTFS file system. Could this issue be related to a Windows fix?
    Surely Oracle can handle databases > 2 GB.
    Thanks in advance for any help.
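    Before assuming a product limit, it may be worth checking how the datafiles are defined and whether autoextend is capped near 2 GB - a simple check, assuming access to the data dictionary:
    select tablespace_name, file_name,
           round(bytes / 1024 / 1024)    as mb,
           autoextensible,
           round(maxbytes / 1024 / 1024) as max_mb
      from dba_data_files
     order by bytes desc;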

    After reading your response it appears that my real problem has to do with checkpointing. I've included below a copy of the error message:
    Oracle process number: 8
    Windows thread id: 3768, image: ORACLE.EXE (CKPT)
    *** 2008-07-27 16:50:13.569
    *** SERVICE NAME:(SYS$BACKGROUND) 2008-07-27 16:50:13.569
    *** SESSION ID:(219.1) 2008-07-27 16:50:13.569
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    error 221 detected in background process
    ORA-00221: Message 221 not found; No message file for product=RDBMS, facility=ORA
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    Can you tell me why I'm having issues with checkpointing and the control file?
    Can I rebuild the control file if it is corrupt?
    The problem has been going on since April 2008. I'm taking over the system.
    Thanks

  • Sort a table with no index or key defined?

    Greetz!
    I've got a script that builds a table, table A, by doing a SELECT INTO. Later, a SELECT with an ORDER BY is done on that table. However, the sort order is not persisting. Could this be due to there being no indexes or primary keys defined
    on table A? Can a sort be done on a table without an index of any kind?
    Thanks!
    Love them all...regardless. - Buddha

    To add to Erland's comment
    You cannot "look" directly at the contents of a table - you must use a query to generate a resultset.  And a resultset, just like a table, has no defined order unless it was generated using an order by clause.  Any order you might observe
    is simply an artifact of your data, the load on the db engine, the currently cached data, and many other factors.  If you see a consistent order, you might be tempted to assume such an order will always exist.  Don't be tempted.  Many others
    have been so tempted and have discovered the incorrectness of this assumption at a later date - often at an inconvenient time.
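    In other words (a trivial sketch with made-up names), the order has to be requested in the query itself, every time:
    SELECT col1, col2
    FROM   tableA
    ORDER BY col1;   -- only this clause defines the output order; insertion order is irrelevant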

  • How to avoid this full table scan (and index FFS) ?

    Hi All,
    Oracle 11.2 on Linux.
    See this query and its plan below
    SQL> DELETE
      2  FROM  TABLEA APE
      3        WHERE   NOT EXISTS
      4                   (SELECT   1
      5                      FROM   TABLEB AP
      6                     WHERE       AP.col1 = APE.col1
      7                             AND AP.col2 = APE.col2
      8                             AND AP.col3 = APE.col3)
      9  AND ROWNUM < 51 ;
    50 rows deleted.
    Elapsed: 00:12:01.07
    Execution Plan
    Plan hash value: 1740911877
    | Id  | Operation               | Name                  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time  |
    |   0 | DELETE STATEMENT        |                       |    50 |  2650 |       |   573K  (1)| 01:54:40 |
    |   1 |  DELETE                 | TABLEA                |       |       |       |            |       |
    |*  2 |   COUNT STOPKEY         |                       |       |       |       |            |       |
    |*  3 |    HASH JOIN RIGHT ANTI |                       |    80M|  4059M|  1775M|   573K  (1)| 01:54:40 |
    |   4 |     INDEX FAST FULL SCAN| TABLEB_UK             |    47M|  1228M|       | 96480   (1)| 00:19:18 |
    |   5 |     TABLE ACCESS FULL   | TABLEA                |    80M|  1991M|       |   243K  (1)| 00:48:42 |
    ---------------------------------------------------------------------------------------------------------
    In both tables, TABLEA and TABLEB, there is an index with col1-col2-col3 as the leading columns (TABLEB has a few more columns in its index, but after these 3 columns).
    The requirement is: I want to delete the first 50 records in TABLEA which do not exist in TABLEB.
    I tried various hints, but Oracle always does a full scan on one of the tables and an index FFS on the other. In some cases Oracle did a full scan on both tables and then deleted 50 records. Stats are up to date. Doing a full scan on tables with 80 million and 47 million rows is a bit too much for deleting 50 rows.
    How I can make Oracle do
    1) Read TABLEA row-by-row
    2) for each row, check if it exists in TABLEB
    3) If not exists, then delete row from TABLEA, else continue
    4) Stop reading TABLEA after we have deleted 50 records
    Thanks in advance

    >
    >
    Oracle 11.2 on Linux.
    SQL> DELETE
    2  FROM  TABLEA APE
    3        WHERE   NOT EXISTS
    4                   (SELECT   1
    5                      FROM   TABLEB AP
    6                     WHERE       AP.col1 = APE.col1
    7                             AND AP.col2 = APE.col2
    8                             AND AP.col3 = APE.col3)
    9  AND ROWNUM < 51 ;
    50 rows deleted.
    Elapsed: 00:12:01.07
    Execution Plan
    Plan hash value: 1740911877
    | Id  | Operation               | Name                  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time  |
    |   0 | DELETE STATEMENT        |                       |    50 |  2650 |       |   573K  (1)| 01:54:40 |
    |   1 |  DELETE                 | TABLEA                |       |       |       |            |       |
    |*  2 |   COUNT STOPKEY         |                       |       |       |       |            |       |
    |*  3 |    HASH JOIN RIGHT ANTI |                       |    80M|  4059M|  1775M|   573K  (1)| 01:54:40 |
    |   4 |     INDEX FAST FULL SCAN| TABLEB_UK             |    47M|  1228M|       | 96480   (1)| 00:19:18 |
    |   5 |     TABLE ACCESS FULL   | TABLEA                |    80M|  1991M|       |   243K  (1)| 00:48:42 |
    ---------------------------------------------------------------------------------------------------------
    Requirement is, I want to delete first 50 records in TABLEA, which does not exist in TABLEB.
    Such requirements usually make me curious - what's special about a randomly selected 50 rows?
    Is this trying to delete the data in batches of 50 rows at a time?
    How I can make Oracle do
    1) Read TABLEA row-by-row
    2) for each row, check if it exists in TABLEB
    3) If not exists, then delete row from TABLEA, else continue
    4) Stop reading TABLEA after we have deleted 50 records
    It looks as if a 'no_unnest' hint in the subquery should do what you want. It should make Oracle run the query with a FILTER subquery. You could then choose to drive the delete through a tablescan of TABLEA or an index range scan of the index on TABLEA. Have you considered the effect of (and requirements relating to) nulls in the three columns of either table?
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>
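    A sketch of the hinted statement described above - it is the same DELETE with only the hint added; whether the optimizer then drives TABLEA by full scan or by the (col1, col2, col3) index still has to be confirmed from the plan, and the null-handling question still applies:
    DELETE FROM tablea ape
     WHERE NOT EXISTS (SELECT /*+ NO_UNNEST */ 1
                         FROM  tableb ap
                        WHERE  ap.col1 = ape.col1
                          AND  ap.col2 = ape.col2
                          AND  ap.col3 = ape.col3)
       AND ROWNUM < 51;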

  • Table access by index rowid taking more time

    Hi All
    I've a query like
    update tab1
       set col1 = (select col2
                     from tab2
                    where tab1.id = tab2.id);
    Table 1 has around 10,000 rows.
    Table 2 has around 1,700,000 rows and has a primary key on column id.
    This query is taking around 20 seconds to execute. I checked the xplan and most of the time is taken by table access by index rowid.
    Could you please suggest what the reason for this could be? (Can it be the clustering factor or something else?)
    I checked the stats for tab2; they are just three days old.
    Regards
    Ashwani

    >
    table 1 has arnd 10,000 rows
    table 2 has arnd 1,700,000 rows and has a primay key on column id.
    This query is taking around 20 secs to execute. I checked the xplan and most of time taken for table access by index rowid.
    Could you please suggest what can be the reason for this. (Can it be the clustering factor or something else)
    I checked the stats for the tab2, its just three days old.
    >
    If you checked the xplan, why haven't you posted it so we can look at it? Then we could see which table is being accessed by index rowid. Presumably it is table 2, but the plan would eliminate the need to make assumptions.
    The clustering factor could be a factor. You haven't told us how table1 is being accessed. All rows are being updated, so a full table scan is most likely, but again the plan would actually show the access.
    Did you query the dictionary to see what the clustering factor is? Post the results of that:
    SQL> select index_name, leaf_blocks, avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor, distinct_keys
    2 from dba_indexes
    3 where owner = 'schema'
    4 and index_name in ('index_b','index_a');
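    And to capture the actual plan rather than guessing, one option (a minimal sketch; it assumes the statement is re-run with rowsource statistics enabled) is:
    -- run the update once with the /*+ gather_plan_statistics */ hint (or statistics_level = all), then:
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));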

  • Show scrollpane only when table has x rows

    Is there any way to not make use of a scrollpane when a table currently has fewer than 5 rows?
    I have my table embedded in a JScrollPane, but I want to show the scrollpane only if my table has at least
    'x' number of rows.
    Thanks,

    Something like...
    DefaultTableModel tm = new DefaultTableModel(data, index);
    int size = tm.getRowCount();
    JTable t = new JTable(tm);
    JPanel p = new JPanel();
    if (size > 5) {
      JScrollPane jsp = new JScrollPane(t);
      p.add(jsp);
    } else {
      p.add(t);
    }
    // add p to JFrame

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) which has around 80 million records. Until now we used to do weekly archiving to keep the size down, but one of the architects at my firm has now suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
    I was just wondering whether that is true and, moreover, what kind of effect there would be on querying and insertion if the table size is 80 million and increasing every day?
    Any comments are welcome.

    What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes or if it is 5K.
    About performance impact, it depends on the queries that access this table: the more data queries need to process and/or to return as result set, the more this can have an impact on performance for these queries.
    You don't give enough input to give a good answer. Ideally you should give DDL statements to create this table and its indexes and SQL queries that are using these tables.
    In some cases using table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
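    To put a number on the "table size rather than row count" point (owner and table name below are placeholders):
    select segment_type, round(sum(bytes) / 1024 / 1024 / 1024, 1) as gb
      from dba_segments
     where owner        = 'APPOWNER'
       and segment_name = 'BIG_TABLE'
     group by segment_type;
    The index segments have their own names, so add them separately if you want the full footprint.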
