Table has 85 GB data space, zero rows

This table has only one column. I ran a transaction that inserted more than a billion rows into it, but rolled the transaction back before it completed.
The table currently has zero rows, yet a SELECT statement against it takes about two minutes to complete, waiting on I/O.
The interesting thing here is that the usual explanation for this, ghost records left behind by deletes, does not apply: there are none, and m_ghostRecCnt is zero on every data page.
This is clearly not a case of the pages sitting in a deferred-drop queue either, or the page count would be decreasing over time, and it is not.
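A quick way to see both the ghost record and page counts is sys.dm_db_index_physical_stats (a sketch; dbo.BigHeap stands in for the real table name):

-- Page and ghost-record counts for the heap (DETAILED mode is needed for ghost counts).
SELECT index_type_desc, alloc_unit_type_desc,
       page_count, record_count, ghost_record_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigHeap'),
                                      NULL, NULL, 'DETAILED');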
This is the output of DBCC PAGE for one of the pages:
PAGE: (3:88910)
BUFFER:
BUF @0x0000000A713AD740
bpage = 0x0000000601542000          bhash = 0x0000000000000000          bpageno = (3:88910)
bdbid = 35                          breferences = 0                     bcputicks = 0
bsampleCount = 0                    bUse1 = 61857                       bstat = 0x9
blog = 0x15ab215a                   bnext = 0x0000000000000000
PAGE HEADER:
Page @0x0000000601542000
m_pageId = (3:88910)                m_headerVersion = 1                 m_type = 1
m_typeFlagBits = 0x0                m_level = 0                         m_flagBits = 0x8208
m_objId (AllocUnitId.idObj) = 99    m_indexId (AllocUnitId.idInd) = 256
Metadata: AllocUnitId = 72057594044416000                                
Metadata: PartitionId = 72057594039697408                                Metadata: IndexId = 0
Metadata: ObjectId = 645577338      m_prevPage = (0:0)                  m_nextPage = (0:0)
pminlen = 4                         m_slotCnt = 0                       m_freeCnt = 8096
m_freeData = 7981                   m_reservedCnt = 0                   m_lsn = (1010:2418271:29)
m_xactReserved = 0                  m_xdesId = (0:0)                    m_ghostRecCnt = 0
m_tornBits = -249660773             DB Frag ID = 1
Allocation Status
GAM (3:2) = ALLOCATED               SGAM (3:3) = NOT ALLOCATED          
PFS (3:80880) = 0x40 ALLOCATED   0_PCT_FULL                              DIFF (3:6) = CHANGED
ML (3:7) = NOT MIN_LOGGED           
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Querying the allocation units system catalog shows that all pages are counted as "used".
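A query along these lines shows it (a sketch; dbo.BigHeap stands in for the real table name):

-- Used vs. total pages per allocation unit for the heap.
SELECT au.type_desc, au.total_pages, au.used_pages, au.data_pages
FROM   sys.allocation_units AS au
JOIN   sys.partitions       AS p ON au.container_id = p.hobt_id
WHERE  p.object_id = OBJECT_ID('dbo.BigHeap');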
I saw some articles, such as the ones listed below, which address similar situations where pages aren't deallocated from a HEAP after a delete operation. It turns out pages are only deallocated from a heap when the delete takes a table-level lock.
http://blog.idera.com/sql-server/howbigisanemptytableinsqlserver/
http://www.sqlservercentral.com/Forums/Topic1182140-392-1.aspx
https://support.microsoft.com/kb/913399/en-us
To rule this out, I inserted another 100k rows, which caused no change in the page count, and then deleted all entries with a TABLOCK query hint. Only one page was deallocated.
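Roughly what the test looked like (a sketch; the table, column name and row count are placeholders):

-- Add rows, then delete with a table-level lock so empty heap pages can be deallocated.
INSERT INTO dbo.BigHeap (col1)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM   sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

DELETE FROM dbo.BigHeap WITH (TABLOCK);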
So it appears we have a problem with pages that were allocated during a transaction that was later rolled back. I guess rolling back a transaction doesn't take certain physical factors into consideration.
I've looked everywhere but couldn't find a satisfactory answer to this. Does anybody have any ideas?
Just because there are clouds in the sky it doesn't mean it isn't blue. Some people would disagree.

And this is the reason why you should not have heaps (unless your name is Thomas Kejser :-).
Try TRUNCATE TABLE. Or ALTER TABLE tbl REBUILD.
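That is, either of these (a minimal sketch; dbo.BigHeap stands in for the real table name):

-- Empties the table and deallocates its pages outright...
TRUNCATE TABLE dbo.BigHeap;
-- ...or rebuilds the heap in place, which also releases the empty pages.
ALTER TABLE dbo.BigHeap REBUILD;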
Erland Sommarskog, SQL Server MVP, [email protected]
I rebuilt the HEAP a while ago, and then all the pages were gone. I don't know if TRUNCATE would have the same result; I would have to repeat the test to find out. There are many ways to fix the problem itself, including creating a clustered index as Satish suggested.
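(For reference, the clustered index route would be something like this; the table, column and index names are placeholders:)

-- Building a clustered index rewrites the data into new pages; drop it afterwards
-- if the table really needs to stay a heap.
CREATE CLUSTERED INDEX cix_BigHeap ON dbo.BigHeap (col1);
DROP INDEX cix_BigHeap ON dbo.BigHeap;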
I'd like to focus on the interesting fact I wanted to bring to the table for discussion: you open a transaction, insert a huge load of records and then roll back. Why would the engine leave the pages created during the transaction behind? More specifically, why are they not marked as free, so that scans would skip them instead of generating a lot of I/O and long response times just to query a zero-row table? Isn't this a design flaw, or a bug?
Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
is an undocumented behavior and should not be relied upon.

Similar Messages

  • DAC not populating FACT table for the GL module - W_GL_OTHER_F (zero rows)

    All - We did our FULL loads from Oracle Financials 11i into OBAW and got most of the dimension tables populated. HOWEVER, I am not seeing anything populate the fact table W_GL_OTHER_F (zero rows). Following is a list of all dims / facts I am focusing on for the GL module.
    W_BUSN_LOCATION_D     (8)
    W_COST_CENTER_D     (124)
    W_CUSTOMER_FIN_PROFL_D     (6,046)
    W_CUSTOMER_LOC_D     (4,611)
    W_CUSTOMER_LOC_D     (4,611)
    W_CUSTOMER_LOC_D     (4,611)
    W_DAY_D     (11,627)
    W_DAY_D     (11,627)
    W_DAY_D     (11,627)
    W_GL_ACCOUNT_D     (28,721)
    W_INT_ORG_D     (171)
    W_INVENTORY_PRODUCT_D     (395)
    W_LEDGER_D     (8)
    W_ORG_D     (3,364)
    W_PRODUCT_D     (21)
    W_PROFIT_CENTER_D     (23)
    W_SALES_PRODUCT_D     (395)
    W_STATUS_D     (7)
    W_SUPPLIER_D     (6,204)
    W_SUPPLIER_PRODUCT_D     (0)
    W_TREASURY_SYMBOL_D     (0)
    W_GL_OTHER_F <------------------------------------- NO FACT DATA AT ALL
    Question for the group: Are there any specific settings which might be preventing us from getting data loaded into our FACT tables? I was doing research and found the following on the internet:
    Map Oracle General Ledger account numbers to Group Account Numbers using the file file_group_acct_names_ora.csv. Is this something that is necessary?
    Any help / guidance / pointers are greatly appreciated.
    Regards,

    There are many things to configure before your first full load.
    For the configuration steps see Oracle Business Intelligence Applications Configuration Guide for Informatica PowerCenter Users
    - http://download.oracle.com/docs/cd/E13697_01/doc/bia.795/e13766.pdf (7951)
    - http://download.oracle.com/docs/cd/E14223_01/bia.796/e14216.pdf (796)
    3 Configuring Common Areas and Dimensions
    3.1 Source-Independent Configuration Steps
    Section 3.1.1, "How to Configure Initial Extract Date"
    Section 3.1.2, "How to Configure Global Currencies"
    Section 3.1.3, "How to Configure Exchange Rate Types"
    Section 3.1.4, "How to Configure Fiscal Calendars"
    3.2 Oracle EBS-Specific Configuration Steps
    Section 3.2.1, "Configuration Required Before a Full Load for Oracle EBS"
    Section 3.2.1.1, "Configuration of Product Hierarchy (Except for GL, HR Modules)"
    Section 3.2.1.2, "How to Assign UNSPSC Codes to Products"
    Section 3.2.1.3, "How to Configure the Master Inventory Organization in Product Dimension Extract for Oracle 11i Adapter (Except for GL & HR Modules)"
    Section 3.2.1.4, "How to Map Oracle GL Natural Accounts to Group Account Numbers"
    Section 3.2.1.5, "How to make corrections to the Group Account Number Configuration"
    Section 3.2.1.6, "About Configuring GL Account Hierarchies"
    Section 3.2.1.7, "How to set up the Geography Dimension for Oracle EBS"
    Section 3.2.2, "Configuration Steps for Controlling Your Data Set for Oracle EBS"
    Section 3.2.2.1, "How to Configure the Country Region and State Region Name"
    Section 3.2.2.2, "How to Configure the State Name"
    Section 3.2.2.3, "How to Configure the Country Name"
    Section 3.2.2.4, "How to Configure the Make-Buy Indicator"
    Section 3.2.2.5, "How to Configure Country Codes"
    5.2 Configuration Required Before a Full Load for Financial Analytics
    Section 5.2.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.2.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.2.2.1, "About Configuring Domain Values and CSV Worksheet Files for Oracle Financial Analytics"
    Section 5.2.2.2, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R12)"
    Section 5.2.2.3, "How to Configure Transaction Types for Oracle General Ledger and Profitability Analytics (for Oracle EBS R11i)"
    Section 5.2.2.4, "How to Specify the Ledger or Set of Books for which GL Data is Extracted"
    5.3 Configuration Steps for Controlling Your Data Set
    Section 5.3.1, "Configuration Steps for Financial Analytics for All Source Systems"
    Section 5.3.2, "Configuration Steps for Financial Analytics for Oracle EBS"
    Section 5.3.2.1, "How GL Balances Are Populated in Oracle EBS"
    Section 5.3.2.2, "How to Configure Oracle Profitability Analytics Transaction Extracts"
    Section 5.3.2.3, "How to Configure Cost Of Goods Extract (for Oracle EBS 11i)"
    Section 5.3.2.4, "How to Configure AP Balance ID for Oracle Payables Analytics"
    Section 5.3.2.5, "How to Configure AR Balance ID for Oracle Receivables Analytics and Oracle General Ledger and Profitability Analytics"
    Section 5.3.2.6, "How to Configure the AR Adjustments Extract for Oracle Receivables Analytics"
    Section 5.3.2.7, "How to Configure the AR Schedules Extract"
    Section 5.3.2.8, "How to Configure the AR Cash Receipt Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.9, "How to Configure the AR Credit-Memo Application Extract for Oracle Receivables Analytics"
    Section 5.3.2.10, "How to Enable Project Analytics Integration with Financial Subject Areas"
    Also, another reason I had was that if you choose not to filter by set of books/type, you must still include the default values (set of books id 1 and type NONE) in the list of DAC parameters (this is a consequence of strange logic in the DECODE statements in the extract SQL).
    I would recommend you run the extract fact SQL prior to running your execution plan, to sanity check how many rows you expect to get. For example, to see how many journal lines, use Informatica Designer to view mapplet SDE_ORA11510_Adaptor.mplt_BC_ORA_GLXactsJournalsExtract.2.SQ_GL_JE_LINES.3 in mapping SDE_ORA11510_Adaptor.SDE_ORA_GLJournals, then cut and paste the SQL into your favourite tool such as SQL Developer. You will have to fill in the $$ variables yourself.
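    As a very rough illustration only (the real extract SQL comes from the mapplet; GL_JE_LINES is the EBS source table behind the SQ_GL_JE_LINES source qualifier, and the date filter and $$ placeholder here are assumptions):

    -- Sanity check journal-line volume before the full load; substitute the value
    -- configured for $$INITIAL_EXTRACT_DATE in DAC.
    SELECT COUNT(*)
    FROM   gl_je_lines
    WHERE  last_update_date >= TO_DATE('$$INITIAL_EXTRACT_DATE', 'YYYY-MM-DD');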

  • Performance problem on a table with zero rows

    I queried the v$sqlarea table to find which SQL statement was doing the most disk reads and it turned out to be a query that read 278 Gigabytes from a table which usually has zero rows. This query runs every minute and reads any rows that are in the table. It does not do any joins. The process then processes the rows and deletes them from the table. Over the course of a day, a few thousand rows may be processed this way with a row length of about 80 bytes. This amounts to a few kilobytes, not 278 Gig over a couple of days. Buffer gets were even higher at 295 Gig. Note that only the query that reads the table is doing these disk reads, not the rest of the process.
    There are no indexes on the table, so a full table scan is done. The query reads all the rows, but usually there are zero. I checked the size of the table in dba_segments, and it was 80 Meg. At one point months ago, during a load, the table had 80 Meg of data in it, but this was deleted after being processed. The size of the table was never reduced.
    I can only assume that Oracle is doing a full table scan on all 80 Meg of this table every minute. Even when there are zero rows. Dividing the buffer gets in bytes by the number of executions yields 72 Meg which is close to the table size. Oracle is reading the entire table size from disk even when the table has zero rows.
    The solution was to truncate the table. This helped immediately and reduced the disk reads to zero most of the time. The buffer gets were also reduced to 3 per execution when the table was empty. The automatic segment manager reduced the size of the table to 64k overnight.
    Our buffer cache hit ratio was a dismal 72%. It should go up now that this problem has been resolved.
    Table statistics are gathered every week. We are running Oracle 9.2 on Solaris.
    Note that this problem is already resolved. I post it because it is an interesting case.
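    (In script form, the diagnosis and fix boil down to something like this; a sketch with a placeholder table name.)

    -- Full scans read everything up to the high-water mark, so an "empty" table can still be ~80 MB.
    SELECT segment_name, bytes/1024/1024 AS size_mb
    FROM   dba_segments
    WHERE  segment_name = 'MY_QUEUE_TABLE';

    -- TRUNCATE resets the high-water mark; DELETE does not.
    TRUNCATE TABLE my_queue_table;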
    Kevin Tyson, OCP
    DaimlerChrysler Tech Center, Auburn Hills, MI

    Kevin,
    "The solution was to truncate the table" - this is hardly a scoop, is it?
    "Table statistics are gathered every week" - is there any reason for that?
    If stats are gathered when there are no rows, performance can be very bad after loading data, and if stats are gathered when there are thousands of rows, performance can be very bad after deleting. Perhaps you can find a more suitable time to gather the stats?
    Nicolas.
    Message was edited by:
    N. Gasparotto

  • How to use EVS with different data in each row, in a Java Web Dynpro table?

    Hi all,
    I am using EVS in a column of java web dynpro table.
    Let's say the name, and context attribute, of this column is column1.
    It's filled dynamically using an RFC, that uses as input parameter the value of another column, and related context attribute, from the same table (Let's call it column2).  Obviously, from the same row. So, in other words: the values of the EVS in column1 of row1, are dependent of the value of column2 of row1. And the values of the EVS in column1 of row2, are dependent of the value of column2 of row2. And so on... Hope i could explain myself ok.
    The code I'm using works great for filling the EVS dynamically:
    IWDAttributeInfo attrInfo = wdContext.nodeDetail().getNodeInfo().getAttribute(nodeElement.COLUMN1);
    ISimpleTypeModifiable siType = attrInfo.getModifiableSimpleType();
    IModifiableSimpleValueSet<String> value = siType.getSVServices().getModifiableSimpleValueSet();
    value.clear();
    if(this.initRFC_Input(nodeElement.getColumn2())){
         for (int i = 0; i < wdContext.nodeRFCresult().size(); i++){
              value.put(wdContext.nodeRFCresult().getRFCresultElementAt(i).getLgort()
                 , wdContext.nodeRFCresult().getRFCresultElementAt(i).getLgobe());
         }
    }
    In this code, nodeElement is the context row of the table that is passed dynamically to the method when the value of colum2 is changed.
    HOWEVER, the problem I'm having is that after executing this code, EACH NEW ROW that is added to the table has by default the same values as the first row, in the column1 EVS. And, for example, if I refresh the values of the column1 EVS in row 2, all EVS values in the other rows are also refreshed with the same values as the ones of EVS in row 2.
    How can I make sure each row EVS has its own set of independent values, so they don't mess with each other?
    Hope you guys can help me. And please, let me know if I didn't explain myself correctly!
    Thanks!

    I just did as you said (I think), but it's still having the same behaviour as before (same data for all EVS in the table).
    Here´s what I did:
    I
    In node "Detail" (cardinality 0...n, singleton set to true), which is binded to the table, I created a child node named "Column1Values" wth cardinality 1...1 and singleton set to false.
    "Column1Values" node has an attribute called "column1", of type String.
    I did the binding between attribute "column1" and the column1 inputfield celleditor in the table.
    I created an event called Column2Changed and binded it to the column2 celleditor of the table. I added a parameter called nodeElement of type IPrivateCompView.IDetailElement to this event, and mapped it to the column2 editor in the table so that I can dynamically get the nodeElement that is being affected.
    I added the following code to the onActionColumn2Changed(wdEvent, nodeElement) method that gets created in the view:
    IWDAttributeInfo attrInfo = nodeElement.nodeColumn1Values().getNodeInfo().getAttribute("column1");
    ISimpleTypeModifiable siType = attrInfo.getModifiableSimpleType();
    IModifiableSimpleValueSet<String> value = siType.getSVServices().getModifiableSimpleValueSet();
    if(this.initRFC_Input(nodeElement.getColumn2())){
         for(int i = 0; i < wdContext.nodeRFCresults().size(); i++){
              value.put(wdContext.nodeRFCresults().getRFCresultsElementAt(i).getId(),
                                  wdContext.nodeRFCresults().getRFCresultsElementAt(i).getDesc());
         }
    }
    And with this, I still get the original problem... When the EVS of one row is updated, ALL other EVS of the table get also updated with the same values.
    What am I missing? Sorry Govardan, I bet I'm not seeing something really obvious... hopefully you can point me in the right direction.
    Thanks!

  • Show scrollpane only when table has x rows

    Is there any way to not make use of a scrollpane when a table currently has fewer than 5 rows?
    I have my table embedded in a JScrollPane, but I want to show the scrollpane only if my table has at least
    'x' number of rows.
    Thanks,

    Something like...
    DefaultTableModel tm = new DefaultTableModel(data, index);
    int size = tm.getRowCount();
    JTable t = new JTable(tm);
    JPanel p = new JPanel();
    if (size > 5) {
      JScrollPane jsp = new JScrollPane(t);
      p.add(jsp);
    } else {
      p.add(t);
    }
    // add p to JFrame

  • How to select data from 3rd row of Excel to insert into Sql server table using ssis

    Hi,
    I have Excel files with headers in the first two rows. I want to skip those two rows and select data from the 3rd row onward to insert into a SQL Server table using SSIS. The 3rd row contains the column names.

                                                         CUSTOMER DETAILS
                         REGION
    COL1        COL2        COL3        COL4        COL5        COL6        COL7        COL8        COL9        COL10        COL11
    1            XXX            yyyy         zzzz
    2            XXX            yyyy        zzzzz
    3           XXX            yyyy          zzzzz
    4          XXX             yyyy          zzzzz
    The first two rows have merged cells with headings in Excel. I want to skip the first two rows, select the data from the 3rd row onward, and insert it into SQL Server using SSIS.
    Set range within Excel command as per below
    See
    http://www.joellipman.com/articles/microsoft/sql-server/ssis/646-ssis-skip-rows-in-excel-source-file.html
    Please Mark This As Answer if it solved your issue
    Please Mark This As Helpful if it helps to solve your issue
    Visakh

  • How to dump table to the flat file, if the table has NVARCHAR data type fie

    I need to dump the table to the text file. The table has NVARCHAR data type field and simple select * from table_name does not work. What do I have to do? Do I need convert NVARCHAR to VARCHAR and if yes, then how?
    Thanks,
    Oleg


  • Order table data with comparing rows

    Hi,
    My question is...
    I have table with with 5 columns this actually is been loaded from a file and there is no direct PK, but for that combination of columns as used to make them unique
    code, person, case
    Table X
    CODE             CASE              OLD_ID                NEW_ID            PERSON       AUTH
    01              ab122         1234               0001             AU123     99393
    07              vv353          7872               0919             FV982     78282
    01              ab122         1982               9929             AU123     99393
    04               hjsss         8839                8302            JK920     32320
    01              ab122         0001               1982             AU123     99393
    05              cg899         6728               32322           IKL020     65252
    07              w353          0919                8282             FV982     78282
    Now I need to order this data by comparing the OLD_ID and NEW_ID values within each combination of CODE, PERSON and CASE.
    need output like below
    Table X
    CODE             CASE              OLD_ID                NEW_ID            PERSON       AUTH
    01              ab122         1234               0001             AU123     99393
    01              ab122         0001               1982             AU123     99393
    01              ab122         1982               9929             AU123     99393
    04               hjsss         8839                8302            JK920     32320
    05              cg899         6728              32322           IKL020     65252
    07              vv353          7872               0919             FV982     78282
    07              w353          0919                8282            FV982     78282
    How to get this result? Any help is much appreciated.
    Thanks,
    AK.

    Yes you are right, I took it down to edit this in that time you have posted the message...
    Question:
    Table data need to be sorted by matching old_id with new_id
    actual table : here rows are randomly arranged
    Table X
    CODE             CASE              OLD_ID                NEW_ID            PERSON       AUTH
    01              ab122         1234               0001             AU123     99393
    07              vv353          7872               0919             FV982     78282
    01              ab122         1982               9929             AU123     99393
    04               hjsss         8839                8302            JK920     32320
    01              ab122         0001               1982             AU123     99393
    05              cg899         6728               32322           IKL020     65252
    07              w353          0919                8282             FV982     78282
    Output table: here, as you can see, the OLD_ID of row 2 matches the NEW_ID of row 1, and so on.
    Table X
    CODE             CASE              OLD_ID                NEW_ID            PERSON       AUTH
    01              ab122         1234               0001             AU123     99393
    01              ab122         0001               1982             AU123     99393
    01              ab122         1982               9929             AU123     99393
    04               hjsss         8839                8302            JK920     32320
    05              cg899         6728              32322           IKL020     65252
    07              vv353          7872               0919             FV982     78282
    07              w353          0919                8282            FV982     78282
    So I need a query that can generate this output.
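    One way to produce that ordering is a hierarchical query that chains each row's NEW_ID to the next row's OLD_ID within a group (a sketch against the table as posted; "CASE" is quoted because it is a reserved word, and names may need adjusting):

    SELECT code, "CASE", old_id, new_id, person, auth
    FROM   x
    START WITH old_id NOT IN (SELECT new_id FROM x)
    CONNECT BY  PRIOR new_id = old_id
            AND PRIOR code   = code
            AND PRIOR person = person
    ORDER BY code, LEVEL;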

  • Imp is not importing zero rows tables

    Hi....
    I have successfully exported the schema......
    But while importing into another DB, it is not importing zero row tables.....
    Why is it happening like that? Is there any specific reason for it?
    Please help, it's urgent.
    Thanks in advance.

    Hi
    The export is fine... in the log file it shows all the zero-row tables being exported,
    but while importing... the log is missing the 5 zero-row tables.
    any way....
    exp system/password file=/home/oracle/export_backups_manual/schema_backup_09_05_2011.dmp owner=schema_name log=/home/oracle/export_backups_manual/schema_backup_09_05_2011.log
    imp system/password file=/home/oracle/schema_backup_09_05_2011.dmp fromuser=schema_from touser=schema_to log=/home/oracle/schema_backup_09_05_2011.imp.log
    Thanks
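    (One possible cause, if this is 11g: with deferred segment creation, empty tables have no segment yet and original exp skips them. A hedged workaround sketch, run as the schema owner before the export:)

    -- Generate ALTER statements for empty tables without a segment; run the generated
    -- statements, then re-run exp so the zero-row tables are picked up.
    SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT;'
    FROM   user_tables
    WHERE  segment_created = 'NO';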

  • Need to hide row when table has 1 entry in adobe

    Dear Experts,
    I have written a select statement in the initialization, and in the context I have added table EKPO and, under it, EKET based on an EBELP where clause. Then I have created a subform for each table, with EKPO as role Body Row and EKET as role Table for subform1 and role Body Row for subform2.
    Eg: Table EKPO
                  Table EKET
    I need to hide the table if EKET has only one row. For that, I want to know the number of rows in the table: if there is just one row it should be hidden, otherwise it should be shown in the Adobe form.
    Sharrad Dixit
    dixitasharad at gmail

    Hi A,
    I hope as per your previous post, you might have already set the presentation variable. You can write the column formula now as:
    case when @{variables.country} = 'All Choices' then sum(revenue) by year else <your previous case to hide the USA column> end.
    Hope this helps.
    Thank you,
    Dhar

  • APP has been downloaded,But can't see the specific data, the data is zero.

    The app has been downloaded, but I can't see the specific data; the data shows zero.
    ID:[email protected]
         [email protected]
    Waiting for your reply,thank you!

    Analytics can be accessed from the account under which folios are published for that app. If you don't see it under the account, I think it's best you contact support. Login to your DPS dashboard and click on "Contact Support" at the bottom left

  • My iMac4 has depleted its disk space, a fter numerous rebuilds the database is still generating error messages My external HD is basically my storage for all data. Is it feasible to purchase Memory...?

    If this is a viable option how would I connect the memory in an effort to purchase additional disk space and upgrade to Lion? I currently use Snow Leopard as my OS but there are too many compatibility issues to consider. Thank you for your response. Cheers.

    You are confusing RAM (Random Access Memory) with data storage space on a hard drive.
    Cloud ( offsite server data storage) services to purchase remote data storage are NOT going to help you with this issue.
    You need to free up disk space on your Mac's internal hard drive.
    Here are some general tips to keep your Mac's hard drive trim and slim as possible
    You should never, EVER let a computer hard drive get completely full, EVER!
    With Macs and OS X, you shouldn't let the hard drive get below about 15 GB of free space.
    If it does, it's time for some hard drive housecleaning.
    Follow some of my tips for cleaning out, deleting and archiving data from your Mac's internal hard drive.
    Have you emptied your Mac's Trash icon in the Dock?
    If you use iPhoto or Aperture, both have their own trash that needs to be emptied, too.
    If you store images in other locations other than iPhoto, then you will have to weed through these to determine what to archive and what to delete.
    If you are an iMovie/ Final Cut user, both apps have their own individual Trash location that needs to be emptied, too!
    If you use Apple Mail app, Apple Mail also has its own trash area that needs to be emptied, too!
    Delete any old or no longer needed emails and/or archive to disc, flash drives or external hard drive, older emails you want to save.
    Look through your other Mailboxes and other Mail categories to see If there is other mail you can archive and/or delete.
    STAY AWAY FROM DELETING ANY FILES FROM OS X SYSTEM FOLDER!
    Look through your Documents folder and delete any type of old useless type files like "Read Me" type files.
    Again, archive to disc, flash drives, ext. hard drives or delete any old documents you no longer use or immediately need.
    Look in your Applications folder, if you have applications you haven't used in a long time, if the app doesn't have a dedicated uninstaller, then you can simply drag it into the OS X Trash icon. IF the application has an uninstaller app, then use it to completely delete the app from your Mac.
    To find other large files, download an app called Omni Disk Sweeper.
    http://www.omnigroup.com/more
    Also, Find Any File
    http://apps.tempel.org/FindAnyFile/
    Download an app called OnyX for your version of OS X.
    http://www.titanium.free.fr/downloadonyx.php
    When you install and launch it, let it do its initial automatic tests, then go to the cleaning and maintenance tabs and run the maintenance tabs that let OnyX clean out all web browser cache files, web browser histories, system cache files, delete old error log files.
    Typically, iTunes and iPhoto libraries are the biggest users of HD space.
    Move these files/data off of your internal drive to the external hard drive and then delete them from the internal hard drive.
    If you have any other large folders of personal data or projects, these should be archived or moved, also, to the optical discs, flash drives or external hard drive and then either archived to disc and/or deleted off your internal hard drive.
    Moving iTunes library
    http://support.apple.com/kb/HT1449
    Moving iPhoto library
    http://support.apple.com/kb/PH2506
    Moving iMovie projects folder
    http://support.apple.com/kb/ph2289
    Things to consider before moving your iPhoto Library Folder to a new or external location like an external hard drive.
    If you make movies on any iDevices using iMovie for iOS,, then transfer the video footage, the IOS version of iMovie saves the footage as a movie file in IPhoto for IOS and will automatically get transferred to iPhoto for the Mac when you upload the video from your iDevice.
    Newer versions of iMovie will work with and link to those video files found in your iPhoto Library on your Mac, but those links can be lost if you move your iPhoto library, and you will not be able to relink the video afterwards, as current versions of iMovie seem not to have a relink option for the video portion of the files (ironically, current versions of iMovie DO have the ability to re-link the audio from the video footage). The inability to re-link the video files could be a bug or oversight in current versions of iMovie.
    The lost video links show up as "blacked-out" video blocks with no content.
    Before moving the iPhoto Library
    If you make movies with iMovie using iPad or iPhone video, then 'Consolidate' the files before you finish. This will gather (albeit by duplicating) all the relevant files in the project in one place. After consolidating/duplicating all of the audio and video footage to a separate, independent location, it should be safe to move your iPhoto library.
    A potential way to circumvent this issue may be to import iPad and iPhone video directly into iMovie, which would be another solution.
    Good Luck!

  • Issue with Selection Listener when the table has only one row

    Hi All ,
    I have developed a table in which I am using Selection Listener to perform some task when any row is selected.
    It is working fine when I have more than one row in the table, but when I have only one row in the table, the selection listener does not call the corresponding method in the bean.
    I understand that selection event will be raised only when the row is changed, but in the use case when only one row is there, what should be done to make the selection listener work ?
    In the selection listener I have written code to make the selected row as current row , and perform the required task.
    Please suggest a way out for this situation.
    Thanks in advance.

    Hi,
    try removing this attr from table
    selectedRowKeys="#{bindings.xxx_VO1.collectionModel.selectedRow}"

  • 10g: parallel pipelined table func - distributing DISTINCT data sets

    Hi,
    I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
    The tables, where i am selecting data from, are partitioned and subpartitioned.
    All tables share the same partitioning/subpartitioning schema.
    Each table has a column 'Subpartition_Key', which is hashed to a physical subpartition.
    E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
    The select of records is done partition-wise - one partition after another (in bulks).
    The parallel worker threads select more data from other tables for their processing (2nd-level select).
    Now my goal is to distribute the initial records to the worker threads in a way that they operate on distinct subpartitions - to decouple the access to resources (for the 2nd-level select).
    But I cannot just use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution.
    hash(subpartition_key) (hashing A) does not match the hashing B used to assign the physical subpartition for the INSERT into the tables.
    Even when I remodel hashing B, calculate some SubPartNo(subpartition_key) and use that for 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
    Also 'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help. The load distribution is unbalanced - some worker threads get data of one subpartition, some of multiple subpartitions, some are idle.
    How can I distribute the records to the worker threads according to a given subpartition schema?
    [Amendment: actually the hashing for the parallel_enable is counterproductive here - it would be better to have something like 'parallel_enable(partition curStage1 by SubPartNo)'.]
    - many thanks!
    best regards,
    Frank
    Edited by: user8704911 on Jan 12, 2012 2:51 AM

    Hello
    A couple of things to note. First, when you use partition by hash (or range) on 10gR2 and above, there is an additional BUFFER SORT operation versus using partition by ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check temp space usage for this process or you could run into problems later.
    | Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                      |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                       |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                 | :TQ10001 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
    |   3 |****BUFFER SORT****                    |          |  8168 |  1722K|            |          |       |       |  Q1,01 | PCWP |            |
    |   4 |     VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   5 |      COLLECTION ITERATOR PICKLER FETCH| TF       |       |       |            |          |       |       |  Q1,01 | PCWP |            |
    |   6 |       PX RECEIVE                      |          |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   7 |        PX SEND HASH                   | :TQ10000 |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
    |   8 |         PX BLOCK ITERATOR             |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
    |   9 |          TABLE ACCESS FULL            | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
    -----------------------------------------------------------------------------------------------------------------------------------------------
    It may be in this case that you can use clustering with partition by any to achieve your goal...
    create or replace package test_pkg as
         type Test_Tab_Rec_t is record (
              Tracking_ID                 number(19),
              Partition_Key               date,
              Subpartition_Key            number(3),
          sid                    number
     );
     type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
         type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
         type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
         type Cur_t is ref cursor return Test_Tab_Rec_t;
         procedure populate;
         procedure report;
         function tf(cur in Cur_t)
         return test_list pipelined
         parallel_enable(partition cur by hash(subpartition_key));
         function tf_any(cur in Cur_t)
         return test_list PIPELINED
        CLUSTER cur BY (Subpartition_Key)
         parallel_enable(partition cur by ANY);   
    end;
    create or replace package body test_pkg as
         procedure populate
         is
              Tracking_ID number(19) := 1;
              Partition_Key date := current_timestamp;
              Subpartition_Key number(3) := 1;
         begin
              dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
              for Subpartition_Key in 0..99
              loop
                   for ctr in 1..1
                   loop
                        insert into test_tab (tracking_id, partition_key, subpartition_key)
                        values (Tracking_ID, Partition_Key, Subpartition_Key);
                        Tracking_ID := Tracking_ID + 1;
                   end loop;
              end loop;
              dbms_output.put_line('...done (populate data into Test_Tab)');
         end;
         procedure report
         is
              recs Test_Tab_Rec_Tab_t;
         begin
              dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
              for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
              loop
                   dbms_output.put_line('partition/subpartition = '  || item.partition_name || '/' || item.subpartition_name || ':');
                   execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
                   if recs.count > 0
                   then
                        for i in recs.first..recs.last
                        loop
                             dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key  || ', ' || recs(i).Subpartition_Key);
                        end loop;
                   end if;
              end loop;
              dbms_output.put_line('... done (list data per partition/subpartition)');
         end;
         function tf(cur in Cur_t)
         return test_list pipelined
         parallel_enable(partition cur by hash(subpartition_key))
         is
              sid number;
              input Test_Tab_Rec_t;
              output test_object;
         begin
              select userenv('SID') into sid from dual;
              loop
                   fetch cur into input;
                   exit when cur%notfound;
                   output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
                   pipe row(output);
              end loop;
         end;
         function tf_any(cur in Cur_t)
         return test_list PIPELINED
        CLUSTER cur BY (Subpartition_Key)
         parallel_enable(partition cur by ANY)
         is
              sid number;
              input Test_Tab_Rec_t;
              output test_object;
         begin
              select userenv('SID') into sid from dual;
              loop
                   fetch cur into input;
                   exit when cur%notfound;
                   output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
                   pipe row(output);
              end loop;
         end;
    end;
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_hash target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  ORDER BY
    28        target.sid,
    29        parts.subobject_name
    30  /
    XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
      2  /
    100 rows created.
    Elapsed: 00:00:00.14
    XXXX>
    XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
      2  /
    100 rows created.
    --using partition by hash
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_hash target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  /
      COUNT(*) SUBOBJECT_NAME                        SID
             3 SYS_SUBP31                           1272
             1 SYS_SUBP32                           1272
             1 SYS_SUBP33                           1272
             3 SYS_SUBP34                           1272
             1 SYS_SUBP36                           1272
             1 SYS_SUBP37                           1272
             3 SYS_SUBP38                           1272
             1 SYS_SUBP39                           1272
             1 SYS_SUBP32                           1280
             2 SYS_SUBP33                           1280
             2 SYS_SUBP34                           1280
             1 SYS_SUBP35                           1280
             2 SYS_SUBP36                           1280
             1 SYS_SUBP37                           1280
             2 SYS_SUBP38                           1280
             1 SYS_SUBP40                           1280
             2 SYS_SUBP33                           1283
             2 SYS_SUBP34                           1283
             2 SYS_SUBP35                           1283
             2 SYS_SUBP36                           1283
             1 SYS_SUBP37                           1283
             1 SYS_SUBP38                           1283
             2 SYS_SUBP39                           1283
             1 SYS_SUBP40                           1283
             1 SYS_SUBP32                           1298
             1 SYS_SUBP34                           1298
             1 SYS_SUBP36                           1298
             2 SYS_SUBP37                           1298
             4 SYS_SUBP38                           1298
             2 SYS_SUBP40                           1298
             1 SYS_SUBP31                           1313
             1 SYS_SUBP33                           1313
             1 SYS_SUBP39                           1313
             1 SYS_SUBP40                           1313
             1 SYS_SUBP32                           1314
             1 SYS_SUBP35                           1314
             1 SYS_SUBP38                           1314
             1 SYS_SUBP40                           1314
             2 SYS_SUBP33                           1381
             1 SYS_SUBP34                           1381
             1 SYS_SUBP35                           1381
             3 SYS_SUBP36                           1381
             3 SYS_SUBP37                           1381
             1 SYS_SUBP38                           1381
             2 SYS_SUBP36                           1531
             1 SYS_SUBP37                           1531
             2 SYS_SUBP38                           1531
             1 SYS_SUBP39                           1531
             1 SYS_SUBP40                           1531
             2 SYS_SUBP33                           1566
             1 SYS_SUBP34                           1566
             1 SYS_SUBP35                           1566
             1 SYS_SUBP37                           1566
             1 SYS_SUBP38                           1566
             2 SYS_SUBP39                           1566
             3 SYS_SUBP40                           1566
             1 SYS_SUBP32                           1567
             3 SYS_SUBP33                           1567
             3 SYS_SUBP35                           1567
             3 SYS_SUBP36                           1567
             1 SYS_SUBP37                           1567
             2 SYS_SUBP38                           1567
    62 rows selected.
    --using partition by any cluster by subpartition_key
    Elapsed: 00:00:00.26
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_any_cluster target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  ORDER BY
    28        target.sid,
    29        parts.subobject_name
    30  /
      COUNT(*) SUBOBJECT_NAME                        SID
            11 SYS_SUBP37                           1253
            10 SYS_SUBP34                           1268
             4 SYS_SUBP31                           1289
            10 SYS_SUBP40                           1314
             7 SYS_SUBP39                           1367
             9 SYS_SUBP35                           1377
            14 SYS_SUBP36                           1531
             5 SYS_SUBP32                           1572
            13 SYS_SUBP33                           1577
            17 SYS_SUBP38                           1609
    10 rows selected.
    Bear in mind though that this does require a sort of the incoming dataset but does not require buffering of the output...
    PLAN_TABLE_OUTPUT
    Plan hash value: 2570087774
    | Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                     |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                | :TQ10000 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
    |   4 |     COLLECTION ITERATOR PICKLER FETCH| TF_ANY   |       |       |            |          |       |       |  Q1,00 | PCWP |            |
    |   5 |      SORT ORDER BY                   |          |       |       |            |          |       |       |  Q1,00 | PCWP |            |
    |   6 |       PX BLOCK ITERATOR              |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
    |   7 |        TABLE ACCESS FULL             | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
    ----------------------------------------------------------------------------------------------------------------------------------------------
    HTH
    David

  • Need to split data from one row into a new row redux

    Hi folks,
    I asked this question about eight months ago (see thread https://discussions.apple.com/message/23961353#23961353) and got an excellent response from forum regular Wayne Contello.  However, I need to perform this operation again and when I attempted it recently, I am now greeted with a yellow warning triangle.  Clicking it shows "This formula can’t reference its own cell, or depend on another formula that references this cell."
    What I'm trying to do is the following:
    I have an excel file that keeps track of members of a social group.  The file places each member "unit" on a single row.  The unit can be a single person or a couple.  Columns are labeled "First1" "Last1" "Hometown1" "B-day1" while the second member of the unit is identified in columns like "First2" "Last2" etc.
    What I'd like to do is duplicate those rows with two people (which I'll do by hand) but have a way of deleting the "xxxx2" data from one row and the "xxxx1" data from the duplicate row.
    Wayne's illustrated solution was to create a blank sheet and enter the following formula in cell A2:
    =OFFSET(Input Data::$A$2, INT((ROW()−2)÷2), COLUMN()−1+IF(MOD(ROW()−2, 2)=0, 0, 4)), which apparently worked fine for me last year but now is sending up an error flag.  When I look at the formula, there is no clue except that which I quoted above.
    Can anyone (or hopefully Wayne) take a second look at this and help me out?  I can't imagine that it's a problem with using the newer version of Numbers, but who knows?  I'm using version 3.2 (1861), which is the "new" Numbers.
    Any help would really be appreciated.
    Thanks!
    -Tod

    Hi Tod,
    The error message "This formula can’t reference its own cell, or depend on another formula that references this cell." may be because your table is different from the one you were using for Wayne's solution. Numbers has Header Rows, Footer Rows and Header Columns, and such headers exclude themselves from formulas; Excel does not recognise them as headers. What table are you using now?
    A screen shot of (the top left portion of) your table or a description of what you see under Menu > Table will help.
    Regards,
    Ian.
