Determine size of tables/tablespaces on disk using EM

Hi,
I now have access to 10g Enterprise Manager (newbie here).
I usually query user_segments to find the size of a table/tablespace on disk.
Is there a way to do this using Enterprise Manager?
Thanks

For tables:
Login->Schema->Tables->'Select Schema'->Go -> you will see all the tables -> click on the table you want to see all of its information.
For tablespaces:
Login->Server->Tablespaces -> you will see all the tablespaces.
Just explore EM; you will find everything there. If you still want the numbers from SQL, the queries below give the same figures.
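
A minimal SQL sketch of the dictionary-view approach (the SCOTT/EMP names are placeholders for illustration):

select segment_name, sum(bytes)/1024/1024 as size_mb
  from dba_segments
 where owner = 'SCOTT'
   and segment_name = 'EMP'
 group by segment_name;

select tablespace_name, sum(bytes)/1024/1024 as size_mb
  from dba_segments
 group by tablespace_name;

(user_segments works the same way for your own schema, without the owner column.)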

Similar Messages

  • Size of Tables & Tablespaces

    Hi All,
    I have a question about the size of tables and tablespaces. For example, last week my table (EMP) was 5 GB; 7 days later its size had changed to 2 GB. How can I find out the table's size as of last week versus its size 7 days later?
    Please help me.

    What version of Oracle? I know that in 10g+ there is a table in the AWR that holds tablespace size data, but I am not sure about the segment level. The catch is that you are not allowed to query the AWR tables without an EM Diagnostics and/or Performance Pack license in addition to your regular Oracle license.
    In any case, you can roll your own table-size tracking process by regularly copying the dba_segments rows that have changed since the last check. You can use cron or dbms_scheduler to run a PL/SQL routine that does the comparison and saves the new sizing information; a sketch is below.
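
    A minimal sketch of such a job (the object names and the weekly interval are just examples, and this simplified version snapshots every segment rather than only the changed rows; the owning schema needs SELECT granted directly on dba_segments):

    create table segment_size_hist as
      select sysdate as snap_date, owner, segment_name, segment_type,
             tablespace_name, bytes
        from dba_segments
       where 1 = 0;

    create or replace procedure snap_segment_sizes is
    begin
      insert into segment_size_hist
        select sysdate, owner, segment_name, segment_type, tablespace_name, bytes
          from dba_segments;
      commit;
    end;
    /

    begin
      dbms_scheduler.create_job(
        job_name        => 'SNAP_SEGMENT_SIZES_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'SNAP_SEGMENT_SIZES',
        repeat_interval => 'FREQ=WEEKLY',
        enabled         => TRUE);
    end;
    /

    Comparing rows from two snap_date values then shows how much any table grew over that period.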
    HTH -- Mark D Powell --

  • Best Way to Determine if a Table is used in a Report?

    Hello,
    I'm looking to modify an existing report developed by someone who is no longer with the company. I believe they are joining unneeded tables and I'd like to remove them. Is there a useful method to determine whether a table/view is not being used in the report?
    Just looking for any suggestions/advice on how to handle this.
    Thanks,

    Hi Trey,
    Try this.
    In your report, go to Field Explorer -> Database Fields.
    It will list the command objects/tables that are added to your report.
    If you want to see which command objects/tables are used in the report, expand them and check whether any of the fields belonging to them has a tick mark.
    A tick mark next to a field means that field is used in the report.

  • Size for TEMP tablespace

    I don't know if this is a "valid" question. We have users running reports on our production system. They sometimes complain about the temp space being too small (their queries crash when using too much temp space).
    But I also have a feeling that you can keep throwing disks at TEMP space, and that it will never be enough.
    What should the size of a database's TEMP space be - is there a rule of thumb for this ?
    Dirk

    There are several considerations to take into account when you size your temporary tablespace:
    First, how much sort space does your average transaction need, and how many concurrent transactions does your system need to support? This gives you a starting point for the minimum workable size for normal operations.
    Next, how big is the largest table on your system, and do you wish to be able to support a select * from your biggest table with an order by? Supporting an unqualified sorted select on your largest table may not be required.
    How big is the largest index in your system? You probably want enough temp space available to recreate this index in the event of corruption without having to take special action to allocate more space to temp on a temporary basis. But having to add space in the event of a disaster might be acceptable.
    Figure out the largest sort operation you need to be able to support, then add enough space to handle the number of concurrent average transactions expected to be on the system at the same time. This is the size you should use for your temporary tablespace.
    It is better to have all the space you will need for normal and maintenance operations available at all times than to have to hunt for additional file space during special maintenance tasks or disaster recovery operations.
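
    To sanity-check the sizing, you can compare what is actually in use with what is allocated; a quick sketch (10g view names):

    -- temp space currently in use, per temporary tablespace
    select tablespace, sum(blocks) as blocks_in_use
      from v$tempseg_usage
     group by tablespace;

    -- total space allocated to the temp files
    select tablespace_name, sum(bytes)/1024/1024 as allocated_mb
      from dba_temp_files
     group by tablespace_name;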
    HTH -- Mark D Powell --

  • Determine size of pdf from spool

    All,
    Does anyone know how to determine the size of a PDF generated from a spool request?
    I'm using RSPO_RETURN_ABAP_SPOOLJOB to get the PDF file, but I want to know its size.
    any idea?

    Assume the PDF data is in internal table gt_messg_att.
    * Declarations assumed for this snippet (p_spoolno is a selection-screen
    * parameter holding the spool request number); adjust the types to your program.
    TYPES ty_line255 TYPE c LENGTH 255.
    DATA: lv_spoolno    TYPE tsp01-rqident,
          lv_bytecount  TYPE i,
          gt_pdf_output TYPE STANDARD TABLE OF tline,
          wa_pdf_output TYPE c LENGTH 134,   " flat work area: TDFORMAT + TDLINE
          v_buffer      TYPE string,
          gt_messg_att  TYPE STANDARD TABLE OF ty_line255,
          wa_messg_att  TYPE ty_line255,
          lv_cnt        TYPE i,
          lwa_doc_data  TYPE sodocchgi1.
    CONSTANTS lc_no_dialog TYPE c LENGTH 1 VALUE 'X'.

    MOVE p_spoolno TO lv_spoolno.
    * CONVERT THE SPOOL TO PDF
      CALL FUNCTION 'CONVERT_OTFSPOOLJOB_2_PDF'
        EXPORTING
          src_spoolid                    = lv_spoolno
          no_dialog                      = lc_no_dialog
    *     DST_DEVICE                     =
    *     PDF_DESTINATION                =
       IMPORTING
         pdf_bytecount                  = lv_bytecount
    *     PDF_SPOOLID                    =
    *     OTF_PAGECOUNT                  =
    *     BTC_JOBNAME                    =
    *     BTC_JOBCOUNT                   =
       TABLES
         pdf                            = gt_pdf_output
       EXCEPTIONS
         err_no_otf_spooljob            = 1
         err_no_spooljob                = 2
         err_no_permission              = 3
         err_conv_not_possible          = 4
         err_bad_dstdevice              = 5
         user_cancelled                 = 6
         err_spoolerror                 = 7
         err_temseerror                 = 8
         err_btcjob_open_failed         = 9
         err_btcjob_submit_failed       = 10
         err_btcjob_close_failed        = 11
         OTHERS                         = 12.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ELSE.
    * Transfer the 132-long strings to 255-long strings
        LOOP AT gt_pdf_output INTO wa_pdf_output.
          TRANSLATE wa_pdf_output USING ' ~'.
          CONCATENATE v_buffer wa_pdf_output INTO v_buffer.
          MODIFY gt_pdf_output FROM wa_pdf_output.
          CLEAR wa_pdf_output.
        ENDLOOP.
    * TO CONVERT THE DATA INTO PDF FORMAT ELSE THE PDF FILE
    * WON'T OPEN & REPORT WOULD GIVE A MESSAGE THAT
    * THE FILE IS DAMAGED & COULD NOT BE OPENED
        TRANSLATE v_buffer USING '~ '.
        CLEAR : wa_messg_att,
                gt_messg_att.
        DO.
          wa_messg_att = v_buffer.
          APPEND wa_messg_att TO gt_messg_att.
          SHIFT v_buffer LEFT BY 255 PLACES.
          IF v_buffer IS INITIAL.
            EXIT.
          ENDIF.
          CLEAR wa_messg_att.
        ENDDO.
      ENDIF.
    * Get size of attachment
      DESCRIBE TABLE gt_messg_att LINES lv_cnt.
      READ TABLE gt_messg_att INTO wa_messg_att INDEX lv_cnt.
      lwa_doc_data-doc_size =  ( lv_cnt - 1 ) * 255 + STRLEN( wa_messg_att ).
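    * Note: lv_bytecount returned by CONVERT_OTFSPOOLJOB_2_PDF above already
    * holds the size of the generated PDF in bytes.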

  • Which tables store space disk information ?

    Hi ,
    In DB02 you can see the disk space evolution of the BW database by day, week and month.
    I want to create queries with the same data.
    For this I need to create a cube about disk space.
    What are the tables where the disk space values are stored?
    If I know the tables I will then create a DataSource on them.
    Thanks
    Sebastien

    Hi,
    Ask your database admin!
    => I think you can't create a DataSource for this directly; with native SQL perhaps (see http://help.sap.com/saphelp_470/helpdata/en/fc/eb3b8b358411d1829f0000e829fbfe/content.htm).
    On the internet I found, for example: http://database.ittoolbox.com/groups/technical-functional/db2-l/how-to-calculate-the-table-size-in-db2-1691545
    => You need the system tables syscat.tables, syscat.indexes and syscat.tablespaces; a sample query is sketched below.
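
    A rough sketch of the kind of query those catalog views allow (allocated pages times page size; FPAGES is only filled after RUNSTATS, and exact column availability depends on your DB2 release):

    select t.tabschema, t.tabname,
           (t.fpages * ts.pagesize) / 1024 / 1024 as allocated_mb
      from syscat.tables t
      join syscat.tablespaces ts
        on ts.tbspace = t.tbspace
     where t.type = 'T'
     order by allocated_mb desc;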
    Sven

  • DB Size and TAble size Estimation

    Hi all
    Please point me to a help link or spreadsheet for estimating DB size and table size.
    Regards

    Please point me to a help link or spreadsheet for estimating DB size and table size.
    What size are you looking for?
    1) Estimate of physical disk space used by an existing database schema?
    2) Estimate of how much physical disk space will be required for some arbitrary data in order to create a new database?
    Something else?
    You need to be clear about your requirements.
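
    For requirement 1 you can simply sum dba_segments per owner. For requirement 2, here is a hedged sketch using DBMS_SPACE.CREATE_TABLE_COST (10g and later; the tablespace name, row size and row count are made-up inputs you would replace with your own estimates):

    set serveroutput on
    declare
      l_used_bytes  number;
      l_alloc_bytes number;
    begin
      dbms_space.create_table_cost(
        tablespace_name => 'USERS',
        avg_row_size    => 120,       -- estimated average row length in bytes
        row_count       => 1000000,   -- expected number of rows
        pct_free        => 10,
        used_bytes      => l_used_bytes,
        alloc_bytes     => l_alloc_bytes);
      dbms_output.put_line('Estimated allocation: ' ||
                           round(l_alloc_bytes/1024/1024) || ' MB');
    end;
    /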

  • BI production  tablespace PDB#ODSD used 100% in DB2 8.0

    Hi Experts,
    In BI production, tablespace PDB#ODSD is 100% used, but SAPDATA1, 2, 3 and 4 show 8% free. How can I increase the size of tablespace PDB#ODSD? Data from R/3 to BI is not importing because this tablespace is full. I have deleted old PSA data and also run a reorganization of all tablespaces; automatic storage is enabled in DB2.
    How will DB2 create a container if the tablespace is full?
    Kindly help me.
    Regards,
    Abullais

    Hi,
    Ask the Basis team to increase the tablespace, and check whether there are any other problems with memory.
    Thanks
    Reddy

  • Estimated size of tables for DMU scan

    The DMU estimates the size of tables and uses this to determine how to parallelise the scans. I have a problem where the estimates are low, meaning that big tables (by bytes in dba_segments) are not parallelised and the scan takes longer than I think is necessary.
    Some of the tables have inaccurate statistics, but I cannot find any correlation between anything in dba_tables (blocks?) and the size shown in the DMU.
    I would like to know if I could help this by setting something using DBMS_STATS.SET_TABLE_STATS, or any other method.
    Thanks,
    Ben

    Apologies again for not being clear. I thought DMU was a well-known abbreviation for the Database Migration Assistant for Unicode and would be recognised on this forum; I've been so involved with it that I may have been unclear with my terms.
    I am running the Scan Wizard in the Database Migration Assistant for Unicode v1.1 (DMU). After I've completed the Object Selection, the Scan Details page (http://docs.oracle.com/cd/E26101_01/doc/doc.11/e26097/ch4scenarios.htm#BACEFAIG) shows a list of tables with a table size. The DMU then decides whether to split the scan of each table based on some function of that table size. I would like to know where this Table Size is calculated from. It is not from dba_segments, as I have a table that is 7 GB in dba_segments but shows as 0 B in the Table Size column. The option to split can be deselected where the DMU has chosen to split the scan into chunks, but not selected where it has not.
    If the Table Size is calculated from the table statistics, then I would like to know if it is possible to set this via dbms_stats.set_table_stats to improve the performance of the scans.
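
    For reference, setting the table statistics by hand would look like the sketch below (the schema, table and numbers are hypothetical, and whether the DMU actually reads these values for its size estimate is not confirmed):

    begin
      dbms_stats.set_table_stats(
        ownname => 'SCOTT',        -- hypothetical owner
        tabname => 'BIG_TABLE',    -- hypothetical table
        numrows => 50000000,
        numblks => 900000,         -- e.g. dba_segments bytes / db_block_size
        avgrlen => 150);
    end;
    /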
    Ben

  • How can I change the size of table control in table maintenance re-gen?

    Hello Experts,
    I have created a maintenance view and then generated the table maintenance dialog for it.
    The generation sets the size of the table control in the maintenance screen.
    I want to change the size (width) of the table control and then re-generate the table maintenance.
    But when re-generation occurs, the table control size is reset to its initial value.
    Why is this happening, and what can I do to solve this issue? Is there a user exit?
    I need the changed size of the table control to survive even if it is re-generated.
    Regards,
    R.Hanks

    Hello Ronny,
    Go to SM30 and enter the table name for which you have maintained the table maintenance generator.
    When the maintenance screen appears for your table, go to System -> Status -> Screen Program Name.
    Copy that program name from there.
    Open that module pool program in SE80; this is the program behind the SM30 screen that appears when you enter your table name in SM30.
    In SE80, open the layout of the module pool program you entered.
    The layout shows the table control (used by SM30) in which the entries are made.
    In change mode you can change its size, save it and activate the program.
    Now go to SM30 again and enter your table name; it will show the changed size of the table control used for the entries.
    Note: this changed size applies only to your table; the control will remain at its previous size for other tables' entries.
    Hope it helps you.
    Thanks Mansi

  • How to fix the size of table data in html

    I want to fix the size of table data in HTML, i.e. if I want only 50 characters in a <td> cell, then it should contain only 50 characters; after that the text should wrap to another line and contain the remaining characters. In other words, the <td> cell should not automatically adjust its size according to the data.

    You can't specify how many characters a td cell can have, but you can specify the pixel width of a td cell:
    <td width=50>
    As long as you have text wrapping on, the text will not force the box size over the 50 pixels.
    This won't limit it to only 50 characters though; you will probably have to use some JavaScript to cut the string down to size (unless you're using something like PHP or ASP, in which case that will do the trick too).

  • Can't copy large files to AirPort Disk Using Windows 7

    Hi Everyone,
    I'm posting because I'm unable to copy large files to an AirPort Disk using Windows 7.  I have the following setup.
    AirPort Extreme:
    - Firmware version: 7.6.1
    - Airport disks secured by password
    - No guest access allowed
    - Airport Disk Preferences:
      - Automatically discover airport disks is enabled
      - Show Airport Disks in the system tray is enabled
    External Drive:
    - Western Digital 500 GB MyBook External Drive connected to Airport Extreme thru USB
    - FAT32 formatted
    Computers:
    - 2 Toshiba Proteges, Both with 4GB RAM
    - Both have Windows 7 Home Premium with Service Pack 1, 64-bit version
    - (Sorry we have no MacBooks at home, just iPhones and iPods!)
    Airport Utility:
    - Version  5.6.1
    Airport Disk Preferences Utility:
    - Version  1.5.5.3
    A week ago, I was able to copy over large files (videos <4GB in size) to the drive.  Now, when I try to copy over these large files to the drive, the copy dialog box hangs.  I receive a message stating that Windows is calculating the size, but then it doesn't copy over.  Windows then gives me a message that it is unable to copy to the drive.  When I test the drive connection by copying a small text file over, it copies over quickly and seamlessly.  This problem occurs on both computers.
    I am able to do the following with the drive connection:
    - Read files
    - Change file names
    - Copy files from the drive to my computer
    - See the contents of drive
    I've tried all the combinations of the following with no success:
    - Turning off Windows Search
    - Downgrading the firmware
    - Reconfiguring wireless adapter
    - Rebooting computer
    - Rebooting wireless router
    - Rebooting external drive
    The drive seems to be working fine except for when I want to copy <2GB files over to the drive.  Does anyone have a solution that seems to work?
    Thanks!

    I'm happy to report that, after reformatting the drive to HFS+ (aka Mac OS Extended), there are no problems writing/copying large or small files to the AirDisk drive.  I'm also happy to report that I am copying to the drive at a rate of roughly 4MB/s.
    As I mentioned earlier, the only catch is that I am unable to plug the external drive directly into a Windows computer via the USB port as Windows is unable to read HFS+ formatted drives.  (I can, however, plug the drive into a Mac computer as they can read HFS+ formatted drives.  The only problem is that I don't own any Macs at this time!  LOL!)
    You're probably wondering how I formatted the external drive to HFS+ since I don't own any Macs.  (Windows only allows you to format drives in exFAT, FAT, NTFS and the like.)  That is a good question.  Basically, I downloaded an open source formatting tool called Gparted, which allows you to format any USB drive to any major file system.  Since Gparted is an open source tool, it is a little cumbersome to use. Once you figure out how to use it, it is a cinch to format the drive to HFS+.  (Of course, an easier method would be to borrow a friend's Mac, plug the drive into their Mac, and format it to HFS+ using their Mac.)
    For a step-by-step guide on how to solve this problem, see the steps below:
    1) Backup all data from external drive
        a) Fortunately, I have a 1TB ultraportable USB drive which has enough space to backup all data on the drive
    2) Format the external drive to HFS+ using one of the methods below
        a) Plug it into a Mac and format it
         b) Use Gparted (free) to format it
    3) Plug the drive back into the Airport Extreme
    4) Start copying files over to your networked drive!
    I hope this helps you! Cheers!

  • COLLECT: Which table is better to use - STANDARD or SORTED?

    Hello Performance gurus,
    I read this curious fact about COLLECT statement:
    In standard tables that are filled using COLLECT only, the entry is determined by a temporary hash administrator. The workload is independent of the number of entries in the table. The hash administrator is temporary and is generally invalidated when the table is accessed to be changed. If COLLECT statements are specified after an invalidation, a linear search of all table rows is performed. The workload for this search increases in a linear fashion in relation to the number of entries.
    In sorted tables, the entry is determined using a binary search. The workload has a logarithmic relationship to the number of entries in the table.
    In hashed tables, the entry is determined using the hash administrator of the table and is always independent of the number of table entries.
    So does this mean that if we're populating a table using COLLECT we should prefer STANDARD over SORTED (because of the hash administration)? Is there any performance overhead in setting up the temporary hash administration for STANDARD tables?
    Please enlighten me.
    BR,
    Suhas

    Actually I have just noticed I already had created a test program for this somewhere in the past.
    Here are the results:
    STANDARD            1.091.295
    SORTED              3.159.771
    HASHED                994.101
    If for the STANDARD table you somehow destroy the hash administration (in this case it was done in the beginning by APPENDing one record):
    STANDARD            2.255.905
    SORTED                 14.013
    HASHED                  8.022
    (This 2nd execution was with fewer rows than the 1st execution; otherwise the standard table would have taken too long.)
    So this does prove that standard tables can be faster than sorted tables in that special case, but again I would not rely on that.
    Rui Dantas

  • Tablespace level backup using data pump

    Hi,
    I'm using 10.2.0.4 on RHEL 4.
    I have a question: can we take a tablespace-level backup using Data Pump?
    But I don't want to use it for transportable tablespaces.
    Thanks.

    Yes, you can, but only for the tables in that tablespace.
    Use the TABLESPACES option to export a list of tablespaces; all the tables in those tablespaces will be exported.
    You must have the EXP_FULL_DATABASE role to use tablespace mode.
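
    For example, a hedged command-line sketch (DP_DIR is a directory object you would have to create and grant first; the file names are placeholders):

    expdp system DIRECTORY=dp_dir DUMPFILE=users_ts.dmp LOGFILE=users_ts.log TABLESPACES=users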
    Have a look at this,
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10825/dp_export.htm#i1007519
    Thanks
    Edited by: Cj on Dec 12, 2010 11:48 PM

  • To shrink the size of TEMP tablespace

    Dear all,
    There is a database with RAC; in OEM the TEMP tablespace usage has reached 99.9%. Now we want to shrink the size of the TEMP tablespace.
    How do we do that?
    Please help me.

    Temporary tablespaces usually show as full; however, this space is not actually in use, it is merely allocated. Oracle has decided that, for performance, it is better to allocate extents once than to allocate, deallocate and reallocate them, so temporary space is not 'released'.
    If you want to feel psychologically more comfortable with less allocated space, you can drop your tablespace (create an interim default temporary tablespace first) and recreate it; a sketch of that sequence is below.
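
    A minimal sketch of the drop-and-recreate approach (the file paths and sizes are placeholders; sessions still holding temp segments must release them before the drops complete):

    create temporary tablespace temp2
      tempfile 'C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP02.DBF' size 1g;
    alter database default temporary tablespace temp2;
    drop tablespace temp including contents and datafiles;
    create temporary tablespace temp
      tempfile 'C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP.DBF' size 1g;
    alter database default temporary tablespace temp;
    drop tablespace temp2 including contents and datafiles;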
    You can also rebuild the temporary datafiles, adding a new (smaller) tempfile and then dropping the old one:
    alter tablespace temp add tempfile 'C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP01.DBF' size 32m;
    SQL> select name from v$tempfile;
    NAME
    C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP.DBF
    C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP01.DBF
    SQL> alter database tempfile 'C:\ORACLE\ORACLEXE\ORADATA\XE\TEMP.DBF' drop including datafiles;
    Database altered.
