Monitoring Tablespaces for critical Total Size

Hi,
I know that from RZ20 and DBACOCKPIT you can monitor tablespace storage. Those alerts are for actual physical storage. My question is: is there a monitor/report that can warn you about tablespaces that are reaching their architectural limit?
E.g.:
In DB2 V8, the maximum tablespace size for a 16 KB page size is 256 GB (regular data). Is there a way of monitoring a tablespace to warn you that it is nearing this limit?
Kind Regards,
Chris Geladaris
SAP NetWeaver Basis

try https://www.sdn.sap.com/irj/sdn/db6?rid=/library/uuid/f03d5fb8-b619-2b10-c383-c6d56872829e
cheers,
-Sunil

Similar Messages

  • BRSPACE create new tablespace for compressed table

    Dear expert,
    I'm planning to create a new tablespace PSAPCOMP for compressed tables via brspace.
    The current total size of the tables that I want to compress is 1 TB.
    Now I have two questions about the brspace scripts:
    1. How big should PSAPCOMP be, let's say 500 GB?
    2. I'm confused about the datafile_dir value I should put (please refer to the attachment for the current datafile_dir design).
    Could you help to correct/improve the scripts as follows?
    scriptA : brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    repeat scriptB 20 times
    scriptB : brspace -f tsextend -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 -f1 4 -f2 4 -f3 4 -f4 4 -f5 4 --> extend 25G in one run
    Question: is it OK to assign PSAPCOMP only to "sapdata4", or should I specify "-f sapdata1 sapdata2 sapdata3 sapdata4" so the table data can be distributed among the different sapdata directories?
    Thank you!

    hi Kate,
    some of the questions depend on the "customer" decision.
    is it OK to assign the PSAPCOMP only to "sapdata4"?
    how much space is available? what kind of storage do they have? is it fully striped or not? is there "free" space on the other filesystems?
    1.How big size should the PSAPCOMP be , let's say 500G?
    as I explained to you already, it is expected that by applying all compressions you can save half of the space, but you have to test this, as it depends on the content of the tables and the selectivity of the indexes. Use the Oracle package to simulate the savings for the tables you are going to compress.
    do you want the scripts interactive (option -c) or not?
    The SAP Database Guide: Oracle has all possible options so you only have to check them
    brspace -f tscreate -t PSAPCOMP -s 5120 -a yes -i 100 -m 15360 -f sapdata4 -l 2 --> assign to sapdata4
    if you want to create a 500 GB tablespace, why do you limit the maximum size to 15360 MB?
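    As a sanity check of the sizing arithmetic in the scripts above (assuming, as the post states, that scriptA creates one 5120 MB datafile and each scriptB run extends by 25 GB):

```python
# Sizing arithmetic implied by the brspace calls above (values taken from
# the post; this is just the arithmetic, not a brspace wrapper).
initial_mb = 5120          # scriptA: one 5120 MB datafile
extend_gb_per_run = 25     # scriptB: "extend 25G in one run"
runs = 20

total_gb = initial_mb / 1024 + runs * extend_gb_per_run
print(total_gb)  # 505.0 -> just above the 500 GB target
```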

  • How to find total size of selected items in library

    My library contains much more than my 2GB Nano can hold. I frequently change the content of the Nano with songs and books. How can I keep track of the total size of items I'm selecting for the next Nano update to speed my update? Presently I can only guess what to unselect if the selections are of greater size than my Nano will hold.

    There is no direct way to accumulate total file size when selecting a subset of songs within the Library or a Playlist.
    The only way you can manage this is to create an empty Playlist that you can drag songs into. It will accumulate the total size of the songs as you deposit them into it (as described by StarDeb above).
    If you get to a size greater than the nano will hold, then remove one or more songs from that Playlist.
    There is no 'running-total' of only the selected items like Windows Explorer does. Sorry.

  • How do I calculate the total size of selected folders?

    Hello
    My question is:
    How do I calculate the total size of selected folders?
    Thank you

    It does show the aggregate size.
    Command-Option-I opens the Inspector window. (Or open the File menu and hold the Option key. Get Info turns into Show Inspector.) The Inspector is similar to the Get Info window but it shows aggregate info for multiple selections. It is also dynamic — if you change the selection while the Inspector is open, the info changes to reflect the current selection.
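    Finder aside, the same aggregate-size idea is easy to script; a small illustrative sketch (this is not how the Inspector works internally, just the equivalent computation):

```python
import os

def aggregate_size(paths):
    """Sum the sizes, in bytes, of the given files and folder trees."""
    total = 0
    for p in paths:
        if os.path.isfile(p):
            total += os.path.getsize(p)
        else:
            # Walk a folder and add up every file underneath it.
            for root, _dirs, files in os.walk(p):
                for name in files:
                    total += os.path.getsize(os.path.join(root, name))
    return total
```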

  • How to find total size of RMAN backup files?

    Hi there
    env: Oracle 10gR2, RHEL 64bit
    My client has a production database where rman backups are taken: Level-0 backup every Sunday and Level-1 Monday thru Saturday.
    I have very limited access to this production database because it is being managed by third party and they won't provide me my required info (not sure why). I do not have access to their rman repository. To connect to the database I have to login to an intermediate server and then login to the database server. I have no access to Enterprise Manager. So in short, my access is limited. I want to gather the information on total size of rman backup files - both for a Level-0 and Level-1 backups separately. I understand that this info can be retrieved from rman repository. Are there any data dictionary views/tables where I may get this info?
    Best regards

    Hi,
    Have you searched the forum? Check this: https://forums.oracle.com/thread/1097939
    HTH
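    If you can at least connect with SQL*Plus to the target database, the controlfile-backed V$ views carry backup-piece sizes without needing access to the RMAN recovery catalog. A hedged sketch (untested here, assuming the standard 10gR2 views V$BACKUP_SET and V$BACKUP_PIECE):

```sql
-- Approximate total backup size per incremental level (NULL = full),
-- summed over backup pieces recorded in the controlfile, last 7 days.
SELECT s.incremental_level,
       ROUND(SUM(p.bytes) / 1024 / 1024 / 1024, 2) AS total_gb
FROM   v$backup_set s,
       v$backup_piece p
WHERE  s.set_stamp = p.set_stamp
AND    s.set_count = p.set_count
AND    p.status = 'A'
AND    s.completion_time > SYSDATE - 7
GROUP  BY s.incremental_level
ORDER  BY s.incremental_level;
```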

  • Monitor Recommendations for Use with Captivate

    Anyone have any monitor recommendations for use with Captivate? Size and resolution?

    The formal Adobe recommendation is, as posted: 1024x768 is fine.
    But monitors are not that expensive, so for productivity and a better work environment I recommend at least one 22" widescreen with a resolution of 1600x1200 or more that also supports 1080 for video.
    You need the widescreen (or two monitors) so you can work with all the tools in one view.

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform).
    And tried to import this file into Oracle 10.2 on Windows XP.
    (Database configuration of Oracle 10g:
    User tablespace 2 GB
    Temp tablespace 30 MB
    Rollback segments of 15 MB each
    Undo tablespace of 200 MB
    SGA 160 MB
    PGA 16 MB)
    All the tables imported successfully except 3 tables, each having around 1 million rows.
    The error messages during import for these 3 tables are as follows:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size (7)
    The main point here is that none of the 3 tables has a LONG or timestamp column (only VARCHAR/NUMBER columns).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. COMMIT=Y INDEXES=N (in this case it does not import the complete tables).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import the data.
    But all efforts failed; I am still getting the same errors.
    Can some one help me on this issue ?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were made on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not support Data Pump. By the way, shouldn't EXP/IMP work anyway?

  • Monitor Capture for IPv6

    Trying to capture IPv6 BGP hello traffic with the monitor capture feature, without success.
    With the monitor capture for IPv6 traffic active and running, if I traceroute (IPv6) from this same router I do see the IPv6 traceroute traffic, but NEVER the IPv6 BGP hellos.
    NOTE:
    IPv6 traceroute traffic is not shown in the below output because I already cleared the V6BUFF buffer before running the show command.
    My setup:
    monitor capture buffer V6BUFF size 512 max-size 128 linear
    monitor capture point ipv6 cef V6PT mfr0.1 both
    monitor capture point associate V6PT V6BUFF
    monitor capture point start V6PT
    Troubleshooting
    After disassociating monitor capture point V4PT here are the results:
    1941-WAN3#sh mon cap buff all par
    Capture buffer V6BUFF (linear buffer)
    Buffer Size : 524288 bytes, Max Element Size : 128 bytes, Packets : 0
    Allow-nth-pak : 0, Duration : 0 (seconds), Max packets : 0, pps : 0
    Associated Capture Points:
    Name : V6PT, Status : Active
    Configuration:
    monitor capture buffer V6BUFF size 512 max-size 128 linear
    monitor capture point associate V6PT V6BUFF
    Capture buffer V4BUFF (linear buffer)
    Buffer Size : 524288 bytes, Max Element Size : 128 bytes, Packets : 125
    Allow-nth-pak : 0, Duration : 0 (seconds), Max packets : 0, pps : 0
    Associated Capture Points:
    Name : V4PT, Status : Inactive <--- I already disassociated this one
    Configuration:
    monitor capture buffer V4BUFF size 512 max-size 128 linear
    monitor capture point associate V4PT V4BUFF
    Regards
    Frank

    What was the issue and how did you solve it?
    -Deepak

  • Monitor recommendations for PS/CS5.5 print work

    Hopefully this won't be retreading old ground too much, but I'm looking for monitor recommendations for use with CS5.5 for print work, mainly in Photoshop and InDesign, on a new Mac Mini. I've got a Spyder 3 for calibration already, so that side of things is covered. The Apple Thunderbolt display is a bit out of range financially, plus I'm not sure about the glossy screen myself. Ideally I'd be looking to spend no more than £500, and size-wise about 24" would be ideal given the amount of desk space available to me. TIA for any/all help/recommendations!

    For starters, look here:
    http://www.tftcentral.co.uk/
    The degree of perfection depends on your final needs. A studio doing catalog work is going to need a monitor all the way up to top providers like Eizo. For my needs, where I output to an Epson 3800 and bigger, I find the Dell U2412 to be ideal, and at your price point maybe two will fit the budget.
    The 2412 does not cover the full Adobe RGB gamut, but it does cover sRGB IEC 61966-2-1:1999, the current preference for online color. I use it because correcting the monitor to match the print for b&w is paramount, and the best monitors for this use an LED backlight. The U2412 is admirable for this, especially at its price point.
    I also use a second monitor, but it is a smaller 4:3 aspect ratio. Its use is strictly for tools and such.
    The 2412 has a 16:10 aspect ratio, which is far better, IMO, for graphics work. Also (and maybe especially!) it is very close to the golden-mean ratio, giving it a wonderful presentation format. Check it out where you see 16:9 and 16:10 side by side.
    When I am working closely with adjustment layers I have enough room on my screen to pull the specific palette over next to the image while tweaking, a big help without losing the ability to fill the screen later if you wish. Simply move the palette back to its home on the other screen.
    Good Luck!

  • Reading 21576 bytes of data at offset 3 in a buffer of total size 114

    Logs claim it's a 'buffer overflow' - [2015-Jan-05 22:32:41] RDP (0): Exception caught: BufferOverflowException in file '../../gryps/misc/containers/flexbuffer.h' at line 421
    Specifics: 
    attempting to connect to virtual box running on localhost:55985
    OSX: 10.9.5
    RDP: 8.0.12
    VirtualBox: 4.3.20
    [2015-Jan-05 22:32:41] RDP (0): --- BEGIN INTERFACE LIST ---
    [2015-Jan-05 22:32:41] RDP (0): lo0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): lo0 af=30 (AF_INET6) addr=::1 netmask=ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
    [2015-Jan-05 22:32:41] RDP (0): lo0 af=2 (AF_INET) addr=127.0.0.1 netmask=255.0.0.0
    [2015-Jan-05 22:32:41] RDP (0): lo0 af=30 (AF_INET6) addr=fe80::1%lo0 netmask=ffff:ffff:ffff:ffff::
    [2015-Jan-05 22:32:41] RDP (0): gif0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): stf0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): en0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): en0 af=30 (AF_INET6) addr=fe80::2acf:e9ff:fe1a:cc0d%en0 netmask=ffff:ffff:ffff:ffff::
    [2015-Jan-05 22:32:41] RDP (0): en0 af=2 (AF_INET) addr=192.168.10.108 netmask=255.255.255.0
    [2015-Jan-05 22:32:41] RDP (0): en4 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): en5 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): bridge0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): p2p0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): vboxnet0 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): vboxnet1 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): vboxnet2 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): vboxnet3 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): vboxnet4 af=18 addr= netmask=
    [2015-Jan-05 22:32:41] RDP (0): --- END INTERFACE LIST ---
    [2015-Jan-05 22:32:41] RDP (0): correlation id: 10b4ea14-964f-83df-bf55-4ff43fb10000
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to '127.0.0.1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to '::1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to 'fe80:1::1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Exception caught: BufferOverflowException in file '../../gryps/misc/containers/flexbuffer.h' at line 421
    User Message : Reading 12112 bytes of data at offset 3 in a buffer of total size 82
    [2015-Jan-05 22:32:41] RDP (0): correlation id: 10b4ea14-964f-83df-bf55-4ff43fb10000
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to '127.0.0.1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to '::1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Resolved 'localhost' to 'fe80:1::1' using NameResolveMethod_DNS(1)
    [2015-Jan-05 22:32:41] RDP (0): Exception caught: BufferOverflowException in file '../../gryps/misc/containers/flexbuffer.h' at line 421
    User Message : Reading 21576 bytes of data at offset 3 in a buffer of total size 114
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolConnectingNetwork(1)
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolNegotiatingCredentials(2)
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolConnectingNetwork(1)
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolNegotiatingCredentials(2)
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolDisconnecting(7)
    [2015-Jan-05 22:32:41] RDP (0): Protocol state changed to: ProtocolDisconnected(8)
    [2015-Jan-05 22:32:41] RDP (0): ------ END ACTIVE CONNECTION ------
    PS: the editor for posting on this forum is absolutely the most terrible I have ever seen.

    Hi,
    Please let us know which OS you are trying to connect to, because if you are running Windows 8, and not Windows 8 Pro, then you won't be able to connect to your PC from any device; Windows 8 can't host an RDP session.
    Remote Desktop Client on Mac: FAQ
    http://technet.microsoft.com/en-in/library/dn473006.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Do I need multiple DATA/INDX tablespaces for a DB on a SAN?

    Is there any advantage to creating multiple DATA and INDEX tablespaces for databases that are on a SAN? Database size is probably going to be around 1TB per year.
    Our databases are normally smaller and perform well with a single DATA tablespace and single INDEX tablespace. I would like to see if we need to use a different approach with a larger db and a SAN in place.
    Thanks

    Justin
    As a follow-up to my previous post: yes, I can confirm that extent allocation is round-robin across all files in a tablespace. The test outline I gave mentioned rowid, but as I reconstructed the test that turned out to be irrelevant.
    Here are the results:
    SQL> drop tablespace bubba_ts
    2      INCLUDING CONTENTS AND DATAFILES;
    Tablespace dropped.
    SQL> create SMALLFILE tablespace bubba_ts
    2      datafile 'c:\oradata\edst\bubbatbs_01.dbf'
    3           size 1m
    4           autoextend off,
    5           'c:\oradata\edst\bubbatbs_02.dbf'
    6           size 1m
    7           autoextend off,
    8           'c:\oradata\edst\bubbatbs_03.dbf'
    9           size 1m
    10           autoextend off
    11      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;
    Tablespace created.
    SQL> drop user bubba;
    User dropped.
    SQL> create user bubba
    2      identified by bubbapw
    3      default tablespace bubba_ts
    4      temporary tablespace temp
    5      quota unlimited on bubba_ts;
    User created.
    SQL> grant create session,
    2      create table
    3 to bubba;
    Grant succeeded.
    SQL> create table bubba.rowid_test
    2      (
    3      key_col number,
    4      big_col1 char(2000),
    5      big_col2 char(2000),
    6      big_col3 char(2000)
    7      );
    Table created.
    SQL> BEGIN
    2      for i in 1..100 loop
    3      insert into bubba.rowid_test
    4      values (i,
    5           'xxxxx',
    6           'xxxxx',
    7           'xxxxx'
    8           );
    9      end loop;
    10 END;
    11 /
    PL/SQL procedure successfully completed.
    SQL> col file_name for a35
    SQL> select e.extent_id,
    2      e.file_id,
    3      f.file_name
    4 from dba_extents e,
    5      dba_data_files f
    6 where e.owner = 'BUBBA'
    7 and e.segment_name = 'ROWID_TEST'
    8 and e.file_id = f.file_id
    9 order by e.extent_id
    10 ;
    EXTENT_ID FILE_ID FILE_NAME
    0 10 C:\ORADATA\EDST\BUBBATBS_03.DBF
    1 8 C:\ORADATA\EDST\BUBBATBS_01.DBF
    2 9 C:\ORADATA\EDST\BUBBATBS_02.DBF
    3 10 C:\ORADATA\EDST\BUBBATBS_03.DBF
    4 8 C:\ORADATA\EDST\BUBBATBS_01.DBF
    5 9 C:\ORADATA\EDST\BUBBATBS_02.DBF
    6 10 C:\ORADATA\EDST\BUBBATBS_03.DBF
    7 8 C:\ORADATA\EDST\BUBBATBS_01.DBF
    8 9 C:\ORADATA\EDST\BUBBATBS_02.DBF
    9 10 C:\ORADATA\EDST\BUBBATBS_03.DBF
    10 8 C:\ORADATA\EDST\BUBBATBS_01.DBF
    11 9 C:\ORADATA\EDST\BUBBATBS_02.DBF
    12 10 C:\ORADATA\EDST\BUBBATBS_03.DBF
    13 8 C:\ORADATA\EDST\BUBBATBS_01.DBF

  • Recommendations for reducing output size

    Hello,
    My RoboHelp 10 merged project has three SSL outputs: Windows, Linux and Mac versions. Each image in the project therefore has three versions. I have 194 topics, not a huge project, IMO. The generated output for the Windows version is 30.8 MB, which is 1,008 files, about 500 of which are image files, 27 folders. Theoretically, this means that all three versions when published have a total size on disk in the area of 90+ MB. My issue is that I work on the project locally, generate locally, publish to a remote source controlled location and then the network guys copy the published output to a web server. They want me to find out if I can reduce the size of the project so it takes as little time as possible for them to copy it. The copy is the slowest part of the whole thing, including my publish.
    One thing I noticed after the last time I published is that every so often, RoboHelp seems to publish extraneous files/folders to the output that should not be there, as in when compared to a local generated version, they are not in the generated version (which clears the output folder each time). When I cleaned up those extra files after our last publish, the size on disk reduced from 45MB to 31MB. Should I just be watching out for the "extra" unwanted/unnecessary folders and deleting them or is there more I can do within my RoboHelp project to reduce the published size?
    I've been using RoboHelp since RH8 but just jumped right into it without any training or help, apart from what I've received here (thank you all!) so although my projects look good and perform well, any help would be appreciated! At this point, it is taking so long that the network guys want me to look into switching to another tool that one of them knows. I've invested a lot of time and effort into RH, not really interested in another learning curve.
    Thanks!
    Helen

    Hello again
    Hmmm, well, after reading your reply, I'm thinking that you shouldn't get TOO excited just yet. We need to clarify some things.
    For starters, you are saying you have different screen captures that you use for each of the outputs? If so, it would seem that maybe you DO actually need the different outputs. But before we can assume that to be true, we need to know more about the actual information being delivered.
    For example, what does each of the screen captures show?
    What I was thinking may have been overlooked with RoboHelp was that maybe you simply needed to present the SAME information to folks, but they might be using a Windows browser to view it, or a Linux browser to view it or a Mac browser to view it. And if that's the case, there isn't a need for specific images to each operating system.
    Cheers... Rick

  • Dropping tablespace for a partitioned table

    Hi all,
    I have a partitioned table and I want to drop the tablespace for a specific partition. What happens to the table if I drop one tablespace with the command: drop tablespace tbs including contents and datafiles;
    Do the indexes on this table become unusable?
    Regards

    No, you can not drop a tablespace which contains tables whose partitions are not completely contained in this tablespace.
    db9i :SQL> create tablespace users2 datafile '/u02/oradata/db9i/users201.dbf' size 10M;
    Tablespace created.
    db9i :SQL> CREATE TABLE sales_by_region (item# INTEGER, qty INTEGER,
      2  store_name VARCHAR(30), state_code VARCHAR(2),
      3  sale_date DATE)
      4  STORAGE(INITIAL 10K NEXT 20K) TABLESPACE test
      5  PARTITION BY LIST (state_code)
      6  (
      7  PARTITION region_east
      8  VALUES ('MA','NY','CT','NH','ME','MD','VA','PA','NJ')
      9  STORAGE (INITIAL 20K NEXT 40K PCTINCREASE 50)
    TABLESPACE users,
    10   11  PARTITION region_west
    12  VALUES ('CA','AZ','NM','OR','WA','UT','NV','CO')
    13  PCTFREE 25
    14  TABLESPACE users2,
    15  PARTITION region_unknown
    16  VALUES (DEFAULT)
    17  TABLESPACE test
    18  );
    Table created.
    db9i :SQL> insert into sales_by_region values (1, 100, 'store 1','NY',sysdate);
    1 row created.
    db9i :SQL> insert into sales_by_region values (2, 200, 'store 2','UT',sysdate);
    1 row created.
    db9i :SQL> insert into sales_by_region values (3, 300, 'store 3','ZZ',sysdate);
    1 row created.
    db9i :SQL> commit;
    Commit complete.
    db9i :SQL> select count(*) from  sales_by_region
      2  /
      COUNT(*)
             3
    --ensure all data went to the right partition
    db9i :SQL> alter table sales_by_region truncate PARTITION region_east;
    Table truncated.
    db9i :SQL>  select count(*) from  sales_by_region
      2  /
      COUNT(*)
             2
    db9i :SQL> alter table sales_by_region truncate PARTITION region_west;
    Table truncated.
    db9i :SQL> select count(*) from  sales_by_region
      2  /
      COUNT(*)
             1
    db9i :SQL> alter table sales_by_region truncate PARTITION region_unknown;
    Table truncated.
    db9i :SQL>  select count(*) from  sales_by_region
      2  /
      COUNT(*)
             0
    db9i :SQL> insert into sales_by_region values (1, 100, 'store 1','NY',sysdate);
    insert into sales_by_region values (2, 200, 'store 2','UT',sysdate);
    insert into sales_by_region values (3, 300, 'store 3','ZZ',sysdate);
    1 row created.
    db9i :SQL>
    1 row created.
    db9i :SQL>
    1 row created.
    db9i :SQL>
    db9i :SQL>
    db9i :SQL> commit;
    Commit complete.
    db9i :SQL>  select count(*) from  sales_by_region
      2  /
      COUNT(*)
             3
    --now drop one tablespace
    db9i :SQL> drop tablespace users2 including contents and datafiles
      2  /
    drop tablespace users2 including contents and datafiles
    ERROR at line 1:
    ORA-14404: partitioned table contains partitions in a different tablespace
    db9i :SQL> !oerr ora 14404
    14404, 00000, "partitioned table contains partitions in a different tablespace"
    // *Cause: An attempt was made to drop a tablespace which contains tables
    //         whose partitions are not completely contained in this tablespace
    // *Action: find tables with partitions which span the tablespace being
    //          dropped and some other tablespace(s). Drop these tables or move
    //          partitions to a different tablespace
    --move table partition from users2 to users
    db9i :SQL> alter table sales_by_region move partition region_west
      2  tablespace users;
    Table altered.
    --drop tablespace again
    db9i :SQL>  drop tablespace users2 including contents and datafiles
      2  /
    Tablespace dropped.

  • Method_opt = 'FOR ALL COLUMNS SIZE REPEAT'

    Hi all Gurus,
    We have a script to gather statistics (see below); it runs every day, and all the tables have the "MONITORING" option in a 9.2.0.5 database.
    My question concerns "method_opt =>'FOR ALL COLUMNS SIZE REPEAT',OPTIONS=>'GATHER EMPTY',estimate_percent =>5".
    So, for a new table with columns and indexes (with the "monitoring" option), will it create statistics or histogram statistics when the script runs for the first time on the table? And then, will it continue with/without histograms?
    begin
    DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
    DBMS_STATS.GATHER_DATABASE_STATS(DEGREE=>4,GRANULARITY=>'ALL',CASCADE=>TRUE,method_opt =>'FOR ALL COLUMNS SIZE REPEAT',OPTIONS=>'GATHER EMPTY',estimate_percent =>5);
    DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent =>5,OPTIONS=>'GATHER STALE',method_opt =>'FOR ALL COLUMNS SIZE REPEAT',degree => 4, cascade=>true,STATTAB=>'TABLA_ESTADISTICAS',STATID=>to_char(sysdate,'yymmdd'),STATOWN=>'OPER');
    end;
    Regards,

    I have taken the following explanation from the documentation:
    METHOD_OPT - The value controls column statistics collection and histogram creation. It accepts either of the following options, or both in combination:
    FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
    FOR COLUMNS [size clause] column|attribute [size_clause] [,column|attribute [size_clause]...]
    size_clause is defined as size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}
    column is defined as column := column_name | (extension)
    - integer : Number of histogram buckets. Must be in the range [1,254].
    - REPEAT : Collects histograms only on the columns that already have histograms.
    - AUTO : Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
    - SKEWONLY : Oracle determines the columns to collect histograms based on the data distribution of the columns.
    - column_name : name of a column
    - extension : can be either a column group in the format of (column_name, column_name [, ...]) or an expression
    The default is FOR ALL COLUMNS SIZE AUTO.
    GATHER EMPTY: Gathers statistics on objects which currently have no statistics. Return a list of objects found to have no statistics.
    Reference: http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_stats.htm
    Please go through the link, it will give you more clear picture on DBMS_STATS.
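    In other words, for a brand-new table, GATHER EMPTY with SIZE REPEAT gathers base column statistics but no histograms (there are none to repeat), and later REPEAT runs keep it that way. A toy model of that behaviour (plain Python, not Oracle code):

```python
def repeat_gather(columns_with_histograms, all_columns):
    """SIZE REPEAT: (re)build histograms only where one already exists."""
    return {col: (col in columns_with_histograms) for col in all_columns}

# New table: nothing has a histogram yet, so none get created...
first_run = repeat_gather(set(), ["col_a", "col_b"])
# ...and the next REPEAT run inherits that histogram-free state.
second_run = repeat_gather({c for c, h in first_run.items() if h},
                           ["col_a", "col_b"])
print(first_run == second_run)  # True
```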
    Regards,
    S.K.

  • Possible to increase the total size of a software raid set?

    Hi,
    I need to increase the size of a software RAID set which is internal to one of the Xserves - originally it had 2x 400 GB drives.
    I've swapped drives into the RAID, so they are both now 500 GB drives.
    diskutil info on the RAID set shows that it is still a 400 GB RAID volume, as expected.
    Question is: can I grow the RAID's total size to use the full capacity of the member volumes?
    I guess the alternative is to remove the drives from the RAID, enable RAID on one of them and then add the other drive as a member... but that would mean taking the RAID offline.
    The RAID sets are not my system disks.
    Any clues?
    TIA
    Campbell
    XServes   Mac OS X (10.4.5)   12 Macs & far too many PCs

    Thanks for your help.
    I rebuilt the array using enableRAID. It worked fine, with the RAID volume offline for approx 5 minutes - although I had to degrade the array (and run bare, with no mirror) twice in the process.
    In case anyone is interested, this is what the process was:
    1 Swap the larger drive (say, disk 2) into the RAID array and let the array rebuild
    2 Disable file services.
    3 removeFromRAID disk 2 (leave it mounted).
    4 Unmount the RAID set and eject the other drive (say, disk 1). Remove that drive - it's now the only original backup.
    5 enableRAID on disk 2
    Providing enableRAID is ok -
    6 Insert a fresh larger drive in place of the removed drive (disk 1)
    7 Unmount the new drive and addToRAID
    8 (Rebuild array). Providing the rebuild is ok -
    9 Start file services again.
    9 Start file services again.
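    For anyone scripting this, the steps above map onto the Tiger-era diskutil AppleRAID verbs (enableRAID, addToRAID, removeFromRAID). Below is a dry-run sketch of steps 3, 5 and 7 only; the device identifiers (disk1, disk2), the set name RaidVol, and the exact argument order are assumptions to verify against diskutil's usage text on your own system before running for real:

```shell
#!/bin/sh
# Dry-run sketch of the drive-swap steps above. Device identifiers
# (disk1, disk2) and the RAID set name "RaidVol" are placeholders --
# check them with `diskutil list` first; these commands are destructive.
set -e

DRY_RUN=1   # flip to 0 only after verifying the device identifiers

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Step 3: drop the smaller member out of the mirror (it stays mounted)
run diskutil removeFromRAID disk2 RaidVol
# Step 5: re-create a degraded, single-member mirror on the larger drive
run diskutil enableRAID mirror disk2
# Step 7: add the second new drive as a member; the array then rebuilds
run diskutil addToRAID member disk1 disk2
```

    With DRY_RUN=1 the script only prints the commands it would issue, which is a cheap way to sanity-check the device names before taking the array offline.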
    Data, permissions & ACLs were all intact. So now users can fill the rest of the raid up with more MP3s this week <sigh>
    Maybe not the best approach but it worked well for me.
    XServes   Mac OS X (10.4.5)   12 Macs & several hundred too many PCs
