Segment size

Hi, I have a table with 45 lakh (4.5 million) records and a trigger that fires on update. I ran an update query that modified 25 lakh (2.5 million) rows, but I terminated the session while the update was still running. I then deleted whatever rows the trigger had written to its table, yet dba_segments still shows the trigger table at around 2 GB even though it now holds only 60,000 records.
What could the problem be? Should I use the ALTER TABLESPACE <tablespace_name> COALESCE; command?
Please guide me.
Thanks in advance.

The easiest way to reclaim space left behind by deleted rows is the MOVE clause.
Syntax: ALTER TABLE tab_name MOVE [options]
But keep in mind that you need enough free space in the tablespace the table moves into, and that any indexes on the table are marked UNUSABLE by the move and must be rebuilt afterwards.
For instance:
SQL> select bytes from dba_segments where segment_name = 'REDO_TEST';
BYTES
1048576
SQL> delete from redo_test;
SQL> commit;
SQL> select bytes from dba_segments where segment_name = 'REDO_TEST';
BYTES
1048576
SQL> alter table redo_test move;
SQL> select bytes from dba_segments where segment_name = 'REDO_TEST';
BYTES
65536

Similar Messages

  • Java NIO - TCP segment size abnormally low

    Hi !
    After noticing a weird behaviour on our Linux production server for code that works perfectly on my Windows dev box, I used tcpdump to sniff the packets that are actually sent to our clients.
    The code I use to write the data is as simple as:
    // using NIO - buffer is at most 135 bytes long
    channel.write(buffer);
    if (buffer.hasRemaining()) {
        // this never happens
    }
    When the buffer is 135 bytes long, this systematically results in two TCP segments being sent: one containing 127 bytes of data, the other containing 8 bytes.
    Our client is an embedded system which is poorly implemented and handles each TCP packet as a whole application message, which means that the remaining 8 bytes end up being ignored and the operation fails.
    I googled it a bit, but couldn't find any info about the possible culprit (buffer sizes and default maximum TCP segment sizes are of course way larger than 127 bytes!)
    Any ideas ?
    Thanks !

    NB the fragmentation could also be happening in any intermediate router.
    All I can suggest is that you set the TCP send buffer size to some multiple of the desired segment size, or just set it very large (say 64 KB - 1, i.e. 65535 bytes), so that you can reduce its effect on segmentation.
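A minimal Python sketch of that suggestion (the 65535 value follows the advice above; actual behaviour is OS-dependent, and Linux in particular typically doubles the requested value to leave room for bookkeeping):

```python
import socket

# Create a TCP socket and request a larger send buffer, as suggested
# above, to reduce the chance of a small message being split.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 64 KB - 1 send buffer.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65535)

# Read back what the kernel actually granted (it may round up or double it).
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(actual)
sock.close()
```

This only shows the API; whether it prevents the 127+8 split in the original post depends on the platform's TCP stack.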

  • "Limit Capture/Export File Segment Size To" doesn't work

    I set the limit to 4000 MB before I started capturing HD video, but it didn't work: several of my files are bigger than 4 GB. This is a problem since I use an online backup service that has a 5 GB maximum file size. Any suggestion to fix this problem is highly appreciated.

    I believe, although I am not 100% sure, that the "Limit Capture/Export File Segment Size To" does not apply to Log and Transfer, only Log and Capture.
    Since Log and Capture works with tape sources, when the limit is hit the tape can be stopped, pre-rolled, and started again to create another clip.
    In the case of Log and Transfer, it is ingesting and transcoding existing files; the clip lengths (and therefore size) are already determined by the source clip.
    If you are working with very lengthy clips, you may want to set in and out points to divide the clips into smaller segments.
    MtD

  • Segment size problem

    I use Apache HttpClient to upload XML files to a server over HTTPS. It works for some files, but for others I cannot even get a response. Using snoop to check the network, I found that in the failing cases one frame shows an "unreassembled packet" with an INCORRECT checksum; the size of the frame is 1514, the Total Length in the IP header is 1500, but at the beginning of the connection the handshake shows MSS=1460.
    Can someone explain the relationship between these numbers, and where I can configure or control the segment size when uploading, or is that impossible?

    Yes, 1514 is right, and the incorrect checksum is because I am running snoop right on my server: the checksum has not yet been calculated by the NIC, so it is always 0. I have now noticed the problem happens when a segment reaches 1460 bytes, and then that packet is "unreassembled". So I am wondering where the problem is: SSL, system configuration, or something else. By the way, my server is Solaris 10, and the peer, I am not sure, is IBM, but I think they may use a proxy server.
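The numbers in the capture fit together once the per-layer headers are counted; a quick sketch assuming standard Ethernet/IPv4/TCP headers with no options:

```python
# Relationship between the sizes seen in the snoop capture,
# assuming standard header sizes and no IP/TCP options.
MSS = 1460            # maximum TCP payload, negotiated at connect time
TCP_HEADER = 20
IP_HEADER = 20
ETHERNET_HEADER = 14

ip_total_length = MSS + TCP_HEADER + IP_HEADER    # the "1500" in the IP header
frame_size = ip_total_length + ETHERNET_HEADER    # the "1514" seen on the wire

print(ip_total_length, frame_size)  # 1500 1514
```

So a full-sized segment is exactly MSS bytes of payload; the 1500 and 1514 are just that payload plus headers, not anything misconfigured.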

  • Segment Size when Packaging

    For a while now, when packaging material for our intranet I
    use a custom setting that works best with our data transfer rates.
    I am getting ready to put some courseware on our outside server for
    other employees to access when they are offsite. My question is
    should I use 56 kbs Modem or DSL/Cable Modem? If I use the higher
    setting, how would it impact 56 kbs users? If I use lower settings,
    would 56 kbs users notice any difference? Would cable modem users
    see slower performance when packaged at lower settings?
    Anyone have some experience in testing these settings?
    Thanks!
    Steve

    > Anyone have some experience in testing these settings?
    I always go for the larger segment size, for a number of reasons, including:
    - The browser has to make fewer calls for segments - each request has a certain overhead: latency waiting for the request to be answered, etc. Take the extreme and try 1 KB segments and see how bad it can be.
    - Larger segments may cause what seem like longer delays on a single request, but they download more code, so there are fewer requests. The user perception can be that the course is more usable.
    - If you have any large graphics or other internal content, they won't be split up anyway, so many segments won't ever be as small as 16 KB.
    Steve
    EuroTAAC eLearning 2007
    http://www.eurotaac.com
    Adobe Community Expert: Authorware, Flash Mobile and Devices
    My blog -
    http://stevehoward.blogspot.com/
    Authorware tips -
    http://www.tomorrows-key.com
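The fewer-but-larger-segments argument can be made concrete with a rough model (the course size, per-request overhead, and link speed below are purely illustrative assumptions, not measurements):

```python
import math

def download_time(total_kb, segment_kb, per_request_overhead_s, throughput_kbps):
    """Rough model: each segment costs one request's overhead plus its share of transfer time."""
    requests = math.ceil(total_kb / segment_kb)
    transfer_s = total_kb * 8 / throughput_kbps  # raw transfer time is the same either way
    return requests * per_request_overhead_s + transfer_s

# Hypothetical 2 MB course over a 56 kbps modem, 0.2 s overhead per request
small_segments = download_time(2048, 16, 0.2, 56)    # 128 requests
large_segments = download_time(2048, 256, 0.2, 56)   # 8 requests
print(round(small_segments, 1), round(large_segments, 1))
```

Under these assumptions the small-segment package is slower purely from request overhead, which matches the advice above.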

  • Average segment size of iDoc?

    We want to integrate two systems using iDocs across a WAN. So far we know the iDoc volumes and average segment counts, for example 6000 SO iDocs/hour with 250 segments per iDoc. My understanding is that we also need to know the average iDoc segment size to perform the network sizing.
    I know the size varies from case to case. But is there a BEST PRACTICE value we can use?
    TIA
    James

    Hi Jinquin,
    >>I happen to find something interesting at http://msg-ibexi.wikispaces.com/message/view/home/10780864, "Maximun size of IDOC segment is 1000 bytes", is it true?
    Yes
    SDATA contains the segment data and it is 1,000 characters long; check here:
    http://help.sap.com/saphelp_47x200/helpdata/en/1a/0e3842539911d1898b0000e8322d00/content.htm
    Also see the last statement in the link
    http://help.sap.com/saphelp_nw04/Helpdata/EN/78/21740f51ce11d189570000e829fbbd/frameset.htm
    Regards
    suraj
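With the figures in this thread (6000 iDocs/hour, 250 segments each, and the 1000-byte upper bound per segment), a worst-case bandwidth estimate is simple arithmetic:

```python
# Worst-case iDoc traffic estimate using the figures from this thread.
idocs_per_hour = 6000
segments_per_idoc = 250
bytes_per_segment = 1000  # upper bound of the SDATA field

bytes_per_hour = idocs_per_hour * segments_per_idoc * bytes_per_segment
mbits_per_second = bytes_per_hour * 8 / 3600 / 1_000_000
print(round(mbits_per_second, 2))  # roughly 3.33 Mbit/s before protocol overhead
```

Real segments are usually shorter than the 1000-byte maximum, so this is an upper bound; add headroom for RFC/network protocol overhead on top.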

  • Why Argument$ segment size is increased

    Hello everyone,
    I imported a dump file of about 800 MB, and after the import completed my database size became more than 10 GB, with the ARGUMENT$ segment occupying more than 5 GB.
    So I would like to know:
    1. Why did it work this way?
    2. Since this is a SYSTEM tablespace segment, the SYSTEM tablespace has grown; is there any way to reduce the SYSTEM tablespace?
    The query result below is for your reference.
    Thanks in advance
    select owner,segment_name,segment_type ,bytes/(1024*1024) size_m from dba_segments where tablespace_name = 'SYSTEM' and bytes/(1024*1024) > 2 order by size_m desc;
    OWNER  SEGMENT_NAME  SEGMENT_TYPE  SIZE_M
    SYS    ARGUMENT$     TABLE           5431
    SYS    I_ARGUMENT1   INDEX           4366
    SYS    I_ARGUMENT2   INDEX           2453

    These threads on reducing the SYSTEM tablespace should help:
    Shrink/Reduce System Tablespace!
    System Tablespace has grown too large!
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:153612348067

  • DB Size is reduced (LOB segment size) after Migration from Win To Linux

    Hi Friends,
    We have migrated Oracle 11.2.0.1 Database (64 bit) from Windows (64 Bit) To Linux (64 Bit) using RMAN convert database at target.
    After the migration I can see that the LOB segment size is much smaller at the target, and so is the overall DB size.
    Is it because the conversion extracts only the used segments from the source datafiles, or am I losing some data?
    SQL> DECLARE
      2    db_ready BOOLEAN;
      3  BEGIN
      4    db_ready :=
      5         DBMS_TDB.CHECK_DB('Linux IA (64-bit)',DBMS_TDB.SKIP_READONLY);
      6  END;
      7  /
    PL/SQL procedure successfully completed.
    SQL> DECLARE
      2       external BOOLEAN;
      3  BEGIN
      4      /* value of external is ignored, but with SERVEROUTPUT set to ON
      5       * dbms_tdb.check_external displays report of external objects
      6       * on console */
      7      external := DBMS_TDB.CHECK_EXTERNAL;
      8  END;
      9  /
    PL/SQL procedure successfully completed.
    Regards,
    DB

    >Is it because the conversion extracts only the used segments from the source datafiles, or am I losing some data?
    I suspect that the source DB has many LOB rows deleted.

  • Raw format segment size

    Hi! In raw format, stream data is divided into many files called "segments". How can I configure the size of these files? I want to make them half as large.

    You can use the MaxFlushSize configuration in Server.xml (Root/Server/ResourceLimits/RecBuffer/MaxFlushSize) to tweak the size of the segments that get created. The minimum size is 32 KB.

  • Maximum data segment size for a process

    Hello
    We have an issue on our proxy server with the size of a process.
    The process grows in size and, when it reaches 4 GB, it stops with an error that it cannot allocate memory. We checked and there's plenty of swap left, and all the ulimit settings on Solaris 10 are set to 'unlimited'.
    Is there a way I can see, determine, or configure the maximum process size on 64-bit Solaris?
    thank you in advance
    Mario G.

    It sounds like you're running a 32-bit app. You need a 64-bit application for it to be able to address more than 4 GB of memory.
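The 4 GB ceiling follows directly from the 32-bit address space:

```python
# A 32-bit pointer can address at most 2**32 distinct bytes,
# which is exactly 4 GiB - the limit the process hit.
address_bits = 32
max_addressable = 2 ** address_bits  # bytes

print(max_addressable // 2**30)  # 4 (GiB)
```

In practice the usable heap is even smaller, since code, stack, and shared libraries share the same address space.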

  • How to estimate a rollback segment size for a big transaction?

    Oracle 8.1.5. Suppose I have the following statement:
    update table_a set col_a = rownum where col_b = 'codeA'
    The statement affects 900,000 rows. The type of col_a is FLOAT(126).

    You can size the rollback segment comparably to your data tablespace.
    E.g., if you have 10 GB of tablespace allocated for data,
    then roughly the same 10 GB, more or less, should be comfortable.
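A rougher per-row estimate is also possible; the per-row undo figure below is a hypothetical assumption for illustration only (actual undo per row depends on the old column value, row overhead, and indexes touched):

```python
# Back-of-envelope rollback/undo estimate for the update in the question.
rows = 900_000
undo_bytes_per_row = 100  # hypothetical: old FLOAT value plus undo record overhead

estimated_undo_mb = rows * undo_bytes_per_row / 1024 / 1024
print(round(estimated_undo_mb))  # ~86 MB of undo, under these assumptions
```

Even generous per-row assumptions land far below "size of the data tablespace", so the advice above is a safe upper bound rather than a tight estimate.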

  • LOB segment size is 2 times bigger than the real data

    here's an interesting test:
    1. I created a tablespace called "smallblock" with 2K blocksize
    2. I created a table with a CLOB type field and specified the smallblock tablespace as a storage for the LOB segment:
    SCOTT@andrkydb> create table t1 (i int, b clob) lob (b) store as
    t1_lob (chunk 2K disable storage in row tablespace smallblock);
    3. I insert data into the table, using a bit less than 2K of data for the clob type column:
    SCOTT@andrkydb> begin
    2 for i in 1..1000 loop
    3 insert into t1 values (mod(i,5), rpad('*',2000,'*'));
    4 end loop;
    5 end;
    6 /
    4. Now I can see that I have an average of 2000 bytes for each lob item:
    SCOTT@andrkydb> select avg(dbms_lob.getlength(b)) from t1;
    AVG(DBMS_LOB.GETLENGTH(B))
    2000
    and that all together they take up:
    SCOTT@andrkydb> select sum(dbms_lob.getlength(b)) from t1;
    SUM(DBMS_LOB.GETLENGTH(B))
    2000000
    But when I look at how much the LOB segment is actually taking, I get a result which is a total mystery to me:
    SCOTT@andrkydb> select bytes from dba_segments where segment_name = 'T1_LOB';
    BYTES
    5242880
    What am I missing? Why is the LOB segment ~2 times bigger than the data requires?
    I am on 10.2.0.3 EE, Solaris 5.10 sparc 64bit.
    Message was edited by:
    Andrei Kübar

    Thanks for the link, it is good to know such a thing is possible, although I don't really see how it can help me.
    But you know, you were right regarding the smaller data amounts. I tested with 1800 bytes of data and in this case it does fit just right.
    But this means there are 248 bytes wasted (from my point of view as a developer) per block! If there is such an overhead, then I must be able to estimate it when designing the data structures, and I don't see a single word in the docs about it.
    Moreover, if you use the NCLOB type, then only 990 bytes fit into a single 2K chunk. So the overhead can become really huge when you go up to gigabyte amounts...
    I have a LOB segment for an NCLOB field in a production database which is 5 GB large and contains only 2.2 GB of real data. There are no "deleted" rows in it; I know because I have rebuilt it. So this looks like a total waste of disk space... I must say, I'm quite disappointed with this.
    - Andrei
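The roughly 2x blow-up is consistent with each 2000-byte value needing two 2 KB chunks, since a chunk's usable space is the block size minus per-block overhead (the overhead figure below is an approximation for illustration; the exact value varies by version):

```python
import math

# Why 2000-byte CLOB values double the segment size with a 2K chunk.
block_size = 2048
block_overhead = 60                       # approximate per-block overhead (assumed)
usable_per_chunk = block_size - block_overhead

lob_bytes = 2000
chunks_needed = math.ceil(lob_bytes / usable_per_chunk)  # 2000 > 1988, so 2 chunks
rows = 1000

segment_bytes = rows * chunks_needed * block_size
print(chunks_needed, segment_bytes)  # 2 chunks/row -> 4,096,000 bytes before extent rounding
```

This also matches the poster's later observation that 1800 bytes fits in one chunk: 1800 is below the usable space per block, while 2000 is just above it. Extent allocation rounds the 4,096,000 up to the 5,242,880 reported by dba_segments.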

  • Avg_row_len Vs. Segment size

    Hi
    In our production DB (size 650 GB), I have been closely monitoring the space growth pattern for the past year.
    Last week I observed that in some tables:
    avg_row_len * num_rows = 50 MB
    while sum(bytes)/1024/1024 from user_segments where segment_name = '<table_name>' gives approximately 600 MB.
    Now my questions are :
    1) What does this mean ?
    2) Is there lot of free space available ?
    3) Will this affect when going for full table scan ?
    4) How to make it smaller considering the
    actual size of the table ?
    Please give me a solid solution that can be practically implemented in a production DB (no academic, theoretical answers please).
    Regards
    S.D. RAVICHANDRAN
    Verizon.
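The gap described in the question can be quantified directly; with those figures, most of the allocated space holds no row data:

```python
# How much of the segment is empty, given the question's numbers.
row_data_mb = 50   # avg_row_len * num_rows
segment_mb = 600   # sum(bytes) from user_segments

unused_mb = segment_mb - row_data_mb
unused_pct = unused_mb / segment_mb * 100
print(unused_mb, round(unused_pct))  # 550 MB unused, ~92% of the segment
```

Space below the high-water mark is still scanned by full table scans, which is why shrinking the segment (e.g. via MOVE) helps scan performance, not just disk usage.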

    Hi Guys
    I got answer after raising SR.
    Oracle advised using the MOVE command to optimize the space.
    The bottom line is that we should have enough free space: the size of the
    table plus 10% is sufficient.
    Hope this will help other DBAs.
    S.D. RAVICHANDRAN
    Verizon.

  • Truncate log and device/segment size

    If I have an ASE database whose log is never truncated, the log becomes very large. I do a backup every day, but the backup does not reduce the size of the log.
    If I truncate the log today, will the space on the log device be released? Is it possible to reduce the size of the log device on a live production system?
    Also, if the log is never truncated, does it impact ASE performance?

    Hi Kent,
    Refer to the discussion in thread http://scn.sap.com/thread/3467480
    Hope this helps.
    Regards,
    Deepak Kori

  • Collect segment size history in 10g

    Hi,
    In order to know how much each segment and tablespace grows each month,
    I have a job that queries dba_segments and dba_data_files and saves the results in database tables.
    Does 10g collect this information automatically, or should I continue collecting these statistics myself?
    Thanks.

    You can try to use the DBMS_SPACE package. For example:
    SQL> column timepoint format a30
    SQL> select * from
      2  table(dbms_space.OBJECT_GROWTH_TREND(
      3          'SYS', 'OBJ$', 'TABLE', null, to_timestamp('14-JUL-2008','DD-MON-YYYY'), null, numtodsinterval(30, 'DAY') ));
    TIMEPOINT                      SPACE_USAGE SPACE_ALLOC QUALITY
    14-JUL-08 12.00.00.000000 AM       1318466     2097152 INTERPOLATED
    13-AUG-08 12.00.00.000000 AM       1318466     2097152 GOOD
