Regarding table size

Hi,
I have a table whose script given below :
CREATE TABLE UII_SERVICE_INSTANCE (
SERVICE_INSTANCE_ID VARCHAR2(30 BYTE) NOT NULL,
VERSION_NO VARCHAR2(10 BYTE),
STATUS VARCHAR2(20 BYTE),
NW_INSTANCE_ID VARCHAR2(25 BYTE),
CUSTOMER_ID VARCHAR2(25 BYTE),
DIRECTORY_NO VARCHAR2(20 BYTE),
INSTANCE_NAME VARCHAR2(80 BYTE),
PRODUCT_CODE VARCHAR2(255 BYTE),
INSTALL_DATE DATE,
CREATED_BY VARCHAR2(10 BYTE) NOT NULL,
CEASE_DATE DATE,
ODS_CREATE_DATE DATE NOT NULL,
MODIFY_DATE DATE,
LAST_UPDATED_BY VARCHAR2(10 BYTE) NOT NULL,
SHIP_DATE DATE,
ODS_LAST_UPDATE_DATE DATE NOT NULL,
WARRANTY_DATE DATE,
WATERMARK NUMBER(38) NOT NULL,
A_END_LOCATION VARCHAR2(28 BYTE),
B_END_LOCATION VARCHAR2(28 BYTE),
A_END_EXCHANGE_ID VARCHAR2(11 BYTE),
B_END_EXCHANGE_ID VARCHAR2(11 BYTE),
BIT_RATE VARCHAR2(10 BYTE),
MAINTENANCE_CLASS VARCHAR2(17 BYTE),
PRODUCT_TECHNOLOGY VARCHAR2(20 BYTE),
WARRANTY_STATUS VARCHAR2(30 BYTE),
SERVICE_TYPE VARCHAR2(6 BYTE),
SPACE_SITE_PART_KEY NUMBER(10),
SITE_ID VARCHAR2(80 BYTE)
)
TABLESPACE APPL_DATA
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
NOMONITORING;
CREATE INDEX UII_I_SERVICE_INSTANCE_FK1 ON UII_SERVICE_INSTANCE
(NW_INSTANCE_ID)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
CREATE INDEX UII_I_SERVICE_INSTANCE_FK2 ON UII_SERVICE_INSTANCE
(CUSTOMER_ID)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
CREATE INDEX UII_I_SERVICE_INSTANCE_FK3 ON UII_SERVICE_INSTANCE
(B_END_EXCHANGE_ID)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
CREATE INDEX UII_I_SERVICE_INSTANCE_FK4 ON UII_SERVICE_INSTANCE
(A_END_EXCHANGE_ID)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
CREATE INDEX UII_I_SERVICE_INSTANCE_FK5 ON UII_SERVICE_INSTANCE
(SITE_ID)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
CREATE UNIQUE INDEX UII_SERVICE_INSTANCE_PK ON UII_SERVICE_INSTANCE
(SERVICE_INSTANCE_ID, VERSION_NO)
NOLOGGING
TABLESPACE APPL_DATA
NOPARALLEL;
ALTER TABLE UII_SERVICE_INSTANCE ADD (
CONSTRAINT UII_SERVICE_INSTANCE_PK
PRIMARY KEY
(SERVICE_INSTANCE_ID, VERSION_NO)
USING INDEX
TABLESPACE APPL_DATA);
Now I have issued the following SQL statements:
1)TRUNCATE TABLE UII_ODS_OWNER_DEV_21C.UII_SERVICE_INSTANCE DROP STORAGE;
2)ANALYZE TABLE UII_ODS_OWNER_DEV_21C.UII_SERVICE_INSTANCE estimate STATISTICS;
3)analyze table UII_ODS_OWNER_DEV_21C.UII_SERVICE_INSTANCE compute statistics for table for all indexed columns for all indexes;
But the size of the table in DBA_SEGMENTS still shows around 2.8 GB.
SELECT SEGMENT_NAME , SUM(BYTES)/(1024*1024)
FROM DBA_SEGMENTS
WHERE OWNER='UII_ODS_OWNER_DEV_21C'
GROUP BY SEGMENT_NAME
ORDER BY 2 DESC;
SEGMENT_NAME SUM(BYTES)/(1024*1024)
UII_SERVICE_INSTANCE 2816
Can anybody please explain this behaviour?
Thanks in advance,
Koushik Chandra
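For anyone hitting the same symptom: a quick way to cross-check what DBA_SEGMENTS reports against the blocks the segment actually has allocated and unused is DBMS_SPACE.UNUSED_SPACE. This is a sketch, not from the original thread; the owner and table names are the ones used above.

```sql
-- Sketch: compare allocated blocks with unused blocks above the high-water mark.
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks    NUMBER;
  l_total_bytes     NUMBER;
  l_unused_blocks   NUMBER;
  l_unused_bytes    NUMBER;
  l_lu_ext_file_id  NUMBER;
  l_lu_ext_block_id NUMBER;
  l_last_used_block NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'UII_ODS_OWNER_DEV_21C',
    segment_name              => 'UII_SERVICE_INSTANCE',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_lu_ext_file_id,
    last_used_extent_block_id => l_lu_ext_block_id,
    last_used_block           => l_last_used_block);
  DBMS_OUTPUT.PUT_LINE('Allocated blocks: ' || l_total_blocks);
  DBMS_OUTPUT.PUT_LINE('Unused blocks   : ' || l_unused_blocks);
END;
/
```

If DBA_SEGMENTS reports gigabytes but nearly all blocks come back unused, the dictionary entry rather than the data is the likely culprit.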

Hi,
Thanks for your reply.
My database version is also 9.2.0.6,
and here is the output of the SQL:
SELECT initial_extent, next_extent, min_extents, extent_management, allocation_type
FROM dba_tablespaces
WHERE tablespace_name = 'APPL_DATA';
INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS EXTENT_MANAGEMENT ALLOCATION_TYPE
         65536                       1 LOCAL             SYSTEM
As I was not aware of the bug, I had already executed some SQL statements.
First I issued the query below to find the object ID of the table:
select * from dba_objects WHERE OWNER='UII_ODS_OWNER_DEV_21C' and object_name
like 'UII_SERVICE_INSTANCE%';
Then I issued the following statement to find the TS#:
select * from sys.sys_objects where object_id = 58585;
Then I issued the following SQL to find the records for that TS#:
select * from sys.seg$ where ts#=9;
Then I executed the following block:
begin
update sys.seg$ set blocks=8 where ts#=9 and blocks > 8;
commit;
end;
After this update, DBA_SEGMENTS shows the size as 0.0625 MB.
But I want to know: will this update have a bad effect on my database, since I am directly updating the data dictionary?
Please provide your comments on this issue.
Thanks,
Koushik
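For the record, the supported route is to let DDL release the space rather than editing SYS.SEG$ by hand. A sketch of the usual alternatives (not from the original thread):

```sql
-- Release unallocated space above the high-water mark:
ALTER TABLE UII_ODS_OWNER_DEV_21C.UII_SERVICE_INSTANCE DEALLOCATE UNUSED;

-- Or re-truncate, explicitly returning the extents to the tablespace:
TRUNCATE TABLE UII_ODS_OWNER_DEV_21C.UII_SERVICE_INSTANCE DROP STORAGE;

-- Then refresh optimizer statistics with DBMS_STATS (preferred over ANALYZE):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'UII_ODS_OWNER_DEV_21C',
    tabname => 'UII_SERVICE_INSTANCE',
    cascade => TRUE);
END;
/
```

Direct updates to dictionary base tables are unsupported and can leave the dictionary inconsistent, which is why the DDL route is the safe one.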

Similar Messages

  • Few Queries regarding Message Size in XI

    Hi ,
    I have posted the same question before many times over but did not get a satisfactory reply.
    I have many file-to-IDoc scenarios.
    The file size is large; it is a CSV or a tab-delimited file.
    It can grow up to 100 MB.
    DEV has 4 GB of RAM.
    QA and PRODUCTION have 16 GB of RAM.
    There is an Intel dual-core processor on each box.
    If I want to send a complete 60-100 MB file, I run into "LOCK table overflow".
    We have increased the table size parameter on the receiving CRM system.
    The recordset parameter has been increased from 500 to 5000 for all interfaces.
    Since I am not able to send a complete file, I am breaking it into chunks and sending those, as I do not have much choice.
    For certain interfaces with a large number of fields I can send only 2000 records per file; for others, about 14000 records per file.
    In order to handle errors like "LOCK table overflow" and to automate the process, I have scheduled the report "RSARFCEX". Otherwise I would end up submitting one file at a time and waiting for it to be processed (i.e. for IDocs with status 64 to be created in the receiving system), which takes a long time.
    I see here on sdn people are able to send a large file in one go.
    Even if I try to split a file using XI - multi-mapping, extended interface determination, the works et cetera - it fails for a file of size 6 MB.
    That is even after increasing the number of work processes.
    I also tried to follow the XI tuning parameters, but there was no change in the status quo. Finally I am using an ABAP program in XI which splits the file (in a matter of a few minutes) so that the file adapter can pick it up.
    I am surprised by the XI system's performance. I am not using BPM. There is content conversion on the sender side.
    Would the experts on the forum please provide a solution.

    Hi Deepak,
    Did you check this weblog:
    /people/alessandro.guarneri/blog/2006/03/05/managing-bulky-flat-messages-with-sap-xi-tunneling-once-again--updated
    and this related thread
    XML file size
    regards
    krishna
    Message was edited by:
            Krishnamoorthy Ramakrishnan

  • Table size not reducing after delete

    The table size in dba_segments is not reducing after we delete the data from the table. How can I regain the space after deleting the data from a table?
    Regards,
    Natesh

    > I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.

    Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
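    When you genuinely do need to return space to the tablespace after large deletes, the usual options are an online shrink (10g and later, ASSM tablespaces) or a segment rebuild. A sketch; MY_TABLE and MY_TABLE_PK are placeholder names:

```sql
-- 10g+ online shrink: compacts rows and lowers the high-water mark.
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE CASCADE;

-- Pre-10g (or non-ASSM) alternative: rebuild the segment.
ALTER TABLE my_table MOVE;          -- note: this invalidates the indexes
ALTER INDEX my_table_pk REBUILD;    -- rebuild each index afterwards
```

    SHRINK SPACE works online but locks briefly while moving the HWM; MOVE needs an outage window plus index rebuilds.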
    > Would you also please explain the difference between LOB and LONG, or point me to a link which explains it?

    From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
    Justin

  • Enqueue Replication Server - Lock Table Size

    Note : I think I had posted it wrongly under ABAP Development, hence request moderator to kindly delete this post. Thanks
    Dear Experts,
    If Enqueue Replication server is configured, can you tell me how to check the Lock Table size value, which we set using profile parameter enque/table_size.
    If enque server is configured in the same host as CI, it can be checked using
    ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
    As it is a Standalone Enqueue Server, I don't know where to check this value.
    Thanking you in anticipation.
    Best Regards
    L Raghunahth

    Hi
    Raghunath
    Check the following links
    http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
    Regards
    Bhaskar

  • Index size keep growing while table size unchanged

    Hi Guys,
    I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has stayed unchanged for years.
    The base tables are working tables with DML activity and nearly the same number of records daily.
    I've analyzed the schema in the test environment.
    Those indexes do not fulfil the criteria for rebuild, namely:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680

    Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.

    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.

    The index size is still the same but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680

    Now the index did not get bigger because it could use the free blocks for the new rows.

  • Why Index size is bigger than table size?

    Dear All,
    I found that the total size of all tables in my database is around 30 TB, while the total index size is 60 TB. This is a data warehousing environment.
    Why do the index size and table size differ? Why is the index size bigger than the table size?
    How can I manage the size?
    Please give me clear explanation and required information on the above.
    Regards
    Suresh

    There are many reasons why the total space allocated to indexes could be larger than the total space allocated to tables. Sometimes it's a mark of good design; sometimes it indicates a problem. In your position your first move is to spend as little time as possible deciding whether your high-level summary is indicative of a problem, so you need to look at a little more detail.
    As someone else pointed out - are you looking at the sizes because you are running out of space, or because you have a perceived performance problem? If neither, then your question is one of curiosity.
    If it's about performance then you should be looking for code (either through statspack/AWR or sql_trace) that is performing badly and use the analysis of that code to help you identify suspect indexes.
    If it's about space, then you need to do some simple investigations aimed at finding a few indexes that can be "shrunk" or dropped. Pointers for this are:
    select
            table_owner, table_name, count(*)
    from
            dba_indexes
    group by
            table_owner, table_name
    having
            count(*) > 2   -- adjust to keep the output short
    order by
            count(*) desc;

    This tells you which tables have the most indexes - check the sizes of the tables and indexes and then check the index definitions for the larger tables with lots of indexes.
    Second quick check - join dba_tables to dba_indexes by table_name, and report the table blocks and index leaf blocks in descending order of leaf block count. Look for indexes which are very big, and also bigger than their underlying tables. There are special cases (and bugs) that can cause indexes to be much bigger than they need to be ... this report may identify a couple of anomalies that could benefit from an emergency fix followed (possibly) by a strategic fix.
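    The second quick check described above might look something like this (a sketch; the statistics-based block counts assume reasonably fresh stats in the dictionary views):

```sql
-- Sketch: compare table blocks with index leaf blocks, biggest indexes first.
SELECT  i.table_owner,
        i.table_name,
        t.blocks      AS table_blocks,
        i.index_name,
        i.leaf_blocks
FROM    dba_indexes i
JOIN    dba_tables  t
ON      t.owner      = i.table_owner
AND     t.table_name = i.table_name
ORDER BY
        i.leaf_blocks DESC;
```

    Indexes whose leaf_blocks exceed their table's blocks are the ones worth a closer look.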
    Regards
    Jonathan Lewis

  • Index size greater than Table Size

    Hi all,
    We are running BI7.0 in our environment.
    One of the tables' index size is much greater than the table itself. The details are listed below:
    Table Name: RSBERRORLOG
    Total Table Size: 141,795,392  KB
    Total Index Size: 299,300,576 KB
    Index:
    F5: Index Size / Allocated Size: 50%
    Is there any reason the index should grow larger than the table? If so, would reorganizing the index help, and can this be controlled?
    Please let me know, as I am not very familiar with the DB side.
    Thanks and Regards,
    Raghavan

    Hi Hari
    It's basically a degenerated index. You can follow the steps below:
    1. Delete some entries from RSBERRORLOG.
    BI database growing at 1 Gb per day while no data update on ECC
    2. Reorganize this table using BRSPACE; afterwards the size of the table will be much smaller. I do not remember whether this table has a LONG RAW field (in that case an export/import of the table would be required). ---Basis job
    3. Delete and recreate Index on this table
    You will gain lot of space.
    I assumed you are on Oracle.
    More information on reorganization is in the thread "TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS".
    Regards
    Anindya

  • How to reduce table size after deleting data in table

    In one of our environments, we have a 300 GB table with 50 columns; some of the columns are large object (LOB) columns. This table contains data for the past year and grows by 40 GB every month. Because of this we have space issues, and we would like to keep only the most recent two months of data. What are the possible ways to reduce the table size while keeping only two months of data? Database version is 10.2.0.4 on RHEL 4.

    kumar wrote:
    > Finally we don't have downtime to do it by the exp/imp method.

    You have two problems to address:
    - How you get from where you are now to where you want to be
    - Figuring out what you want to do when you get there so that you can stay there
    Technically a simple strategy to "delete all data more than 64 days old" could be perfect - once you've got your table (and LOB segments) down to the correct size for two months of data. If you've got the licensing and can use local indexing it might be even better to use (for example) daily partitioning by date.
    To GET to the 2-month data set you need to do something big and nasty - this will probably give you the choice between blocking access for a while and getting the job done relatively quickly (e.g. CTAS) or leaving the system run slowly for a relatively long time while generating huge amounts of redo. (e.g. delete 10 months of data, then shrink / compact). You also have a choice between using NO extra space to get the job done (shrink/compact) or doing something which effectively copies the last two months of data.
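    As a rough illustration of the CTAS route mentioned above (the table name and date column are hypothetical, and you would need an outage window):

```sql
-- Sketch: keep only the last 64 days of data via CREATE TABLE AS SELECT.
CREATE TABLE big_table_new
NOLOGGING
AS
SELECT *
FROM   big_table
WHERE  created_date >= TRUNC(SYSDATE) - 64;

-- Swap the tables, then recreate indexes, constraints, and grants:
DROP TABLE big_table;
RENAME big_table_new TO big_table;
```

    NOLOGGING keeps the redo volume down during the copy, which is exactly the trade-off against the slow delete-and-shrink approach described above.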
    Think about the side effects you're prepared to run with, then we can tell you which options might be most appropriate.
    Regards
    Jonathan Lewis

  • Lock table size change in instance profile RZ10

    I need your help. I changed the lock table size from 10000 to 17000 and then to 20000, but still have the same table size as before. I used RZ10 to change the parameter enque/table_size.
    The steps I followed are as in all the documents I can find:
    1. Change the parameter value.
    2. Save it (parameter and instance).
    3. Activate it.
    4. Restart the instance (I just left it for the offline backup to do this).
    Regarding the 4th step: is that enough? After the system came back I checked the parameter in RZ11 and the current value is still 10000 (owner entries and granule still 12557, as before).
    Am I missing something?
    vinaka
    epeli

    Hi,
    it COULD be that the offline backup did indeed not restart the instance. From Oracle I know there is a so-called "reconnect status", where the SAP instance tries for a defined period of time to log on to the database again after the work processes lost their connection to the database processes. In this timeframe the instance is not considered restarted.
    If you check ST02, you can see the point in time when the instance was last actually restarted. If this date is before your offline backup, you need to do the restart manually.
    Best regards, Alexander

  • Identify the Cubes where dimension table size is exactly 100% of the fact table

    Hello,
    We want to identify the cubes where the dimension table size is exactly 100% of the fact table size.
    Is there any table or standard query which can give me this data?
    Regards,
    Shital

    use report (se38) SAP_INFOCUBE_DESIGNS
    M.

  • SQL Server log table sizes

    Our SQL Server 2005 database (IdM 7.1.1 with patch 13 recently applied, running on Win2003 and App Server 8.2) has grown to 100 GB. The repository was created with the provided create_waveset_tables.sqlserver script.
    In looking at the table sizes, the space hogs are:
    Data space:
        log       7.6G
        logattr   1.8G
        slogattr 10.3G
        syslog   38.3G
    Index space:
        log       4.3G
        logattr   4.3G
        slogattr 26.9G
        syslog    4.2G

    As far as usage goes, we have around 20K users; we do a nightly recon against AD and have 3 daily ActiveSync processes for 3 other attribute sources. So there is a lot of potential for heavy-duty logging to occur.
    We need to do something before we run out of disk space.
    Is the level of logging tunable somehow?
    If we do an lh export of "default" and "users", then wipe out the repo and reload the init, default, and users, what will we have lost besides the history of attribute updates?

    Hi,
    I just fired up my old 7.1 environment to have a look at the syslog and slogattr tables. They looked safe to delete, as I could not find any "magic" rows in there. So I shut down my appserver and issued
    truncate table syslog
    truncate table slogattr
    from my SQL tool. After restarting the appserver everything is still working nicely.
    The syslog and slogattr tables store technical information about errors - errors like "unable to connect to resource A" or "Active Sync against C is not properly configured". They do not store provisioning errors; those go straight to the log/logattr tables. So from my point of view it is OK to clean out syslog and slogattr once in a while.
    But there is one thing which I think is not ok - having so many errors in the first place. Before you truncate your syslog you should run a syslog report to identify some of the problems in the environment.
    Once they are identified and fixed, you shouldn't have many new entries in your syslog per day. There will always be a few - network hiccups and the like - but not as many as you seem to have today.
    Regards,
    Patrick

  • Restrictions in Oracle Server (table size, record count ...)

    Hello,
    Can somebody tell me whether there are any restrictions on table size, record count, or file size (apart from operating system restrictions) in Oracle 8.1.7 and 7.3.4?
    Or where can I find this information? I couldn't find anything in the generic documentation.
    Thank you in advance,
    Hubert Gilch
    SEP Logistik AG
    Ziegelstraße 2, D-83629 Weyarn
    Tel. +49 8020 905-214, Fax +49 8020 905-100
    EMail: [email protected]

    Hello,
    if you are executing a DBMS_AQ.DEQUEUE and then perform a rollback in your code the counter RETRY_COUNT will not go up by 1.
    You are only reversing your own AQ action. This counter will be used only internally to log unsuccessful dequeue actions.
    Kind regards,
    WoG

  • DPM DB table sizes - check

    DPM 2012 SP1 environment.
    I've noticed that our DPMDB size reached 40 GB (plus a 15 GB log), which seemed a bit high (a personal feeling, not based on any data), so I checked the table sizes, sorted them by space used, and found that the 2 largest tables are tbl_TE_TaskTrail (~18 GB) and tbl_RM_RecoverySource (14 GB).
    What exactly is contained in these tables, and are their sizes normal (i.e. proportional to the overall DB size)?

    Hi,
    The SQL job will run very quickly as it just triggers the DPM task.
    Let's get some verbose tracing and see if we can find where we're failing under the covers.
    1) Stop and disable the MSDPM service.
    2) Delete or move all the MSDPM*.errlog files
    3) Open regedit and add the following value to enable verbose logging for MSDPM
      Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager
       Name:       MSDPMTraceLogLevel 
       Type: REG_DWORD
       Value:     0x43e
    4) Re-Enable the MSDPM service.
    5) Run the below SQL Script to get the nightly maintenance job that runs at midnight.
    USE DPMDB        
    SELECT SCH.ScheduleId as "SQL JOB ID", SCH.JobDefinitionId,sch.ScheduleXml,JD.Xml
    FROM tbl_JM_JobDefinition JD
    JOIN tbl_SCH_ScheduleDefinition SCH
        ON JD.JobDefinitionId = SCH.JobDefinitionId
    WHERE JD.Type = '282faac6-e3cb-4015-8c6d-4276fcca11d4' -- summary manager
    AND JD.IsDeleted = 0
    AND SCH.IsDeleted = 0
    It will return the SQL Job ID - locate that job under the SQL Server Agent - Jobs.
    6) Run the SQL job manually by Right-clicking and select "Start job at step..."
    7) After 10 minutes, run the following from an administrative command prompt from where the new msdpm*.errlog is located.
       find /i "GarbageCollector.cs" msdpm*.errlog >GarbageCollector.cs.txt
    8) Open the GarbageCollector.cs.txt in Notepad and see if you can find the entry where dbo.tbl_TE_TaskTrail is being cleaned up. It should show the number of rows affected.
    Look for:  DELETE FROM dbo.tbl_TE_TaskTrail WHERE IsGCed = 1
    You can look at other entries and see if there are any errors in there that would prevent garbage collection of that table from occurring.
    9) Remove or rename the MSDPMTraceLogLevel from the registry and restart MSDPM to disable verbose logging.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Handling table size wrt page no

    Hi,
    In my application, I have a table with a visible size of 10 rows.
    When there are 27 records in the table, the 1st page shows records 1-10, the next page shows 11-20, and the last page shows 18-27.
    My requirement is to show records 21-27 on the last page. Moreover, if the user goes back from the last page to earlier pages, those should display 1-10 and 11-20.
    Is this feasible?
    Regards,
    Nikhil

    Hi,
    This is not possible if you set the number of rows at design time, but if you set it through code, you can configure it based on the size of the table.
    E.g. if the total number of records returned is 24, you can set the visible row count to 8, so that it displays 8 records per page across 3 pages.
    Hope this helps u.
    Regards,
    Poojith MV

  • DB Query to get table sizes and classification in OIM Schema

    My customer's OIM production DB size has gone up to 300 GB, and they want to know why - i.e. what kind of data accounts for such a large DB. Is there a way to find out, from the OIM schema, the sizes of each table (like ACT, USR etc.) and classify them into user data, config data, audit data, recon data, etc.?
    Any help is very much appreciated in this regard.
    Regards
    Vinay

    You can categorize tables using information from the link below:
    http://rajnishbhatia19.blogspot.in/2008/08/oim-tables-descriptions-9011.html
    You can count number of rows for tables using:
    select count(*) from tablename;
    Find the major tables whose size is to be calculated and determine the average row length (by adding the defined attribute lengths).
    Finally, calculate the table size using the query below:
    select TABLE_NAME, ROUND((AVG_ROW_LEN * NUM_ROWS / 1024), 2) SIZE_KB from USER_TABLES order by TABLE_NAME;
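    As a cross-check on the AVG_ROW_LEN estimate above, you can also read the actually allocated size per segment from DBA_SEGMENTS. A sketch; the schema name is a placeholder, and DBA view access is assumed:

```sql
-- Sketch: allocated size per table (and its LOB segments) from DBA_SEGMENTS.
SELECT segment_name,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'OIM_SCHEMA_OWNER'            -- placeholder schema name
AND    segment_type IN ('TABLE', 'LOBSEGMENT')
GROUP  BY segment_name
ORDER  BY size_mb DESC;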
    regards,
    GP
