Table sizes every week

Dear all,
I need to generate a report of the size in bytes of all tables in a schema, compare today's size with last week's size, and put the results in an Excel file. I have over 30,000 tables in that schema.
I thought of creating a new table with the necessary columns (table name, tablespace, owner, last week's size, this week's size, difference) and running a procedure every Monday to load data into it.
Please let me know if my approach is right, and guide me on this.
Thanks

Hi,
user9284645 wrote:
if i use the LAG/LEAD it will use the data of the same column (i.e. the size of the above table, not the previous week's size of the same table), please correct me if i am wrong.
If you only have one size column, then you will want to use the same column from the table to populate both the this_week_size and last_week_size columns of the output.
For example:
CREATE TABLE     table_size
(       table_name     VARCHAR2 (30)
,       count_date     DATE
,       num_rows       NUMBER (12)
);
INSERT INTO table_size (table_name, count_date, num_rows) VALUES ('DEPT', DATE '2011-09-26',  4);
INSERT INTO table_size (table_name, count_date, num_rows) VALUES ('EMP',  DATE '2011-09-26', 14);
INSERT INTO table_size (table_name, count_date, num_rows) VALUES ('DEPT', DATE '2011-10-03',  4);
INSERT INTO table_size (table_name, count_date, num_rows) VALUES ('EMP',  DATE '2011-10-03', 16);
SELECT    table_name
,         count_date
,         num_rows               AS this_week_size
,         LAG (num_rows) OVER ( PARTITION BY  table_name
                                       ORDER BY      count_date
                     )          AS last_week_size
FROM      table_size
ORDER BY  table_name
,            count_date
;
Output:
TABLE_NAME COUNT_DATE  THIS_WEEK_SIZE LAST_WEEK_SIZE
DEPT       26-Sep-2011              4
DEPT       03-Oct-2011              4              4
EMP        26-Sep-2011             14
EMP        03-Oct-2011             16             14
Try adding more data, for the same tables but different weeks. See how the results change to include the new data.
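To tie this back to the original request (a weekly snapshot of all table sizes plus a difference column), here is a minimal sketch, not from the thread: it assumes a snapshot table and a placeholder schema name of your own choosing, and uses DBA_SEGMENTS with the same LAG pattern as above.
CREATE TABLE schema_size_hist
(       owner            VARCHAR2 (30)
,       segment_name     VARCHAR2 (30)
,       tablespace_name  VARCHAR2 (30)
,       snap_date        DATE
,       bytes            NUMBER
);
-- Run every Monday (for example from a DBMS_SCHEDULER job) to capture current sizes.
INSERT INTO schema_size_hist (owner, segment_name, tablespace_name, snap_date, bytes)
SELECT  owner, segment_name, tablespace_name, TRUNC (SYSDATE), bytes
FROM    dba_segments
WHERE   owner = 'MY_SCHEMA'          -- placeholder schema name
AND     segment_type = 'TABLE';
COMMIT;
-- Week-over-week report with the difference column.
SELECT  owner
,       segment_name
,       snap_date
,       bytes                                                     AS this_week_size
,       LAG (bytes) OVER ( PARTITION BY owner, segment_name
                           ORDER BY     snap_date )               AS last_week_size
,       bytes - LAG (bytes) OVER ( PARTITION BY owner, segment_name
                                   ORDER BY     snap_date )       AS difference
FROM    schema_size_hist
ORDER BY owner, segment_name, snap_date;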

Similar Messages

  • Continuation: need data grouped for every week

SELECT * FROM aetnah_file_emp_cust_hist
WHERE pctl_employee_seqnum = 133774;
This query returns too many rows.
    Now my requirement is that I need to get the most recent contrib_amt from this table for every week based on date column CTL_INS_DTTM.
This column CTL_INS_DTTM stores the date on which a row was inserted into this table.
pctl_employee_seqnum  CTL_INS_DTTM (MM/DD/YYYY)  contrib_amt
133774                01/01/2009                 100
133774                01/02/2009                 200
133774                01/03/2009                 300
133774                01/04/2009                 400
133774                01/05/2009                 500
133774                01/06/2009
133774                01/07/2009                 700
133774                01/08/2009                 800
133774                01/10/2009                 900
133774                01/12/2009                 1000
133774                01/13/2009                 1100
133774                01/14/2009                 1200
I will need 52 columns in total (1 year = 52 weeks), with the most recent data for each week.
    Ex:
    Desired output:
    01/07/2009 01/14/2009 01/21/2009 01/28/2009
    700 1200 NULL 200
SELECT pctl_employee_seqnum, ctl_ins_dttm, contrib_amt
  FROM (
         SELECT pctl_employee_seqnum, ctl_ins_dttm, contrib_amt,
                ROW_NUMBER() OVER (PARTITION BY TO_CHAR(ctl_ins_dttm, 'WMONYYYY')
                                   ORDER BY ctl_ins_dttm DESC) rno
           FROM aetnah_file_emp_cust_hist
          WHERE pctl_employee_seqnum = 133774
       )
 WHERE rno = 1;
The code above does that, but I need to transpose the row-wise data into columns as shown in the desired output.
    Total # of columns:52
    Apologize for opening a new thread.....please help on this transpose issue
    Thank You All

    Hi,
    TO_CHAR (ctl_ins_dttm, 'WMONYYYY') will result in 59 or 60 groups per year, since all months (except February in common years) have (incomplete) 5th weeks.
    If you want 52 equal-sized groups, then TO_CHAR (ctl_ins_dttm, 'WWYYYY') will get you closer. (You'll still have an incomplete week 53.)
    To pivot those rows into one column, you can do something like:
    SELECT  MAX (CASE WHEN TO_CHAR (ctl_ins_dttm, 'WW') = '01' THEN contrib_amt END)   AS week_01
    ,       MAX (CASE WHEN TO_CHAR (ctl_ins_dttm, 'WW') = '02' THEN contrib_amt END)   AS week_02
    ,       MAX (CASE WHEN TO_CHAR (ctl_ins_dttm, 'WW') = '03' THEN contrib_amt END)   AS week_03
,       ...
If you want dates (like "01/07/2009") as the column headers, then you'll have to use dynamic SQL.
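Putting the two pieces together, here is a minimal sketch of the whole query (only the first three week columns are written out; the remaining ones follow the same pattern). The table and column names come from the thread, but this has not been run against the real data, so treat it as an outline:
SELECT  pctl_employee_seqnum
,       MAX (CASE WHEN wk = '01' THEN contrib_amt END)   AS week_01
,       MAX (CASE WHEN wk = '02' THEN contrib_amt END)   AS week_02
,       MAX (CASE WHEN wk = '03' THEN contrib_amt END)   AS week_03
--      ... repeat up to week_52 ...
FROM   (
         SELECT pctl_employee_seqnum
         ,      contrib_amt
         ,      TO_CHAR (ctl_ins_dttm, 'WW')   AS wk
         ,      ROW_NUMBER () OVER ( PARTITION BY TO_CHAR (ctl_ins_dttm, 'WWYYYY')
                                     ORDER BY     ctl_ins_dttm DESC )   AS rno
         FROM   aetnah_file_emp_cust_hist
         WHERE  pctl_employee_seqnum = 133774
       )
WHERE  rno = 1
GROUP BY pctl_employee_seqnum;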

I generally back up and restore my iPhone every week, and this time I backed it up and can see the backup time and file size. Now, when I try to restore my iPhone, I select the backup and it is taking forever to restore. Help me

I generally back up and restore my phone every week, and I don't have any problems with network issues.
Today at 00:46 on 1st June I backed up my iPhone and it was perfect.
When I try to restore it, it says it will take 15+ hours, and that estimate stays the same for almost 3 hours. It never finishes.
I even called the Apple help center, and they told me to set it up as a new device. When I asked what would happen to my messages, they said they won't stay and will be deleted, and all my apps would need to be downloaded again.
But earlier I didn't need to do any of that; it worked really fine.
Please help me. I have some important details in this backup.

    Once in a while the restore process seems to mess up.  What I don't understand is why you would perform a weekly restore.
    At any rate, if you must restore, perform a device reset (settings>general>reset) and then you will have the opportunity to restore from iCloud.  Restore a device whenever it no longer works correctly, not just as a matter of maintenance.  If it ain't broke, don't fix it.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
    But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
       992
I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the table flushing the start of the table out of the cache.
    Rgds,
    Gokul
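As an aside (not part of the original exchange), a quick way to sanity-check whether the KEEP pool is big enough for everything assigned to it is to compare db_keep_cache_size with the total size of the KEEP-pool segments; a minimal sketch using the standard dictionary view:
SELECT  owner
,       segment_name
,       ROUND (bytes / 1024 / 1024, 1)   AS size_mb
FROM    dba_segments
WHERE   buffer_pool = 'KEEP'
ORDER BY bytes DESC;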

  • Table size not reducing after delete

The table size in dba_segments is not reducing after we delete the data from the table. How can I regain the space after deleting the data from a table?
    Regards,
    Natesh

I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
Would you also please explain the difference between LOB and LONG, or point me to any link which explains it?
From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
    Justin
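As a concrete illustration of actually releasing the space (an addition here, not part of Justin's reply): on 10g and later, with the table in an ASSM tablespace, an online segment shrink is the usual approach. A minimal sketch with a placeholder table name; note that shrink requires row movement and is not supported for tables with LONG columns:
ALTER TABLE my_big_table ENABLE ROW MOVEMENT;
ALTER TABLE my_big_table SHRINK SPACE CASCADE;   -- CASCADE also shrinks dependent indexes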

  • TABLE SIZE NOT DECREASING AFTER DELETION. BLOCKS NOT BEING RE-USED

    Hi ,
    Problem:
    Table size before deletion: 40GB
    Total rows before deletion: over 200000
    Rows deleted=190000 rows
    Table size after deletion is more (as new data was inserted meanwhile).
    Purpose of table:
    This table is a sort of transaction table.
    Whenever an SR is raised by CSR, data gets inserted into this table and is removed when the status is cleared.
    So there is constant insertion and purging will happen on this table.
    We are using ASSM and tablespace is LOCAL.
    This Table has a LONG column also.
    Is this problem because of LONG column ?
    So here there are 2 problems.
    1) INSERTs are not using the space created by DELETE.
2) New INSERTs are taking much more space than expected.
Please let me have your suggestions.
    Thanks,

I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
Would you also please explain the difference between LOB and LONG, or point me to any link which explains it?
From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
    Justin

  • Index size keep growing while table size unchanged

    Hi Guys,
I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4MB per week) while the base table size has remained unchanged for years.
The base tables are working tables with DML activity and nearly the same number of records daily.
I've analysed the schema in the test environment.
Those indexes do not fulfil the criteria for rebuild as follows,
- deleted entries represent 20% or more of the current entries
- the index depth is more than 4 levels
May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
TIND                     196608
We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
TIND                     327680
Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
PL/SQL procedure successfully completed.
We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
PL/SQL procedure successfully completed.
The index size is still the same but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
TIND                     327680
Now the index did not get bigger because it could use the free blocks for the new rows.
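If you want to check the rebuild criteria Timmy mentioned (deleted entries and index depth), one option, added here as a side note, is ANALYZE ... VALIDATE STRUCTURE, which populates the session-level INDEX_STATS view; be aware that it locks the underlying table while it runs:
ANALYZE INDEX tind VALIDATE STRUCTURE;
SELECT name
,      height                                    -- index depth (root block = level 1)
,      lf_rows                                   -- current leaf entries
,      del_lf_rows                               -- deleted leaf entries
,      ROUND (100 * del_lf_rows / lf_rows, 1)    AS pct_deleted
FROM   index_stats;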

  • How to reduce table size after deleting data in table

In one of our environments, we have a 300 GB table which contains 50 columns. Some of the columns are large object columns. This table contains data for the past one year, and every month it grows by 40 GB. Due to this we have space issues. We would like to reduce the table size by keeping only the most recent two months of data. What are the possible ways to reduce the table size while keeping only 2 months of data? Database version 10.2.04 on RHEL 4.

kumar wrote:
Finally we dont have down time to do by exp/imp method.
You have two problems to address:
- How you get from where you are now to where you want to be
- Figuring out what you want to do when you get there so that you can stay there.
    Technically a simple strategy to "delete all data more than 64 days old" could be perfect - once you've got your table (and lob segments) down to the correct size for two months of data. If you've got the licencing and can use local indexing it might be even better to use (for example) daily partitioning by date.
    To GET to the 2-month data set you need to do something big and nasty - this will probably give you the choice between blocking access for a while and getting the job done relatively quickly (e.g. CTAS) or leaving the system run slowly for a relatively long time while generating huge amounts of redo. (e.g. delete 10 months of data, then shrink / compact). You also have a choice between using NO extra space to get the job done (shrink/compact) or doing something which effectively copies the last two months of data.
    Think about the side effects you're prepared to run with, then we can tell you which options might be most appropriate.
    Regards
    Jonathan Lewis
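For reference, a minimal sketch of the CTAS route Jonathan mentions (placeholder table and column names; it assumes you can take an outage to swap the tables and recreate indexes, constraints and grants):
CREATE TABLE big_table_new AS
SELECT  *
FROM    big_table
WHERE   created_date >= ADD_MONTHS (TRUNC (SYSDATE), -2);
-- Recreate indexes, constraints, triggers and grants on big_table_new, then swap:
RENAME big_table TO big_table_old;
RENAME big_table_new TO big_table;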

  • Problem with table size (initial extent)

    Hi,
    I have imported a table from my client's database, which shows the following size parameters as displayed from the user_segments table :-
    bytes : 33628160
    blocks : 4105
    extents : 1
    initial_extent : 33611776
    next_extent : 65536
    The number of rows in the table is 0 (zero). I am wondering how the table size could become so large, while other tables in the schema in the same tablespace have normal initial extent size.
I then created a tablespace with an initial and next extent of 64K each, and imported the data into that tablespace, after which the table size and the initial extent for the table remained 33611776. This is the problem with 4-5 other tables out of a total of 500 tables.
Of course, if I drop and recreate the table, there is no problem, and the initial extent size and the table size become 64K, the same as the tablespace.
    Any suggestions? I do not want to drop the tables and recreate them.
    Because of this problem, even an attempt to import a blank database is consuming 2 GB of hard disk space.
    Thanks in advance
    DSG

I don't think you can stop the extent from being allocated when you import the table.
Even if you try to let the table inherit storage parameters from the tablespace, it will still allocate as many 64K extents as it needs to get to the 33M size in the table's (imported) storage parameter. I have also seen that when trying to change storage during an import like that, you can look in dba_tables and see the table has an initial setting of 33M even though when you look in dba_segments you'll see that every extent allocated was in fact 64K. The dba_tables view is being populated directly from the import and will therefore report the wrong number.
Perhaps you can import and then CREATE TABLE AS ... to put the tables in a better storage setup. (Letting tables inherit from the tablespace is the best way to go; no fragmentation that way.) You might want to get the client to let you revamp the storage, since there's no good reason to have one huge extent like that.
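One commonly cited cause, added here as an assumption rather than something stated in the reply, is that the source export was taken with the exp default COMPRESS=Y, which rolls all allocated extents into a single large INITIAL extent in the generated DDL; exporting with COMPRESS=N avoids it. Alternatively, the segment can be rebuilt in place. A minimal sketch with placeholder names:
-- Rebuild the segment so it picks up small extents again.
-- MOVE locks the table and marks its indexes UNUSABLE, so rebuild them afterwards.
ALTER TABLE dsg_table MOVE STORAGE (INITIAL 64K NEXT 64K);
ALTER INDEX dsg_table_pk REBUILD;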

  • Lot Size "WB - Weekly Lot Size"

    Dear All,
    I have selected lot size "WB - Weekly Lot Size" for a material.
    The Planned week start day is Monday in planned calender.
If I run MRP for the above material, the requirements are not combined on Monday; they are created on Tuesday, Thursday and other days, including Monday.
Why is the system not creating the requirements correctly on every Monday?
Are there settings that need to be made to get this?
    Your valuable answer will be appreciated.
    Regards,
    Nagaraj S

    Please read SAP online help:
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/f4/7d282c44af11d182b40000e829fbfe/frameset.htm
    Please check your settings in SPRO:
    SPRO > Production > MRP > Planning > Lot-Size Calculation > Check Lot-Sizing Procedure
    ...here you can check "Scheduling" field (V439A-TERBV) of lot size "WB". And you can check also field V439A-LGTER.
    F1 help for the field:
    Scheduling for period lot sizes in short-term area
    This indicator defines the time in the period the system is to create the availability date or the start date for the lot for period lot-sizing procedures.
    The availability date is the date on which the material, including the goods receipt processing time, is available again. In the MRP list, the availability date is equal to the MRP date.
    In in-house production, the start date is the order start date, in external procurement the start date is the release date.
    The following procedures exist:
    Availability date equal to requirement date
    Availability date at period start
    Availability date at period end
    Start date at period start
    Planned order start date at period start and availability date at period end
    With this setting, the in-house processing time in the material master record is ignored.
    Note
    The following is valid in all cases:
If the start date lies in the past, the system switches to forward scheduling. The following example illustrates how the system proceeds:
The period start is Monday and the period end is Friday. The planned delivery time is 8 days. The indicator for determining the date is set to "availability date at period end".
Today is Monday, and a requirement is planned on Wednesday. The system has to schedule forwards. As the system works using date determination, today's date is not selected as the start date. Instead, the system determines the next period end and uses backward scheduling to calculate a start date that does not lie in the past. This means that the end date is planned for the Friday of next week.

  • Loading Organizational Plan every week

    Hi Experts-
    The client is trying to sync up the Organizational plan (Org units, positions and relationships) from external system. We currently have a plan but want to delete it and start syncing up with the external system.
    Question I have is-
    1. Is there a way to delete the existing plan (all org units, positions & relations) in one step?
    2. What happens to deleted object IDs? Are they gone from the system? Can I use the same object ID numbers again?
    3. We are planning to do a full load every week. That means we delete what we have and load full again. Is this a good approach? What are the issues with it?
    4. Is there a tcode to find out where (which workflows) the org units are being used?
    Thanks in advance.

    Hi,
    Yes, you may use transaction RE_RHRHDL00 (Delete DB records). You just need to specify the correct evaluation path (depends on which organizational object are being used in your plan), such as ORGCHART, or even you may create a custom evaluation path.
    Yes, after you delete organizational objects they are deleted from the Infotypes tables and you may use their IDs again.
In my opinion - NO! This is not such a wise approach. First, I assume that your OM and PA systems are integrated, and the deletion may cause a mess. Moreover, the OM system is integrated with many other components, such as Workflow, and this has to be considered as well. There are so many issues to think about, such as: what happens if the deletion of some objects fails, or what happens if not all objects are loaded back into the system.
Org. units may be included within the WF schema and as part of Responsibility Rules (transaction OOCU_RESP).
My advice to you is to consider a different alternative.
    Regards,
    Liran

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
    I’ve been using this well known work around for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
    Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
    Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always, and once again when I highlight the list and choose Format>Table>Convert Text to Table, nothing happens! I could highlight part of the list up to 999 items, leave the 4 new items unhighlighted, and it works. I pasted the list into a new doc, copied a few items from the middle of the list, and added them to the end of my new 999-item list to make it 1,003 items long (but different items), and it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, and nope, not working. Even restarted the iMac, no luck.
I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
Anyone else have this problem? It should be easy to test out. If you have a list of, say, 100 items, just copy and repeatedly paste it into a new document multiple times to get over 1,000 and see if you can select all and then convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
    Jerry

HT4098 How do I unsubscribe from the VIP subscription in the game StarMaker? Every week 66 rubles are deducted, even though I don't even use it, and I don't know how to unsubscribe. Please help.

How do I unsubscribe from the VIP subscription in the game StarMaker? Every week 66 rubles are deducted, even though I don't even use it, and I don't know how to unsubscribe. Please help.

    Contact the maker of the app. Apple has nothing to do with subscriptions to third party apps.

Entourage User Account Disables Itself Every Week or So

    Running Mail Service on Apple OS X server package 10.4.4 (problem also was on 10.4.3) we have two customers whose log in account becomes disabled every week or so, as though they had made too many erroneous attempts to log in. We reset their account, and they're good to connect for a few more days, then they contact us to say they can't log in, we check the workgroup manager and, sure enough, the log in permission has disabled itself.
    Anyone else seeing this? They are also Mac users, too. They thought they "needed" the features of Entourage. No other email customers using this mail server are complaining at all, although probably no others are using Microsoft Entourage.
    We're pretty suspicious this is an issue between Microsoft Entourage and the Open Directory. Occasionally we see one of these users logged in twice at the same time on the server connected users monitor screen.

    Here is the error I get, if it helps.

I don't understand why I need to buy a new charger cable for my iPhone 5 every week. Why can't you make a normal cable that doesn't stop working? I can't spend so much money buying new cables all the time.

These cables are so delicate that after one week of use they break down. I can't spend all my money on buying them. Moreover, last week mine broke on Saturday evening, so I was stuck until Tuesday without a phone.
I have to say that this company is so rude, and it took me like an hour to sign in and write this complaint, and I'm not even sure if it's the right place to write this sort of thing.
Apple's customer service is the worst customer service in the world.

I don't understand why you need to buy a new one every week. I've had my iPhone 5 for almost a year and never had a problem with the cable. In addition, your phone can't be more than a year old. Therefore, unless you're damaging them, your cable is under warranty and Apple will replace it. Perhaps you should have tried actually availing yourself of Apple's customer service before complaining about it.
    Best of luck.
