Bridge table size on WGB

Hi
I've set up an old 1121 access point as a WGB on our unified (i.e. lightweight) wireless network. It works fine, but the VLAN that
the wireless side connects to has between 200 and 300 clients at any one time. The bridge table size seems to be fixed at 300 entries, and I'm concerned that at some point this may overflow. I've put this in the wireless config:
interface Dot11Radio0
 no bridge-group 1 source-learning
 bridge-group 1 block-unknown-source
to try to keep the size of the table down, but it seems to make no difference. Is my only option to move the WGB to a VLAN which has fewer clients?
Thanks
Max Caines
University of Wolverhampton

Hi, I fetched this for you:
"Total number of forwarding database elements in the system. The memory to hold bridge entries is allocated in blocks of memory sufficient to hold 300 individual entries. When the number of free entries falls below 25, another block of memory sufficient to hold another 300 entries is allocated. Thus, the total number of forwarding elements in the system is expanded dynamically, as needed, limited by the amount of free memory in the router."
Now, this documentation is for routers, but since both APs and routers run IOS code, the logic should still be the same.
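To make the documented policy concrete, here is a minimal sketch of the growth logic (Python, purely illustrative; the block size and low-water mark come from the excerpt above, and this is not actual IOS code):

# Illustrative sketch of the documented bridge-table growth policy:
# entries live in blocks of 300, and another block is allocated once
# fewer than 25 free entries remain. Not actual IOS code.
BLOCK_SIZE = 300
LOW_WATERMARK = 25

class BridgeTable:
    def __init__(self):
        self.capacity = BLOCK_SIZE    # first block allocated up front
        self.used = 0

    def learn(self, mac):
        # grow the table before free entries drop below the watermark
        if self.capacity - self.used < LOW_WATERMARK:
            self.capacity += BLOCK_SIZE   # limited only by free memory
        self.used += 1

table = BridgeTable()
for i in range(350):                      # simulate 350 wireless clients
    table.learn("02:00:00:00:%02x:%02x" % (i // 256, i % 256))
print(table.used, table.capacity)         # -> 350 600: a second block was added

If the AP really follows this router behaviour, the 300-entry "limit" is just the first block.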
So either:
1. Keep me posted on what happens when you do reach 300 entries in the current table, to see whether the AP allocates another 300 when the 25-free-entry threshold is reached,
or
2. Move the WGB to a less busy VLAN.
Thanks
Serge

Similar Messages

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
I’ve been using this well-known workaround for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
Tonight, I added 4 more items. Still the same number of pages, but now 1,003 items long.
Unable to Convert Text to Table! I tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always, and once again when I highlight the list & Format>Table>Convert Text to Table ..... nothing happens! I can highlight part of the list, up to 999 items, leave the 4 new items unhighlighted, and it works. I pasted the list into a new doc, copied a few items from the middle of the list, and added them to the end of my new 999-item list to make it 1,003 items long (but different items), and it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, and nope, not working. Even restarted the iMac, no luck.
I can get it to work with 999 or fewer items easily, as always, but not when I add even a single new item.
Anyone else have this problem? It should be easy to test out: if you have a list of, say, 100 items, just copy & repeatedly paste it into a new document multiple times to get over 1,000 & see if you can select all & then convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
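If you just need the list alphabetized and don't mind stepping outside Pages entirely, any plain-text sort works too; a minimal sketch in Python (the file names are placeholders; export the list as plain text, one item per line, first):

# Sort a plain-text list of any length, one item per line.
# "list.txt" / "list_sorted.txt" are placeholder names.
with open("list.txt", encoding="utf-8") as f:
    items = [line.rstrip("\n") for line in f]

items.sort(key=str.casefold)   # case-insensitive alphabetical order

with open("list_sorted.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(items) + "\n")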
    Jerry

  • 3-1674105521 Multiple Paths error while using Bridge Table

    https://support.us.oracle.com/oip/faces/secure/srm/srview/SRViewStandalone.jspx?sr=3-1674105521
    Customer Smiths Medical International Limited
    Description: Multiple Paths error while using Bridge Table
1. I have an urgent customer who encountered a design issue: the customer was trying to add 3 logical joins between SDI_GPOUP_MEMBERSHIP and these 3 tables (FACT_HOSPITAL_FINANCE_DTLS, FACT_HOSPITAL_BEDS_UTILZN and FACT_HOSPITAL_ATRIBUTES).
2. They found out that by adding these 3 joins, they ended up with a circular error.
    [nQSError: 15001] Could not load navigation space for subject area GXODS.
    [nQSError: 15009] Multiple paths exist to table DIM_SDI_CUSTOMER_DEMOGRAPHICS. Circular logical schemas are not supported.
In response to this circular error, the developer was able to bypass it using aliases, but this is not desired by the client.
3. They want to know how to avoid this error entirely without using alias tables, and would like a suggestion for resolving the circular join (Multiple Paths) error.
It would be appreciated if someone could give some pointers or suggestions, as the customer is on a tight deadline.
    Thanks
    Teik

The strange thing compared to your output is that I get an error when I have the table prefix in the query block:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE TMP1.A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "SYSADM"."TMP3" failed to load/unload and is being skipped due to error:
    ORA-38500: Unsupported operation: Oracle XML DB not present
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Dec 13 10:39:11 2013 elapsed 0 00:00:03
    And if I remove it, it works:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SYSADM"."TMP3"                             5.406 KB       1 out of 2 rows
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 13 10:36:50 2013 elapsed 0 00:00:01
    Nicolas.
PS: as you can see, I'm on 11.2.0.4; I do not have the 11.2.0.1 release that you seem to be using.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
We have a situation where one of our applications started performing badly last week.
After some analysis, it was found this was due to a data increase in a table that was stored in the KEEP pool.
After the data increase, the table size exceeded db_keep_cache_size.
I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data brought in from the table as needed.
But I ran some tests and found that this is not the case: if the table size exceeds db_keep_cache_size, the KEEP pool is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
Only with the 20M db_keep_cache_size do I see no physical reads.
Does this mean that if db_keep_cache_size < table size, there is no caching for that table?
Or am I missing something?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
Here is the test I ran:
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
       992
I think my inference is wrong; as you said, I am indeed seeing the effect of the tail end of the table flushing the start of the table.
    Rgds,
    Gokul
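The "tail end flushing the start of the table" effect is characteristic of LRU-style replacement under repeated full scans; a minimal sketch (Python, plain LRU for illustration only; Oracle's actual buffer-cache replacement is more sophisticated):

# Repeated sequential scans of a table larger than an LRU cache get
# (almost) no hits: each pass evicts exactly the blocks the next pass
# will need first. Illustration only; not Oracle's actual algorithm.
from collections import OrderedDict

def scan_hits(table_blocks, cache_blocks, passes=2):
    cache = OrderedDict()
    hits = 0
    for _ in range(passes):
        for block in range(table_blocks):
            if block in cache:
                hits += 1
                cache.move_to_end(block)      # mark most recently used
            else:
                if len(cache) >= cache_blocks:
                    cache.popitem(last=False) # evict least recently used
                cache[block] = True
    return hits

print(scan_hits(1940, 1200))   # 0    -> cache smaller than table: thrashing
print(scan_hits(1940, 2000))   # 1940 -> table fits: second pass all hits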

  • How to define an aggregation rule for a dimension based on bridge table?

    Hello,
I need a solution for aggregating data correctly when using a dimension based on a set of dimension tables containing a bridge table. Please find below a description of my business case and the OBIEE model which I’ve created thus far.
    Business Case
    The company involved wants to report on the number of support cases, the different types of actions that were taken and the people involved in those actions. One support case will undergo a number of actions (called ‘handelingen’) until it is closed. For each action at least one person is involved performing a specific role, but there can also be multiple persons involved with 1 action, each performing a different role for that action. This is the N : N part of the model.
    The problem that I face is visible in the two pictures below:
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/sample.png
As long as I don’t include anything from the Dimension Meelezer in my report, I get the correct number of handelingen (7). When I include the person (called ‘Meelezer’), the measure per action is multiplied by the number of persons/roles involved with that action.
When I changed the aggregation rule in the report column #Handelingen to ‘Server Complex Aggregate’, I do get the correct end total:
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/sample2.png
    I believe it should be possible to define in the repository a different aggregation rule for individual dimensions, but I’ve not been able to achieve this.
    Explained below is what I have created in my Physical and Business Model & Mapping layers:
    The Physical Model is built like this:
    (This is just a small part of a much larger physical model, but I’ve only included the most relevant tables)
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/PhysicalDiagram-1.png
    The Fact table (ALS Feit Zaakverloop) contains FK’s for the action (FK_HANDELING, joined to ALS Dim Handeling), the date the action took place (FK_DATUM_ZAAKVERLOOP, joined to ALS Dim Datum Zaakverloop) and the uniqe group of people involved (FK_MEELEZERS, joined to ALS Groep Meelezers) and a measure column (SUM_HANDELINGEN) populated with the value ‘1’ for each row.
    The Bridge table (ALS Brug Meelezer/Reden Meelezen) contains three FK’s: FK_GR_MEELEZERS (joined to ALS Groep Meelezers), FK_MEELEZER (joined to ALS Dim Functionaris) and FK_REDEN_MEELEZEN (joined to ALS Dim Reden Meelezen).
    The Business Model
    In the business model, the four physical tables for the N:N relation have been combined into one logical dimension table.
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/BusinessModel-1.png
    DIM Meelezer contains one LTS in which the four physical tables have been combined:
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/LTS1.png
And all the required logical columns have been created:
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/LTS2.png
    DIM Meelezer has also been identified as a bridge table and a Business Key has been defined on a combination of the FK’s in the bridge table and business codes of the two dimension tables.
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/BMDIM.png
Next, a hierarchy was created for Dim Meelezer:
    http://i84.photobucket.com/albums/k24/The_Dutchman_2006/OBIEE/Hier.png
    In Feit Zaakverloop, a measurement called ‘# Handelingen’ was created using SUM_HANDELINGEN, with an aggregation rule of SUM.
    In the LTS of both the DIM Meelezer and Feit Zaakverloop, the Logical Content Levels have both been set to: LVL Detail – Meelezer.
Please provide suggestions that will NOT require changes to the physical data model, as such changes would take too much time (or at least would not be ready before my deadline).
    Thanks!

    Hmm, no replies yet...
    Am I in 'uncharted territory' with this issue?

  • Bridge Table between two fact tables

    Hello everybody,
From what I have read in the BI Administration tool help and on this forum, bridge tables are used to define many-to-many relations between dimension and fact tables. Is it possible to have a bridge table defining a many-to-many relation between two fact tables?
Here is my scenario:
1. We have a fact table called fact_Orders describing orders for some products.
2. We have a fact table called fact_Sales describing sales of these products.
3. We have a table describing the transformation from order lines to sales lines, which is a many-to-many relation, because it is possible to transform an order in more than two steps.
I was thinking of connecting the two fact tables with a bridge table.
If bridge tables are inappropriate for this case, what could be a better model for my scenario?
    Thanks for your time.

    Hi,
Well, a conformed dimension is effectively a bridge table between two facts, so I'm not sure why you need anything else. If there is a one-to-many from D1 to F1 and a one-to-many from D1 to F2, then effectively there is a many-to-many join from F1 to F2 through the D1 dimension.
Sounds to me like all you need is an order dimension table; rows in the orders fact table will join to this dimension, and so will rows in the sales fact table. You can then do calculations like number of sales per order, total sales revenue per order, number of order items per order, etc.
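To make that concrete, here is a minimal sketch of the "aggregate each fact to the shared grain, then join through the conformed dimension" pattern (Python with pandas; all table and column names are made up for illustration):

# Hypothetical data: an order dimension shared by two fact tables.
import pandas as pd

dim_order   = pd.DataFrame({"order_id": [1, 2]})
fact_orders = pd.DataFrame({"order_id": [1, 1, 2],
                            "order_qty": [5, 3, 7]})
fact_sales  = pd.DataFrame({"order_id": [1, 2, 2],
                            "sales_amt": [50.0, 30.0, 40.0]})

# Aggregate each fact to the order grain first, then join through the
# dimension; a direct fact-to-fact join would multiply rows.
orders_per = fact_orders.groupby("order_id", as_index=False).sum()
sales_per  = fact_sales.groupby("order_id", as_index=False).sum()
report = (dim_order.merge(orders_per, on="order_id")
                   .merge(sales_per, on="order_id"))
print(report)   # one row per order: total quantity and total sales revenue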
    Regards,
    Matt

  • Change table size and headers in type def cluster

Is it possible to change a table size and headers that are inside a type def cluster?
I have a vi that loads test parameters from a csv file. The original program used an AC load, so there was a column for power factor. I now have to convert this same program to be used with a DC load, so there is no power factor column.
I have modified the vi to adjust the "test table" dynamically based on the input file. But the "test table" in the cluster does not update its size or column headers.
The "test table" in the cluster is used throughout the main program to set the values for each test step and to display the current step by highlighting the row.
Attachments:
Load Test Parms.JPG (199 KB)
Table Cluster.JPG (122 KB)

Never mind, I figured it out...
I was doing it wrong from the start. In an effort to save time writing the original program, I simply copied the "test table" into my type def cluster. This worked, but was not really as universal as I thought it would be, as the table was now set in stone since the cluster is a type def.
I should not have done that, but rather used an array in the cluster and only used the table in the top-level VI, where it's displayed on the screen.

  • Alternative of Bridge table in data Modelling.

    Hello Gurus,
While doing the data modeling, we found one place where we have many-to-many joins between one fact and 3 dimensions,
where in each dimension we mostly have only one attribute that relates many-to-many with the fact.
So in OBIEE we have to build a bridge table to take care of the issue.
Is there any alternative method of data modeling that can eliminate the bridge table itself?
I was thinking of adding the dimension attribute to the fact itself. Though it's at a different grain, it should work??

    If you really have a many-to-many relationship from fact to dimension, which attribute value (which of the many) would you put on the fact?
    What is the issue you are having with a bridge table?

  • Weblogic Bridge Batch Size setting ?

What is the prerequisite to set the JMS bridge batch size in WebLogic 10.3.3?
    Customer setting in production:
    QoS --> Exactly once
    Asynchronous Mode Enabled --> True
    Batch Size --> 1
    From weblogic console
    "A messaging bridge instance provides transaction semantics when the QOS is Exactly-once. This envelops a received message and sends it within a user transaction (XA/JTA)."
    From documentation
    Changing the Batch Size
    When the Asynchronous Mode Enabled attribute is set to false and the quality of service is Exactly-once, the Batch Size attribute can be used to reduce the number of transaction commits by increasing the number of messages per transaction (batch). The best batch size for a bridge instance depends on the combination of JMS providers used, the hardware, operating system, and other factors in the application environment. See “Configure transaction properties” in Administration Console Online Help.
    Can batch size be set when Asynchronous Mode Enabled --> True ? If yes, what can be the optimal batch size in a production environment ?
Right now the application works fine for non-batch orders from CRM. When customers submit a batch of 500 orders during specific hours of the day, a lot of messages get queued up, affecting the order completion rate.
    Thanks,
    Murali

BatchSize does not take effect in async mode.
What is the provider of your source destination? If it is not a WLS JMS destination, your bridge may actually work in sync mode although you have configured it to work in async mode. In order to work in async mode with exactly-once QOS, a WLS-proprietary feature is needed. You should see a log message in your server log file when the switch to sync mode happens.
If your bridge is indeed working in sync mode, you need to tune your batch size and batch interval to find the best performance for your application load condition. BatchInterval lets you send a batch before it is filled with the number of messages defined by the batch size.
    Hope this helps.

  • Table size not reducing after delete

The table size in dba_segments is not reducing after we delete data from the table. How can I reclaim the space after deleting the data from a table?
    Regards,
    Natesh

You wrote: "I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks."
Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
You also asked: "Would you also please explain the difference between LOB and LONG? Or point me to any link which explains it."
From the Oracle Concepts manual's section on the LONG data type:
"Note:
Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution; you should always be using LOBs.
Justin

  • TABLE SIZE NOT DECREASING AFTER DELETION. BLOCKS NOT BEING RE-USED

    Hi ,
    Problem:
    Table size before deletion: 40GB
    Total rows before deletion: over 200000
    Rows deleted=190000 rows
    Table size after deletion is more (as new data was inserted meanwhile).
    Purpose of table:
    This table is a sort of transaction table.
Whenever an SR is raised by a CSR, data gets inserted into this table, and the row is removed when the status is cleared.
    So there is constant insertion and purging will happen on this table.
    We are using ASSM and tablespace is LOCAL.
This table has a LONG column as well.
Is this problem because of the LONG column?
So here there are 2 problems:
1) INSERTs are not reusing the space freed by DELETEs.
2) New INSERTs are taking much more space than expected.
Let me have your suggestions.
    Thanks,


  • OBI EE 10g: Bridge tables and Based on Dimensions Aggregation

Hi experts,
I am working on OBIEE 10g (10.1.3.4).
The BM&M layer consists of:
    1) Logical fact table "Sale_Indicators"
    Fields: SALE_ID (PK, FK),
    D1_ID (FK),
    D2_ID (FK),
    Indicator1 (measure, level of granularity: SALE_ID),
    Indicator2 (measure, level of granularity: SALE_ID),
    Indicator3 (measure, level of granularity: SALE_ID)
2) Logical dimension tables:
    "Sales" (PK: SALE_ID),
    "D1" (PK: D1_ID),
    "D2" (PK: D2_ID),
    "Customers" (PK: SALE_ID, CUST_ID) - bridge table!
    "Products" (PK: SALE_ID, PROD_ID) - bridge table!
    3) Dimensions: SalesDim, D1Dim, D2Dim, CustomersDim, ProductsDim
If the fact table is joined with a bridge table, the number of rows in the fact table is multiplied, for example:
    D1_ID | SALE_ID | CUST_ID | Indicator1
    777 | 1 | 14 | 10
    777 | 1 | 17 | 10
    777 | 2 | 15 | 12
    888 | 3 | 16 | 20
    888 | 3 | 17 | 20
    888 | 4 | 19 | 30
I need to get this report:
    D1_ID | Indicator1 (SUM)
    777 | 22
    888 | 50
and with a filter by customer, for example (CUST_ID = 17):
    D1_ID | Indicator1 (SUM)
    777 | 10
    888 | 20
I am trying to use "based on dimensions" aggregation, for example (Indicator1):
    Dimension Formula
    CustomersDim MIN
    ProductsDim MIN
    Others SUM
The generated physical SQL joins EVERY dimension to the fact table, even though they are not included in the final result set.
Is there any way to tweak the logical or physical model in order to eliminate the excessive joins?
Thanks in advance!
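For reference, the arithmetic the report needs (collapse the bridge-multiplied rows back to one row per sale before summing) can be checked outside OBIEE; a minimal sketch in Python with pandas, using the sample rows above:

# Sample rows from the post: the bridge join repeats each sale once per
# customer/role, so Indicator1 must be de-duplicated per SALE_ID.
import pandas as pd

rows = pd.DataFrame({
    "D1_ID":      [777, 777, 777, 888, 888, 888],
    "SALE_ID":    [  1,   1,   2,   3,   3,   4],
    "CUST_ID":    [ 14,  17,  15,  16,  17,  19],
    "Indicator1": [ 10,  10,  12,  20,  20,  30]})

def report(df):
    # keep one row per sale, then sum per D1_ID
    per_sale = df.drop_duplicates(subset=["D1_ID", "SALE_ID"])
    return per_sale.groupby("D1_ID")["Indicator1"].sum()

print(report(rows))                      # 777 -> 22, 888 -> 50
print(report(rows[rows.CUST_ID == 17]))  # 777 -> 10, 888 -> 20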

I found this text in the help, but I didn't understand it, because when I check the "based on dimensions" check box, I can choose aggregation rules for each dimension, not only the time dimension.
    Also, I found in the help menu:
    "In the Aggregation tab, select the Based on dimensions check box.
    The Browse dialog box automatically opens.
    In the Browse dialog box, click New, select a dimension over which you want to aggregate, and then click OK.
    In the Aggregation tab, from the Formula drop-down list, select a rule."
I followed the steps suggested by the text above, but it didn't work.

Give me the SQL query which calculates the table size in Oracle 10g ECC 6.0

Hi experts,
Please give me the SQL query which calculates the table size in Oracle 10g ECC 6.0.
Regards

    Orkun Gedik wrote:
    select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
Hi,
This possibly delivers wrong data in MCOD installations.
Depending on the Oracle version and patch level, dba_segments does not always have the correct data, especially for indexes right after being rebuilt in parallel (even in DB02, because it uses USER_SEGMENTS).
It takes a day for the data to get back in line (I never found out who did the correction at night; could it be RSCOLL00?).
Use the above statement with "OWNER =" in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
Use it with
segment_name LIKE '<TABLE_NAME>%'
if you would like to see the related indexes as well.
For partitioned objects, a join from dba_tables / dba_indexes to dba_tab_partitions / dba_ind_partitions to dba_segments might be needed, especially for hash-partitioned tables, depending on how they were created (partition names SYS_xxxx).
Volker

  • Enqueue Replication Server - Lock Table Size

Note: I think I posted this wrongly under ABAP Development, hence I request the moderator to kindly delete that post. Thanks.
Dear Experts,
If the Enqueue Replication Server is configured, can you tell me how to check the lock table size value, which we set using the profile parameter enque/table_size?
If the enqueue server is configured on the same host as the CI, it can be checked using
ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
As it is a standalone Enqueue Server, I don't know where to check this value.
    Thanking you in anticipation.
    Best Regards
    L Raghunahth

Hi Raghunath,
Check the following links:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
    Regards
    Bhaskar

How to make use of a bridge table in the report

Hi all,
I have two tables, let us say T1 and T2, having a many-to-many relation. If I take one column from the T1 table and another column from the T2 table in the report, it takes a long time to generate. What is the problem, and why was the bridge table concept introduced?
Let us say I have created one bridge table, TB. When I need to generate a report having columns from T1 and T2, how do I use the bridge table? Please let me know if anyone knows.
Thanks
Raj

    Hi,
Check this out. I'm not sure if it would help, but it might: http://www.rittmanmead.com/2008/08/the-mystery-of-obiee-bridge-tables/
    J.
