One table or many?

I'm just starting to rewrite a web app, and looking at the
database, one of the tables is getting quite large (field-wise).
The table in question holds room attributes and is used in 3
different apps. It's not particularly large in either direction and
the records aren't in high demand, so I suspect the overall answer
is that in this case it doesn't matter, but I'd be interested in
people's thoughts.
Do I have one large table with approx. 90 fields, which has
less query load as I'm not joining tables so much? Or do I have 3
tables (one for the basic room details (5-10 fields), one for the
equipment in the room (30 fields), etc.), which would need joining
each time I used them, but would be easier for writing queries, as
select * would be appropriate most of the time, and would arguably
be easier to maintain and modify (for both db and queries)?
The floor is now open!

Wow, hard to answer. What about your data model? Do you have
data in your table that is not normalized? Single-table,
"spreadsheet-like" database designs are an indicator of a missing or
inadequate data model and design. However, it is almost impossible
to know for sure without knowing the nature of your data, how your
entities relate, and what attributes are associated with these
entities, etc.
Phil
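
To make the trade-off concrete, here is a minimal sketch (hypothetical table and column names, not the poster's schema) of the split design, plus the single join that recovers the wide view when an app needs everything:

CREATE TABLE rooms (
    room_id  INTEGER PRIMARY KEY,
    name     VARCHAR(60) NOT NULL,
    capacity INTEGER
);

CREATE TABLE room_equipment (
    room_id    INTEGER NOT NULL REFERENCES rooms(room_id),
    projector  CHAR(1) DEFAULT 'N',
    whiteboard CHAR(1) DEFAULT 'N',
    PRIMARY KEY (room_id)
);

-- One LEFT JOIN recovers the wide view on demand:
SELECT r.*, e.projector, e.whiteboard
FROM rooms r
LEFT JOIN room_equipment e ON e.room_id = r.room_id;

Apps that only need the basic room details never pay for the join.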

Similar Messages

  • One to Many with multiple tables on One side and one table on Many side

    Sorry for the confusion in the title. Here is my question. In my program, I have 2 different tables which store 2 different types of entities. Each of these entities has a list of attachments, which I store in a common attachment table. There is a one-to-many relationship between the entity tables and the attachment table.
    ENTITY_ONE (
    ID
    NAME
    )
    ENTITY_TWO (
    ID
    NAME
    )
    ATTACHMENTS (
    ID
    ENTITY_ID
    ATTACHMENT_NAME
    )
    ENTITY_ID in the ATTACHMENTS table is used to link attachments to either entity one or entity two. All IDs are generated by one sequence, so they are always unique. My question is how I can map this relationship into the EntityOne, EntityTwo and Attachment Java classes.

    For EntityOne and EntityTwo you can just define a normal OneToMany mapping using the foreign key.
    Are you using JPA, or the TopLink API? JPA requires a mappedBy for the OneToMany, so this may be more difficult. You should be able to just add a JoinColumn on the OneToMany and make the column insertable/updatable=false.
    For the attachment, you could either map the foreign key as a Basic (DirectToFieldMapping) and maintain it in your model, or use a VariableOneToOne mapping in TopLink (this will require that the entities share a common interface).
    James : http://www.eclipselink.org : http://en.wikibooks.org/wiki/Java_Persistence
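    Not part of the reply above, but as an illustration on the SQL side: because all IDs come from one sequence, a plain UNION resolves an attachment's owner across both entity tables (lower-case names mirror the DDL sketched in the question):
    SELECT a.id, a.attachment_name, e.name AS owner_name
    FROM attachments a
    JOIN (SELECT id, name FROM entity_one
          UNION ALL
          SELECT id, name FROM entity_two) e
      ON e.id = a.entity_id;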

  • How to get missing records from one table

    I have one table with many records. Each time a record is entered, the date the record was entered is also saved in the table.
    I need a query that will find all the missing records (dates) in the table.
    so if I have in my table:
    ID          Date          Location
    1           4/1/2015        bld1
    2           4/2/2015        bld1
    3           4/4/2015        bld1
    I want to run a query like:
    Select Date, Location FROM [table] WHERE (Date Between '4/1/2015' and '4/4/2015') and (Location = 'bld1')
    and Date not in
    (Select Date FROM [table])
    and the results would be:
    4/3/2015   bld1
    Thank you

    Do you have a table with all possible dates in it?  You can do a left join from that to your above mentioned table where the right side of the join is null.  If you don't have a table with all possible dates, you could use a numbers table.
    Below is one way to achieve what you want with a numbers table...
    DECLARE @Table table (ID Int, DateField Date, Location VarChar(4))
    DECLARE @RunDate datetime
    SET @RunDate=GETDATE()
    IF OBJECT_ID('dbo.Numbers') IS NOT NULL
    DROP TABLE dbo.Numbers
    SELECT TOP 10000 IDENTITY(int,1,1) AS Number
       into Numbers
        FROM sys.objects s1
        CROSS JOIN sys.objects s2
    ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
    INSERT INTO @Table (ID, DateField, Location)
    VALUES ('1','20150401','bld1')
    ,('2','20150402','bld1')
    ,('3','20150404','bld1');
    WITH AllDates
    as
    (
    SELECT DATEADD(dd,N.Number,D.StartDate) as Dates
    FROM Numbers N
    cross apply (SELECT CAST('20150101' as Date) as StartDate) as D
    )
    select *
    from AllDates AD
    left join @Table T on AD.Dates = T.DateField
    where ad.Dates between '20150401' and '20150404'
    AND T.ID IS NULL
    LucasF

  • How to view one table used percentage in sqlplus?

    If the table has only one record in dba_free_space, then
    select round(block_id/blocks,4)*100
    from dba_free_space
    where tablespace_name='xxx';
    could list the used percentage. But if one table has many records in dba_free_space, how do I display this table's used percentage?

    DBA_FREE_SPACE displays free extents in all tablespaces: there is no direct relationship with any given table.
    If you want to get free space in an existing table you should use the DBMS_SPACE package. Here is an old but nice demo on AskTom.
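    A minimal PL/SQL sketch of that suggestion (assuming a table MYTABLE owned by SCOTT; enable SERVEROUTPUT to see the result). DBMS_SPACE.UNUSED_SPACE reports allocated vs. unused blocks, from which a rough used percentage follows:
    DECLARE
      l_total_blocks  NUMBER;
      l_total_bytes   NUMBER;
      l_unused_blocks NUMBER;
      l_unused_bytes  NUMBER;
      l_file_id       NUMBER;
      l_block_id      NUMBER;
      l_last_block    NUMBER;
    BEGIN
      DBMS_SPACE.UNUSED_SPACE(
        segment_owner             => 'SCOTT',
        segment_name              => 'MYTABLE',
        segment_type              => 'TABLE',
        total_blocks              => l_total_blocks,
        total_bytes               => l_total_bytes,
        unused_blocks             => l_unused_blocks,
        unused_bytes              => l_unused_bytes,
        last_used_extent_file_id  => l_file_id,
        last_used_extent_block_id => l_block_id,
        last_used_block           => l_last_block);
      DBMS_OUTPUT.PUT_LINE('Used %: ' ||
        ROUND((l_total_blocks - l_unused_blocks) / l_total_blocks * 100, 2));
    END;
    /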

  • Copying many rows from one table to another

    Could anyone tell me the best way to copy many rows (~1,000,000) from one table to another?
    I have supplied a snippet of code that is currently being used in our application. I know that this is probably the slowest method to copy the data, but I am not sure of the best way to proceed. I was thinking that using BULK COLLECT would be better, but I do not know what would happen to the ROLLBACK segment if I did this. Also, should I look at disabling the indexes while the copy is taking place, and then re-enabling them after it is complete?
    Sample of code currently being used:
    PROCEDURE Save_Data
    IS
    CURSOR SCursor IS
    SELECT     ROWID Row_ID
    FROM     TMP_SALES_SUMR tmp
    WHERE NOT EXISTS
    (SELECT 1
         FROM SALES_SUMR
         WHERE sales_ord_no = tmp.sales_ord_no
         AND cat_no = tmp.cat_no
         AND cost_method_cd = tmp.cost_method_cd);
    BEGIN
    FOR SaveRec IN SCursor LOOP
    INSERT INTO SALES_ORD_COST_SUMR
    SELECT *
    FROM TMP_SALES_ORD_COST_SUMR
    WHERE ROWID = SaveRec.Row_ID;
    RowCountCommit(); -- Performs a Commit for every xxxx rows
    END LOOP;
    COMMIT;
    EXCEPTION
         WHEN OTHERS THEN
              RAISE; -- re-raise; real handling omitted here
    END Save_Data;
    This type of logic is used to copy data for about 8 different tables, each containing approximately 1,000,000 rows of data.

    Your best bet is
    Insert into SALES_ORD_COST_SUMR
    select * from TMP_SALES_ORD_COST_SUMR;
    commit;
    Read this
    http://asktom.oracle.com/pls/ask/f?p=4950:8:15324326393226650969::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:5918938803188
    VG
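    If the single INSERT ... SELECT is still too slow, a common variant (standard Oracle features, not from this thread; NOLOGGING has backup/recovery implications, so check with your DBA) is a direct-path insert:
    ALTER TABLE sales_ord_cost_sumr NOLOGGING;
    INSERT /*+ APPEND */ INTO sales_ord_cost_sumr
    SELECT * FROM tmp_sales_ord_cost_sumr;
    COMMIT;
    Direct-path inserts write above the high-water mark and bypass the buffer cache, which is usually much faster for million-row copies; indexes can additionally be set UNUSABLE first and rebuilt afterwards.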

  • How many primary keys use in one table

    Hi,
    Please help me. Maximum how many primary key columns can be used within one table?
    Regards,
    Sunil Kumar.T

    Hi,
    To my knowledge, it depends on the database & version you are working with.
    This is a sample description I have seen for one database:
    Technical Specification of SAP DB Version 7.4
    Description                                  Maximum Value
    Database size                                32 TB (with 8 KB page size)
    Number of files/volumes per database         64...4096, specified by a configuration parameter
    File/volume size (data)                      518...8 GB (dependent on operating system limitations)
    File/volume size (log)                       1...6 TB (dependent on operating system limitations)
    SQL statement length                         >= 16 KB (default value 64 KB, specified by a system variable)
    SQL character string length                  Specified by a system variable
    Identifier length                            32 characters
    Numeric precision                            38 places
    Number of tables                             unlimited
    Number of columns per table (with KEY)       1024
    Number of columns per table (without KEY)    1023
    Number of primary key columns per table      512
    Number of columns in an index                16
    Reward Points if useful

  • How many primary key fields  allowed for one table?

    hi,
    when creating a table, how many primary key fields are allowed for one table?
    please can anyone give an answer
    thanks

    Just checked it. It's 255, not 155. You can have as many key fields as you want, but you cannot go over 255 bytes total length for all key fields. You will get a warning over 120, as it says there is limited functionality with a key over 120 in length.
    Again, this is the total length of all key fields.
    Regards,
    Rich Heilman

  • How many trigger possible on one table?

    How many triggers are possible on one table?

    Hi,
    shiv kumar wrote:
    How many triggers are possible on one table?
    There's no limit.
    If you have more than one trigger that fires at the same time (for example "BEFORE INSERT OR UPDATE ... FOR EACH ROW"), ask yourself why you want or need 2 separate triggers, rather than one (which maybe calls two separate procedures).
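    A hedged sketch of that suggestion (hypothetical table and procedure names): one trigger at the firing point, delegating to two separate procedures, makes the execution order explicit:
    CREATE OR REPLACE TRIGGER orders_biu
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW
    BEGIN
       -- both actions live in one trigger, so their order is guaranteed
       audit_order_change(:NEW.order_id);
       refresh_order_totals(:NEW.order_id);
    END;
    /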

  • How to get the data from one table and insert into another table

    Hi,
    We have a requirement to build an OA page where the data needs to be populated from one table and, on save, written into another table.
    What is the best way to implement this in OAF?
    I understand that if we attach a VO instance to a region/page, we can only pull and put data into one table.
    Thanks

    You can achieve this in many different ways; one is:
    1. Create another VO based on the EO which is based on the dest table.
    2. At save, copy the contents of the source VO into the dest VO (see the copy routine in the dev guide).
    3. Committing the transaction will push the data into the dest table on which the dest VO is based.
    I understand that if we attach VO object instance to region/page, we only can pull and put data in to only one table.
    If by table you mean a DB table, then no: you can have a VO based on multiple EOs, which will do the DMLs accordingly.
    Thanks
    Tapash

  • How to read LONG RAW data from one  table and insert into another table

    Hello EVERYBODY
    I have a table called sound with the following attributes. In the music attribute I have stored some messages in different languages, like Hindi, English, etc. I want to concatenate all Hindi messages and store them in another table with only one attribute, of type LONG RAW; this attribute is attached to the sound item.
    When I click the play button of the sound item, all the messages recorded in Hindi should play one by one automatically. For that I'm doing the following.
    I have written the following when-button-pressed trigger, which concatenates all the messages of the selected language from the sound table and stores them in another table called temp.
    The sound is then played from the temp table.
    declare
         tmp sound.music%type;
         temp1 sound.music%type;
         item_id ITEM;
    cursor c1
    is select music
    from sound
    where lang=:LIST10;
    begin
         open c1;
         loop
              fetch c1 into tmp; -- THIS LINE GENERATES THE ERROR
              exit when c1%notfound;
              temp1:=temp1||tmp;
         end loop;
    CLOSE C1;
    insert into temp values(temp1);
    item_id:=Find_Item('Music');
    go_item('music');
    play_sound(item_id);
    end;
    But when I click the button, it generates the following error:
    WHEN-BUTTON-PRESSED TRIGGER RAISED UNHANDLED EXCEPTION ORA-06502.
    ORA-06502: PL/SQL: numeric or value error
    SQL> desc sound;
    Name      Null?    Type
    SL_NO              NUMBER(2)
    MUSIC              LONG RAW
    LANG               CHAR(10)
    If my process for solving the above problem is OK, then please tell me the solution for the error; otherwise, please suggest any other way to solve the problem.
    Thanks in advance.
    D. Prasad


  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (29 LONG fields), I can no longer insert a dataset.
    Does a MaxDB parameter, or anything else I can change, exist to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02" (
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID")
    )
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    _MULT_IO_BLOCK_CNT                    4
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _USE_ASYNC_IO                         YES
    _IOPROCS_PER_DEV                      1
    _IOPROCS_FOR_PRIO                     1
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          36864
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     16384
    _MBLOCK_STACK_SIZE                    16384
    _MBLOCK_STRAT_SIZE                    8192
    _WORKSTACK_SIZE                       16384
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;30000*us;
    _TASKCLUSTER_03                       compress
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    NO
    _MP_RGN_BUSY_WAIT                     NO
    _MP_DISP_LOOPS                        1
    _MP_DISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    _MP_RGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _SVP_1_CONV_FLUSH                     NO
    _MAXGARBAGE_COLL                      0
    _MAXTASK_STACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            10000
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    _SERVERDESC_CACHE                     73
    _SERVERCMD_CACHE                      21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _EVENT_ALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    >
    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that is needed to work around the bug. That may be the reason why I had not given it a try.
    >
    > Since explicitly specifying all values for an INSERT is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing that you get 10000 of reports with the same name...
    You are right. This table looks a bit strange. The story behind it is: each crystal report in the application has a few textblocks which are the same for all the e.g. persons the e.g. letter is created for. In principle, the textblocks could be added directly to the crystal report. However, as is often the case, these textblocks may change once in a while. Thus, I put the texts of the textblocks into this "strange" db table (one row for each report, one field for each textblock; the name of the report is given by "ztb_nameofreport"). And the application offers a menu by which these textblocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would blow up the sql select command of the crystal report very much if they were integrated into that select command. Thus it is realized in another way: the texts are read before the crystal report is loaded, then the texts are "given" to the crystal report (by its parameters), and finally the crystal report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients. The MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available, and I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
    Michael

  • Number of Records in one table

    Hi All,
    Can anybody let me know how many records one table can have in the Oracle9i Lite database? I mean, is there any restriction on the maximum number of rows for a table? The Palm database (PDB files) has the restriction that one table can have a maximum of 64000 rows.
    Does the same restriction apply to the Oracle Lite database?
    Thanks in advance.

    We have had several tests of hundreds of thousands of rows using 9iLite on a PocketPC host. The upper bound appears to be set by the physical limits of the host, not by 9iLite.
    RP.

  • [AS] How to test the presence of at least one table?

    Hello everyone,
    I would like to test for the presence of at least one table in a document before starting a process (on edge strokes). 
    I found this, but I do not know if this is really effective:
                                  set CountOfTables1 to count of tables of every story
                                  set CountOfTables2 to every table of every story
    The first gives me a list of the number of tables in each story; the second gives me the object references of every table.
    Is there another way?
    TIA.
    Oli.

    Marc
    The test I did for nested tables stank (table pasted in a rectangle and that rectangle pasted in a table). It does not work for nested tables.
    I tested .isValid and it's a lot slower.
    Uwe
    Yes, I noticed that difference after posting my eeakk comment.
    Using slice(0) after the getElements can make a big difference, but still the simple length is going to be quicker in this case.
    Also, getElements without the slice(0) will return "too many elements" if there are too many elements (between 1,500 and 15,000).
    On the other hand, for repeated access of the variable (looping) it is, as is known, normally quicker to use getElements().slice(0).length than just length.
    In summary:
    1) Your anyItem() method is going to be very quick on a document which has a high ratio of stories containing tables.
    2) Although your method could quite possibly be 100 times quicker than mine, I would definitely use my method in terms of typing time and space versus the 1/2 second it might save on execution time per 10,000 tables.
    3) The only accurate method (in this thread) for counting the tables, including nested and footnote tables, is Marc's Grep method.
    So I guess the 3 of us can share first place.
    I just wonder if using the same technique as Marc used in our discussion some time back on nested buttons might get to a quicker count than using the Grep method here.
    Clearly needs to be repeated on different types of document setups one can try the below
    if (!app.properties.activeDocument) {alert ("Really!?"); exit()};
    var scriptCount = 1;
    // Script 1 table.length
    var doc = app.activeDocument,
          start = Date.now(),
          t = doc.stories.everyItem().tables.length,
          finish = Date.now();
    $.writeln ("\rtable.length Script (" + scriptCount++ + ") took " + (finish - start) + "ms\r" + ((t) ? t + " Table" + ((t>1) ? "s" : "") : "Diddlysquat"));
    // Script 2 getElements
    var doc = app.activeDocument;
    var start = Date.now();
    var t = doc.stories.everyItem().tables.everyItem().getElements().length;
    var finish = Date.now();
    $.writeln ("\rgetElements Script (" + scriptCount++ + ") took " + (finish - start) + "ms\r" + ((t) ? t + " Table" + ((t>1) ? "s" : "") : "Diddlysquat"));
    // Script 3 getElements.slice(0)
    var doc = app.activeDocument;
    var start = Date.now();
    var t = doc.stories.everyItem().tables.everyItem().getElements().slice(0).length;
    var finish = Date.now();
    $.writeln ("\rgetElements.slice(0) Script (" + scriptCount++ + ") took " + (finish - start) + "ms\r" + ((t) ? t + " Table" + ((t>1) ? "s" : "") : "Diddlysquat"));
    // Script 4      isValid
    var start = Date.now();
    var t = doc.stories.everyItem().tables.everyItem().isValid;
    var finish = Date.now();
    $.writeln ("\risValid Script (" + scriptCount++ + ") took " + (finish - start) + "ms\rThe document contains " + ((t) ? "tables" : "no tables"));
    // Script 5   Marc's Grep
    var start = Date.now();
    var t = countTables();
    var finish = Date.now();
    $.writeln ("\rMarc's Grep Script as said only accurate one but slow (" + scriptCount++ + ") \rtook " + (finish - start) + "ms\r" + ((t) ? t + " Table" + ((t>1) ? "s" : "") : "Diddlysquat"));
    // Script 6 very lot of anyItem
    var start = Date.now();
    var myResult = atLeastOneTableInDoc(app.documents[0]);
    var finish = Date.now();
    $.writeln ("\rUwes Anyone for Bingo Script (" + scriptCount++ + ") took " + (finish - start) + "ms\rThe document contains " + ((myResult) ? "tables" : "no tables"));
    // Script 7 anyItem length
    var start = Date.now();
    var myResult = detectATable();
    var finish = Date.now();
    $.writeln ("\ranyItem length Script (" + scriptCount++ + ") took " + (finish - start) + "ms\rThe document contains " + ((myResult) ? "tables" : "no tables"));
    // Script 8 anyItem elements length
    var start = Date.now();
    var myResult = detectATable2();
    var finish = Date.now();
    $.writeln ("\ranyItem elements length Script (" + scriptCount++ + ") took " + (finish - start) + "ms\rThe document contains " + ((myResult) ? "tables" : "no tables"));
    //FUNCTION USING anyItem() EXTENSIVELY:
    function atLeastOneTableInDoc(myDoc){
        var myResult = 0;
        if(myDoc.stories.length === 0){return myResult};
        var myStories = myDoc.stories;
        //LOOP length == length of all Story objects
        //using anyItem() for checking length of Table objects
        for(var n=0;n<myStories.length;n++){
            if(anyStory = myStories.anyItem().tables.length > 0){
                myResult = 1;
                return myResult;
        //FALL-BACK, if anyItem() will fail:
    //EDIT:    if(!myResult){
            for(var n=0;n<myStories.length;n++){
                if(myStories[n].tables.length > 0){
                    myResult = 2;
                    return myResult;
    //EDIT:       };
        return myResult;
    }; //END function atLeastOneTableInDoc(myDoc)
    function detectATable(){
        s=app.documents[0].stories;
         if(s.anyItem().tables.length){
            return true; // Bingo
       for(var n=0;n<s.length;n++){
            if(s[n].tables.length) return true
        return false
    function detectATable2(){
        s=app.documents[0].stories;
         if(s.anyItem().tables.length){
            return true; // Bingo
        var sl = app.documents[0].stories.everyItem().getElements().slice(0);
       for(var n=0;n<s.length;n++){
            if(s[n].tables.length) return true
        return false
    function countTables()
        app.findTextPreferences = null;
        app.findTextPreferences.findWhat = "\x16";
        var t = app.properties.activeDocument||0;
        return t&&t.findText().length;
    P.s.
    Marc,
    A bit of homework. I didn't test it on the above but I found the your highRes function trimmer seems to have a large favoritism to the first function to compare.
    I was testing 2 prototypes and swapping the order swapped the result. One can make the prototypes the same and see the time difference.

  • Query performance on same table with many DML operations

    Hi all,
    I have one table with 100 rows of data. After that, I inserted, deleted, and modified data many times.
    A select statement after the DML operations takes much more time compared with before the DML operations (there is not much difference in the data).
    If I create the same table again, newly, with the same data, and fire the same select statement, it takes less time.
    My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
    Thanks in advance,
    Pal

    Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that re-builds are very rarely required.
    As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
    For example, if you had a table like this, where seq_no is populated by a sequence and indexed:
    seq_no         NUMBER
    processed_flag VARCHAR2(1)
    trans_date     DATE
    and then did deletes like:
    DELETE FROM t
    WHERE processed_flag = 'Y' and
          trans_date <= ADD_MONTHS(sysdate, -24);
    that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
    HTH
    John
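    For completeness, a sketch of the usual reorganization commands (standard Oracle 10g+ syntax, not from this thread; t and t_pk are hypothetical names, and SHRINK SPACE requires an ASSM tablespace):
    -- compact the table in place and lower its high-water mark
    ALTER TABLE t ENABLE ROW MOVEMENT;
    ALTER TABLE t SHRINK SPACE;
    -- or rebuild a specific index if the index is what degraded
    ALTER INDEX t_pk REBUILD;
    -- then refresh optimizer statistics
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T')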

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
    Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
    CREATE TABLE ABSTRACTPRODUCT (
        ID VARCHAR(8) NOT NULL,
        DESCRIPTION VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    CREATE TABLE PRODUCT (
        ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
        CODE VARCHAR(10) NOT NULL,
        PRICE DECIMAL(12,2),
        PRIMARY KEY(ID)
    );
    CREATE UNIQUE INDEX iProduct ON Product(code);
    CREATE TABLE BOOK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        AUTHOR VARCHAR(60) NOT NULL,
        PRIMARY KEY (ID)
    );
    CREATE TABLE COMPACTDISK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        ARTIST VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    Is there a way to improve queries like
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
    Think of a table with many rows: does something exist inside MaxDB to improve this type of query? Some annotations in the SQL? Or declaring tables that extend another by PK? On other databases I managed this using materialized views, but I think this can be faster just using the PK; am I wrong? Is it better to consolidate all tables into one table? What is the impact on database size of this consolidation?
    Note: with consolidation I will lose the NOT NULL constraints on the database side.
    Thanks for any insight.
    Clóvis
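    For illustration only (PRODUCT_FLAT is a hypothetical name, not from the thread): the consolidation mentioned above can keep the per-subclass NOT NULL rules by pairing a type discriminator with CHECK constraints:
    CREATE TABLE PRODUCT_FLAT (
        ID          VARCHAR(8)  NOT NULL,
        KIND        CHAR(2)     NOT NULL,  -- 'BK' = book, 'CD' = compact disk
        DESCRIPTION VARCHAR(60) NOT NULL,
        CODE        VARCHAR(10) NOT NULL,
        PRICE       DECIMAL(12,2),
        AUTHOR      VARCHAR(60),           -- only for books
        ARTIST      VARCHAR(60),           -- only for compact disks
        PRIMARY KEY (ID),
        CHECK (KIND <> 'BK' OR AUTHOR IS NOT NULL),
        CHECK (KIND <> 'CD' OR ARTIST IS NOT NULL)
    );
    -- the polymorphic query then needs no outer joins:
    SELECT CODE, DESCRIPTION,
           CASE KIND WHEN 'BK' THEN AUTHOR ELSE ARTIST END AS PERSON
    FROM PRODUCT_FLAT
    WHERE CODE LIKE '101%';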

    Hi Lars,
    I don't understand why the optimizer picks that index for TM in the execution plan rather than the join via the KEY column. Note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. The index columns follow:
    indexes of TipoMovimento
    INDEXNAME        COLUMNNAME        SORT  COLUMNNO  DATATYPE  LEN  INDEX_USED  FILESTATE  DISABLED
    ITIPOMOVIMENTO   TIPO              ASC   1         VARCHAR   2    220546      OK         NO
    ITIPOMOVIMENTO   ID_SYS            ASC   2         CHAR      6    220546      OK         NO
    ITIPOMOVIMENTO   MY_CONTA_DEBITO   ASC   3         CHAR      8    220546      OK         NO
    ITIPOMOVIMENTO   MY_CONTA_CREDITO  ASC   4         CHAR      8    220546      OK         NO
    ITIPOMOVIMENTO1  ID_SYS            ASC   1         CHAR      6    567358      OK         NO
    ITIPOMOVIMENTO2  DESCRICAO         ASC   1         VARCHAR   60   94692       OK         NO
    After I created the index iTituloCobrancaX7 on TituloCobranca(OID,DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
    OWNER  TABLENAME       COLUMN_OR_INDEX   STRATEGY                        PAGECOUNT
           TC              ITITULOCOBRANCA1  RANGE CONDITION FOR INDEX       5368
                           DATA_VENCIMENTO   (USED INDEX COLUMN)
           MF              OID               JOIN VIA KEY COLUMN             9427
           TM              OID               JOIN VIA KEY COLUMN             22
                                             TABLE HASHED
           PS              OID               JOIN VIA KEY COLUMN             1350
           BOL             OID               JOIN VIA KEY COLUMN             497
                                             NO TEMPORARY RESULTS CREATED
           JDBC_CURSOR_19                    RESULT IS COPIED, COSTVALUE IS  988
    Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected; if I drop the new index iTituloCobrancaX7 the optimizer still produces this execution plan, and with it the query executes in 110 ms. With that great news I did the same thing in the production system, but there the execution plan does not change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
    I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this at [catalog cache hit rate, how to increase?|;
    ) and the low selectivity of this SQL command. To achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts. Since this type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
    Could the MaxDB developers implement this type of index, or is such a feature not planned at all?
    If not, I must create another schema and consolidate tables to speed up the queries on my system, but with this consolidation I will get more overhead. I must solve the low selectivity, because I think that if the data in the tables grows, the query becomes impossible. I see that CREATE INDEX supports FUNCTION; maybe a FUNCTION that joins data of two tables could solve this?
    About the instance configuration, it is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
    About sending you the data from the tables: I don't have permission to do that. All the data is in a production system, and the customer does not give me the right to send any information. Sorry about that.
    best regards
    Clóvis
