Advanced table with many columns

I have an advanced table with roughly 60 columns, and the table is wider than the header image. How can I align the header image with the advanced table?
Thanks.
Edited by: Vieira on 3-Jun-2010 1.01

bedabrata wrote:
Hi,
I have a question: can we fix the size of the columns in a table region? I was unable to do so using the Width property of the items, so I tried the code below:
// Set a fixed width on one column of the table region.
OATableBean tableBean =
    (OATableBean)webBean.findIndexedChildRecursive("HeaderDetailsRN");
OAMessageStyledTextBean columnBean =
    (OAMessageStyledTextBean)tableBean.findIndexedChildRecursive("SblOrderNumber");
// Build an inline CSS style that carries only the width property.
CSSStyle columnWidth = new CSSStyle();
columnWidth.setProperty("width", "150px");
// The bean is null when the column is not found under this region.
if (columnBean != null)
{
  columnBean.setInlineStyle(columnWidth);
}
Now the width is set for columns that have values (data), but it is not set for columns whose values are null or empty.
Could any of you guide me on how to set the width for every column of a table region?
Thanks
Bedabrata

Open another topic for this. ;-)
Edited by: Vieira on 3-Jun-2010 9.27

Similar Messages

  • Sparse table with many columns

    Hi,
    I have a table that contains around 800 columns. The table is sparse: many rows
    contain up to 50 populated columns (the others contain NULL).
    My questions are:
    1. Can a table that contains many columns cause a performance problem? Is there an alternative way to
    hold a table with many columns efficiently?
    2. Does a row that contains NULL values consume storage space?
    Thanks
    dyahav

    [NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
    A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
    Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
    Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
    Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
    Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
    My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
    HTH!
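    To make the quoted advice concrete, here is a minimal SQL sketch (table and column names are invented for illustration) of the column-ordering idea and the IS NULL / NVL usage described above:

    -- Rarely populated columns are declared last, so trailing NULLs
    -- in a row consume no storage at all.
    CREATE TABLE sensor_readings (
      reading_id  NUMBER PRIMARY KEY,
      taken_on    DATE NOT NULL,
      core_value  NUMBER,         -- almost always populated
      rare_note   VARCHAR2(200),  -- almost always NULL: keep near the end
      rare_flag   CHAR(1)         -- almost always NULL: keep at the end
    );
    -- NULLs are found with IS NULL ("= NULL" is never true) ...
    SELECT COUNT(*) FROM sensor_readings WHERE rare_flag IS NULL;
    -- ... and NVL substitutes a non-null value for reporting.
    SELECT reading_id, NVL(rare_note, 'n/a') AS note FROM sensor_readings;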

  • Select count(x) on a table with many column numbers?

    Hi all,
    I have a table with physical data with 850 (!!) columns and
    ~1 million rows.
    The statement select count(cycle) from test_table is very, very slow.
    WHY?
    The select count(cycle) from test_table is very fast on a table with e.g. 10 columns. WHY?
    What does the number of columns have to do with the SELECT count(cycle) statement?
    create table test_table (
    cycle number primary key,
    stamp date,
    sensor_1 number,
    sensor_2 number,
    -- ... sensor_3 through sensor_848 elided ...
    sensor_849 number,
    sensor_850 number);
    on W2K Oracle 9i Enterprise Edition Release 9.2.0.4.0 Production
    Can anybody help me?
    Many Thanks
    Achim

    Hi Lennert, hi all,
    many thanks for all the answers. I'm not an Oracle expert.
    Sorry for my English.
    Hi Lennert,
    you are right. What must I do to use the index in the
    query? Can you give me a pointer in the right direction, please?
    Many greetings
    Achim
    select count(*) from w4t.v_tfmc_3_blocktime;
    COUNT(*) ==> Table with 3 columns (very fast)
    306057
    Execution plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_BLOCKTIME'
    Statistics
    0 recursive calls
    0 db block gets
    801 consistent gets
    794 physical reads
    0 redo size
    388 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    select count(*) from w4t.v_tfmc_3_value;
    COUNT(*) ==> Table with 850 columns (very slow)
    64000
    Execution plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_VALUE'
    Statistics
    1 recursive calls
    1 db block gets
    48410 consistent gets
    38791 physical reads
    13068 redo size
    387 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
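    The difference is visible in the statistics above: counting the 850-column table reads ~48000 blocks while the 3-column table reads ~800, because a full table scan must visit every block and wide rows span far more blocks. A hedged sketch of the usual remedy (assuming cycle is the NOT NULL primary key, so its index alone can answer the count):

    -- Gather statistics so the cost-based optimizer will consider the index.
    ANALYZE TABLE test_table COMPUTE STATISTICS;
    -- COUNT on a NOT NULL primary key can be satisfied by an index fast full
    -- scan of the (much smaller) PK index instead of a full table scan.
    SELECT /*+ INDEX_FFS(t) */ COUNT(cycle) FROM test_table t;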

  • No records in Azure databrowser viewing tables with many columns.

    Yesterday I encountered an issue while browsing a table created in Azure.
    I created a new database in Azure, and in this database I created and populated several tables, among them one very big table.
    The big table has 239 columns.
    I succeeded in populating this table with our in-company table data by means of a dtsx package. No problem this far.
    When I query the table from SQL Server Management Studio, I get correct results.
    However, the data browser on the Azure site itself does not show any data for this table. That's a little disappointing given that there are more than 76000 records in this table. Refresh didn't help.
    When I browse smaller tables with fewer data columns, I do get data in this data browser.
    Is this a known issue, or do you know a solution for it?
    Kind regards,
    Fred Silven
    AEB Amsterdam
    The Netherlands.

    Hello,
    Based on your description, you want to edit the data of a large table in the Management Portal of SQL Database, but it does not return rows in the GUI Design tab. Can you get the data when you select "TOP 200 rows"?
    Since there are 239 columns and 76000 rows in the table, the Portal may take a long time to load all the data in the GUI. Please try using a T-SQL statement to perform the select or update operation, and specify a condition in the WHERE clause to load only the needed data.
    Regards,
    Fanny Liu
    TechNet Community Support
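    Fanny's suggestion as a minimal sketch (the table, column, and date values are placeholders, not the poster's actual schema):

    -- Pull a manageable slice instead of asking the portal to render all 76000 rows.
    SELECT TOP (200) *
    FROM dbo.BigImportTable          -- hypothetical table name
    WHERE LoadDate >= '2013-01-01'   -- narrow the result in the WHERE clause
    ORDER BY LoadDate;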

  • Many to many join table with different column names

    Hi, I have a join table with different column names as foreign keys in the joining
    tables...
    E.g., I have a many-to-many relationship between Table A and Table B, and a join table C.
    Both A and B have a column called pk,
    and the join table C has columns called fk1 and fk2.
    Does CMP require the same column name in the join table as in the joining tables?
    Are there any workarounds?
    thanks

    Hi,
    No, the foreign key column names in the join table do not have to match the primary
    key names in the joined tables.
    -thorick
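    For reference, the arrangement being described, as a minimal SQL sketch (names taken from the question; the composite primary key on the join table is an assumption):

    CREATE TABLE a (pk NUMBER PRIMARY KEY);
    CREATE TABLE b (pk NUMBER PRIMARY KEY);
    -- The join table's FK column names need not match the referenced PK names.
    CREATE TABLE c (
      fk1 NUMBER REFERENCES a (pk),
      fk2 NUMBER REFERENCES b (pk),
      PRIMARY KEY (fk1, fk2)
    );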

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (making 29 LONG fields), I cannot insert a dataset anymore.
    Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02"
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID")
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5 Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that is needed to work around the bug. That may be the reason why I had not given it a try.
    >
    > Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what value the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing that you get 10000 of reports with the same name...
    You are right. This table looks a bit strange. The story behind it: each crystal report in the application has a few textblocks which are the same for all of the persons (e.g.) the letter is created for. In principle, the textblocks could be added directly to the crystal report. However, as is often the case, these textblocks may change once in a while. Thus, I put the texts of the textblocks into this "strange" db table (one row for each report, one field for each textblock; the name of the report is given by "ztb_nameofreport"), and the application offers a menu by which these textblocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would blow up the SQL select command of the crystal report if they were integrated into it. Thus it is realized another way: the texts are read before the crystal report is loaded, then the texts are passed to the crystal report via its parameters, and finally the crystal report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients. The MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available, and I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
    Michael

  • Error while importing a table with BLOB column

    Hi,
    I have a table with a BLOB column. When I export the table it is exported correctly, but when I import it into a different schema with a different tablespace, it throws this error:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "CMM_PARTY_DOC" ("PDOC_DOC_ID" VARCHAR2(10), "PDOC_PTY_ID" VAR"
    "CHAR2(10), "PDOC_DOCDTL_ID" VARCHAR2(10), "PDOC_DOC_DESC" VARCHAR2(100), "P"
    "DOC_DOC_DTL_DESC" VARCHAR2(100), "PDOC_RCVD_YN" VARCHAR2(1), "PDOC_UPLOAD_D"
    "ATA" BLOB, "PDOC_UPD_USER" VARCHAR2(10), "PDOC_UPD_DATE" DATE, "PDOC_CRE_US"
    "ER" VARCHAR2(10) NOT NULL ENABLE, "PDOC_CRE_DATE" DATE NOT NULL ENABLE) PC"
    "TFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS"
    " 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "TS_AGIMSAPPOLOLIVE030"
    "4" LOGGING NOCOMPRESS LOB ("PDOC_UPLOAD_DATA") STORE AS (TABLESPACE "TS_AG"
    "IMSAPPOLOLIVE0304" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE L"
    "OGGING STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEF"
    "AULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'TS_AGIMSAPPOLOLIVE0304' does not exist
    I used the import command as follows:
    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> log=<logfile.log>
    What can I do so that this table gets imported correctly?
    Also, can you tell me whether the BLOB is stored in a different tablespace than the default tablespace of the user?
    Thanks in advance.

    Hello,
    You can either
    1) create a tablespace with the same name in the destination where you are trying to import, or
    2) get the DDL of the table, modify the tablespace name to reflect an existing tablespace in the destination, run the DDL in the destination database, and then run your import command with the option ignore=y, which ignores all the create errors.
    Regards,
    Vinay
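    Option 1 as a minimal sketch (the datafile path and size are placeholders; the tablespace name comes from the error message above):

    CREATE TABLESPACE TS_AGIMSAPPOLOLIVE0304
      DATAFILE '/u01/oradata/ts_agimsapp01.dbf' SIZE 500M;

    For option 2, the import with create errors ignored would look like:

    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> ignore=y log=<logfile.log>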

  • How to get current row of Advanced table with no submit

    I have an advanced table with an 'Add Another Row' button. When an empty row is created by clicking this button, an LOV on the first column is used to populate the other fields of the row. The rows of the table carry a button called "Add dependencies" whose function is to take the user to a different table to add rows to a different VO. This button is just a button and not a submitButton. I am passing the row's primary key as part of the URI as the VO attribute FdTaskId={@FdTaskId}.
    The button works perfectly fine for rows already present in the database, but for rows created using the 'Add Another Row' button, though the LOV populates all fields including the primary key, the URI parameter passes null to the next page. That means the VO attribute is not set. This works fine only after a 'submit' happens on the first page. For example, if I click 'Add Another Row' again, then the previously inserted row passes the primary key fine as part of {@FdTaskId}.
    Could you please help me how I can resolve this issue?
    Thanks,
    Datta

    I am doing exactly that. My issue is that the VO attribute is passed correctly for the table rows which are fetched from the database. For the table rows created through the 'Add Another Row' button, the VO attribute passed through the URI as {@FdTaskId} returns null. The VO attribute is set only when a form submit happens. For example, after adding a row through the 'Add Another Row' button, click the same button again to add one more row, and now go back to the previously added row: you can see that its VO attribute is set ({@FdTaskId} carries a value).

  • How to create a table with editable column values.

    Hi everyone,
    I think this is very simple, but I'm not able to find how to do it. This is my requirement: I need to create a table with n columns and 1 row initially. The user should be able to enter data into this table, and a click of a button should insert the data into the database table. Also, there should be a button at the bottom of the table to add 1 row to the table.
    I know how to do the insert into the database, but can anyone please let me know how to create a table that allows the user to enter data, and how to create an 'add 1 row' button?
    Thanks in Advance!

    Raghu,
    Go through the ToolBox tutorial Create Page and the Advanced Table section of the OAF Guide.
    Step 1 - You need to create an EO and a VO based on this EO. The EO will be based on the database table where you want to insert the data.
    Step 2 - Create an Advanced Table region. (Refer to the Advanced Table section for more on this.)
    Step 3 - Attach this VO in the BC4J component of the Advanced Table region.
    Regards,
    Gyan

  • Table with 300 columns

    While talking with a friend, he told me that he sometimes uses tables with 300 columns which will only be accessed once for information extraction. For me this is a bit strange, because 300 columns seems like a lot of columns...
    Is having tables with so many columns a common practice in data warehousing? If yes, can you explain in which situations this can happen and how often?
    Regards

    We have tables with circa 150 columns.
    You need to know that we are not talking about relational systems that are normalized. It's common practice when building a data warehouse. The number of columns in your tables will depend on the complexity and size of your warehouse, and on the number, size and complexity of your source systems.
    The point is that even now, when the data warehouse is "finished", we still add/remove columns as the warehouse changes when the source systems change.

  • BI Answers - need 2 compound layouts with tables with different columns

    Hi,
    I need 2 tables in the same report with different columns displaying. Is this possible?
    I wanted to put them in their own compound layout and call each one with a view selector,
    but it seems impossible to create a 2nd table with different columns in the one report.
    Each time I try to replace one of the columns in the 2nd table with a different column,
    a message appears saying the deleted column will be deleted from all views.
    Many thanks for anyone's help.
    - Jenny

    As far as I know, your requirement can be done with a Pivot view but not with the regular Table view.
    We don't have an "exclude column" functionality in the Table view.
    Alternatively, instead of creating it in the same report, create the requirement as 2 separate reports and place them on a single dashboard in 2 separate sections.

  • ORA-00939 when creating XML table with Virtual Columns

    Getting error on creating table with VIRTUAL COLUMNS:
    Error at Command Line:4 Column:31
    Error report:
    SQL Error: ORA-00939: too many arguments for function
    00939. 00000 - "too many arguments for function"
    Without VIRTUAL COLUMNS it works fine.
    Where to start?
    Is it possible to add Virtual Columns after a table is created?
    CREATE TABLE TDS_XML OF XMLType
    XMLSCHEMA "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"
    ELEMENT "TDSTestData"
    VIRTUAL COLUMNS
    (
      TESTID AS (
        XMLCast(
          XMLQuery('declare default element namespace "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"; /TDSTestData/TestID'
                   PASSING OBJECT_VALUE RETURNING CONTENT) AS VARCHAR2(32)
        )
      )
    );

    SQL*Plus: Release 11.2.0.2.0 Production on Mon Apr 30 20:17:29 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for 64-bit Windows: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL>

    victor_shostak wrote:
    Figured, Virtual Columns work only for Binary XML.

    They are only supported, currently, for Binary XML.
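    A hedged sketch of the binary-XML variant the reply points to (the XQuery and schema URL are taken from the question; the SECUREFILE binary storage clause follows Oracle 11.2 syntax, and whether the registered schema is kept is a design choice):

    CREATE TABLE TDS_XML OF XMLType
      XMLTYPE STORE AS SECUREFILE BINARY XML
      VIRTUAL COLUMNS
      (
        TESTID AS (
          XMLCast(
            XMLQuery('declare default element namespace "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"; /TDSTestData/TestID'
                     PASSING OBJECT_VALUE RETURNING CONTENT) AS VARCHAR2(32)
          )
        )
      );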

  • Performance of large tables with ADT columns

    Hello,
    We are planning to build a large table (1 billion+ rows) with one of the columns being an Advanced Data Type (ADT) column. The ADT column will be based on a TYPE that has approximately 250 attributes.
    We are using Oracle 10g R2
    Can you please tell me the following:
    1. How will Oracle store the data in the ADT column?
    2. Will the entire ADT record fit in one block?
    3. Is it still possible to partition a table on an attribute that is part of the ADT?
    4. How will the performance be affected if Oracle does a full table scan of such a table?
    5. How much space will Oracle take, if any, for storing a NULL in an ADT?
    I think we can create indexes on the attribute of the ADT column. Please let me know if this is not true.
    Thanks for your help.

    I agree with D. Morgan that an object type with 250 attributes is doubtful.
    I don't like object tables (tables with "row objects") either.
    But your table is a relational table with an object column ("column object").
    C.J.Date in An introduction to Database Systems (2004, page 885) says:
    "... object/relational systems ... are, or should be, basically just relational systems
    that support the relational domain concept (i.e., types) properly - in other words, true relational systems,
    meaning in particular systems that allow users to define their own types."
    1. How will Oracle store the data in the ADT column?...
    For some answers see:
    “OR(DBMS) or R(DBMS), That is the Question”
    http://www.quest-pipelines.com/pipelines/plsql/tips.htm#OCTOBER
    and (of course):
    "Oracle® Database Application Developer's Guide - Object-Relational Features" 10g Release 2 (10.2)
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14260/adobjadv.htm#i1006903
    Regards,
    Zlatko
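    As a minimal illustration of a "column object" and an index on one of its attributes (type, table, and index names are invented, and the ADT here has 2 attributes rather than 250):

    CREATE TYPE measurement_t AS OBJECT (
      reading  NUMBER,
      taken_on DATE
    );
    /
    CREATE TABLE readings (
      id NUMBER PRIMARY KEY,
      m  measurement_t
    );
    -- A normal B-tree index can be built on a single attribute of the object column.
    CREATE INDEX readings_m_reading_ix ON readings (m.reading);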

  • Troubles when trying to transfer a table with hierarchical column to a tree

    In WD for ABAP, I run into several problems when trying to convert a table with a hierarchical column into an actual tree. In my UI, the tree is in the left pane and the detail for a node is in the right.
    1. With the hierarchical table, I can bind the dataSource of the table directly to a context node in the view which is mapped to a context node in the component controller which is a DDIC structure. But with a tree, is it possible to map the dataSource of the tree, or of a node in the tree, to a context node in the component controller?
    2. With the hierarchical table, when the lead selection changes, the detail info in the right pane changes automatically. But with a tree, it seems this has to be implemented via an action by myself.
    Can anyone give some suggestions to overcome these problems?

    Wei,
    "With tree table, the dataSource of the table is just bound to the context node ORG_UNIT"
    Hmmm... You said that this node resides in the component controller, not in the view controller. Or does it?
    Any recursive node that can be displayed with a tree table can be displayed with a tree as well. Period.
    1. Create the tree and bind its data source to the same node as the table data source.
    2. Create a tree node type and bind it to the same node again.
    3. Create one additional calculated attribute in the aforementioned node (type boolean) that inverts/negates the table column attribute "hasChildren" for the tree node attribute "isLeaf".
    4. That's all.
    VS

  • ASSM and table with LOB column

    I have a tablespace created with the ASSM option. I've heard that tables with LOB columns can't take advantage of ASSM.
    I made a test: I created a table T with a BLOB column in an ASSM tablespace. I succeeded!
    Now I have some questions:
    1. Since the segments of table T can't use ASSM to manage their blocks, what's the actual approach? Traditional freelists?
    2. Will there be any bad impact on the usage of the tablespace if table T becomes larger and larger and is used frequently?
    Thanks in advance.

    Can you explain what you mean by #1? I believe it is incorrect, and it does not make sense in my opinion. You can create a table with a LOB column in an ASSM tablespace from 9iR2 on, I believe (I could be wrong). LOBs don't follow the traditional PCTFREE/PCTUSED scenario. They allocate data in what are called "chunks" that you can define at the time you create the table. In fact, I think the new SECUREFILE LOBs actually require ASSM tablespaces.
    HTH!
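    A minimal sketch of the chunk and SECUREFILE points (tablespace and table names are invented for illustration):

    -- Basicfile LOB with an explicit chunk size, stored in an ASSM tablespace.
    CREATE TABLE t_basic (id NUMBER, doc BLOB)
      TABLESPACE assm_ts
      LOB (doc) STORE AS (TABLESPACE assm_ts CHUNK 8192);
    -- SECUREFILE LOBs (11g) require an ASSM tablespace for the LOB segment.
    CREATE TABLE t_secure (id NUMBER, doc BLOB)
      LOB (doc) STORE AS SECUREFILE (TABLESPACE assm_ts);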
