Small table with many histograms
In 10gR2 EE I took Oracle's defaults when executing exec DBMS_STATS.GATHER_SCHEMA_STATS('xxxx');
One small table has 748 rows with 12 columns. View DBA_TAB_HISTOGRAMS shows 352 rows for the table. The space used in the data dictionary for this table's statistics is probably almost as much as for the data in the table. It seems to me that I do not need this many histograms. How can I reduce their number?
Thank you,
Bill
Tubby,
Yes, I realize that I am asking a question about something that would usually be ignored. This database stores information for an application that was written in the days of rule-based optimization. For years the software vendor who wrote it advised us not to switch it to cost-based optimization. They have changed their position and now support the RBO-to-CBO switch.
Disk space is an issue for us so I am more aware than most of the potential for growth of data dictionary objects that will come with maintaining CBO statistics. Later this year we will buy new hardware that will include significantly more disk space, so the objects in SYSTEM and SYSAUX will be free to grow as needed. In 2008 we tried this same switch and saw significant growth of objects in those tablespaces. An issue within the application forced us to switch back to RBO then. Repairing the damage caused by that problem took a lot of man hours.
There are hundreds of small tables in this database similar to the one I described in my original note. I was concerned about the combined effect of maintaining stats for so many tables. I originally set method_opt to "FOR ALL INDEXED COLUMNS SIZE AUTO" out of concern for space. I later changed that and let it take the default of "FOR ALL COLUMNS SIZE AUTO". Yes, I am trying the change in a test environment.
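For reference, a hedged sketch of how the histogram count could be dialed back with method_opt (the schema and table names here are placeholders, not the real ones): gathering with SIZE 1 keeps basic column statistics but creates no histograms.

```sql
-- Sketch only; 'XXXX' and SMALL_TABLE are hypothetical names.
BEGIN
  -- Regather the whole schema without histograms:
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname    => 'XXXX',
    method_opt => 'FOR ALL COLUMNS SIZE 1');
  -- Or regather a single table the same way:
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'XXXX',
    tabname    => 'SMALL_TABLE',
    method_opt => 'FOR ALL COLUMNS SIZE 1');
END;
/
```

Note that later gathers with SIZE AUTO may re-create histograms wherever column usage suggests skew, so the method_opt choice has to be carried into the regular gathering job, not applied once.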
Perhaps I should have included more background information in my original post, but I thought that few people would be willing to read a lengthy posting.
Thanks,
Bill
Similar Messages
-
Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field I cannot insert a dataset anymore
(29 LONG fields).
Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
Thanks in advance
Michael
appendix:
- Create and Insert command and error message
- MaxDB version and its parameters
Create and Insert command and error message
CREATE TABLE "DBA"."AZ_Z_TEST02" (
"ZTB_ID" Integer NOT NULL,
"ZTB_NAMEOFREPORT" Char (400) ASCII DEFAULT '',
"ZTB_LONG_COMMENT" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_00" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_01" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_02" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_03" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_04" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_05" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_06" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_07" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_08" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_09" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_10" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_11" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_12" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_13" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_14" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_15" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_16" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_17" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_18" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_19" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_20" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_21" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_22" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_23" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_24" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_25" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_26" LONG ASCII DEFAULT '',
PRIMARY KEY ("ZTB_ID")
)
The insert command
INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
works fine. If I add the LONG field
"ZTB_LONG_TEXTBLOCK_27" LONG ASCII DEFAULT '',
the following error occurs:
Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
MaxDB version and its parameters
All db params given by
dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
are
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
SERVERDBFOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 10
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 64000
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 64000
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 22
MULTIO_BLOCK_CNT 4
DELAYLOGWRITER 0
LOG_IO_QUEUE 50
RESTARTTIME 600
MAXCPU 1
MAXUSERTASKS 50
TRANSRGNS 8
TABRGNS 8
OMSREGIONS 0
OMSRGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
ROWRGNS 8
MINSERVER_DESC 16
MAXSERVERTASKS 20
_MAXTRANS 288
MAXLOCKS 2880
LOCKSUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
USEASYNC_IO YES
IOPROCSPER_DEV 1
IOPROCSFOR_PRIO 1
USEIOPROCS_ONLY NO
IOPROCSSWITCH 2
LRU_FOR_SCAN NO
PAGESIZE 8192
PACKETSIZE 36864
MINREPLYSIZE 4096
MBLOCKDATA_SIZE 32768
MBLOCKQUAL_SIZE 16384
MBLOCKSTACK_SIZE 16384
MBLOCKSTRAT_SIZE 8192
WORKSTACKSIZE 16384
WORKDATASIZE 8192
CATCACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 1632
INIT_ALLOCATORSIZE 229376
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
TASKCLUSTER01 tw;al;ut;2000sv,100bup;10ev,10gc;
TASKCLUSTER02 ti,100dw;30000us;
TASKCLUSTER03 compress
MPRGN_QUEUE YES
MPRGN_DIRTY_READ NO
MPRGN_BUSY_WAIT NO
MPDISP_LOOPS 1
MPDISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
MPRGN_PRIO NO
MAXRGN_REQUEST 300
PRIOBASE_U2U 100
PRIOBASE_IOC 80
PRIOBASE_RAV 80
PRIOBASE_REX 40
PRIOBASE_COM 10
PRIOFACTOR 80
DELAYCOMMIT NO
SVP1_CONV_FLUSH NO
MAXGARBAGECOLL 0
MAXTASKSTACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
DWIO_AREA_SIZE 50
DWIO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
FBMLOW_IO_RATE 10
CACHE_SIZE 10000
DWLRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
DATACACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
IDXFILELIST_SIZE 2048
SERVERDESC_CACHE 73
SERVERCMD_CACHE 21
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
READAHEADBLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_C5
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 648
EXTERNAL_DUMP_REQUEST NO
AKDUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
UTILITYPROTFILE dbm.utl
UTILITY_PROTSIZE 100
BACKUPHISTFILE dbm.knl
BACKUPMED_DEF dbm.mdf
MAXMESSAGE_FILES 0
EVENTALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3607
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-13 13:47:17
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
DIAGSEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 21333
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION IMPROVED
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO
Lars Breddemann wrote:
> Hi Michael,
>
> this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
> Really.
>
> Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
> Anyhow, when I use
>
> insert into "AZ_Z_TEST02" values (87,'','','','','','','','','','','','','','','',''
> ,'','','','','','','','','','','','','','','','')
>
> it works fine.
It solves my problem, thanks a lot. I can hardly believe that this is all that is needed to work around the bug; that may be why I had not given it a try.
>
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what value the new tuple will have), you may want to change your code to this.
>
> Now to the other errors:
> - 28 Long values per row?
> What the heck is wrong with the data design here?
> Honestly, you can save data up to 2 GB in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
>
> - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing that you get 10000 of reports with the same name...
You are right, this table looks a bit strange. The story behind it: each Crystal Report in the application has a few text blocks which are the same for all of, e.g., the persons a letter is created for. In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. Thus, I put the texts of the text blocks into this "strange" db table (one row for each report, one field for each text block; the name of the report is given by "ztb_nameofreport"), and the application offers a menu through which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the sql select command of the crystal report very much if they were integrated into the this select command. Thus it is realized in another way: the texts are read before the crystal report is loaded, then the texts are "given" to the crystal report (by its parameters), and finally the crystal report is loaded.)
>
> - MaxDB 7.5 Build 26 ?? Where have you been the last years?
> Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
> With 7.6. I was not able to reproduce your issue at all.
The customer still has Win98 clients. MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
>
> - Are you really putting your data into the DBA schema? Don't do that, ever.
> DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
> Create a user/schema for your application data and put your tables into that.
>
> KR Lars
In the first MaxDB version I used, schemas were not available, and I have not changed it since. Is there an easy way to "move an existing table into a new schema"?
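For what it's worth, a hedged sketch of one way such a move could go (the target user/schema APPUSER is hypothetical, and the LONG columns may need special handling in your release):

```sql
-- Copy the table into the new schema, then drop the original.
-- If CREATE TABLE ... AS SELECT is not available in your MaxDB
-- release, create the table explicitly and use INSERT ... SELECT.
CREATE TABLE APPUSER.AZ_Z_TEST02 AS
SELECT * FROM DBA.AZ_Z_TEST02
-- Re-create the primary key and any indexes on the copy, then:
DROP TABLE DBA.AZ_Z_TEST02
```

Grants and any synonyms pointing at the old table would also have to be redone by hand.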
Michael -
Sparse table with many columns
Hi,
I have a table that contains around 800 columns. The table is sparse: many rows
contain up to 50 populated columns (the others are NULL).
My questions are:
1. Can a table that contains many columns cause a performance problem? Is there an alternative way to
hold a table with many columns efficiently?
2. Does a row that contains NULL values consume storage space?
Thanks
dyahav
[NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
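To illustrate the trailing-null point with a small hypothetical example (table and column names invented):

```sql
-- Densely populated columns first, mostly-NULL columns last.
CREATE TABLE sparse_t (
  id          NUMBER PRIMARY KEY,
  common_col  VARCHAR2(30),  -- almost always populated
  rare_col_1  NUMBER,        -- rarely populated
  rare_col_2  NUMBER         -- rarely populated
);
-- Here rare_col_1 and rare_col_2 are trailing nulls in the row,
-- so no bytes at all are stored for them:
INSERT INTO sparse_t (id, common_col) VALUES (1, 'abc');
-- Had the rare columns been defined first, each NULL falling
-- between populated columns would cost one length byte.
```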
HTH! -
No records in Azure databrowser viewing tables with many columns.
Yesterday I encountered an issue while browsing a table created in Azure.
I created a new database in Azure and in this database I created and populated several tables, containing one very big table.
The big table has 239 columns.
I succeeded in populating this table with our in-company table-data, means by a dtsx-package. No problem this far.
When I query the table from SQL Server Management Studio, I get correct results.
However, the databrowser on the Azure site itself does not show any data for this table. That's a little disappointing given that there are more than 76,000 records in this table. Refresh didn't help.
When I browse smaller tables with less data-columns, I do get data in this data-browser.
Is this a known issue or do you know a solution for this issue ?
Kind regards,
Fred Silven
AEB Amsterdam
The Netherlands.
Hello,
Based on your description, you want to edit data of a large table in the Management Portal of SQL Database, but it does not return rows in the GUI Design tab. Can you get the data when you select "TOP 200 rows"?
Since there are 239 columns and 76,000 rows in the table, the Portal may take quite a long time to load all the data in the GUI. Please try using a T-SQL statement to perform the select or update operation, and specify a condition in the WHERE clause to load only the needed data.
Regards,
Fanny Liu
Fanny Liu
TechNet Community Support -
Simultaneous hash joins of the same large table with many small ones?
Hello
I've got a typical data warehousing scenario where a HUGE_FACT table is to be joined with numerous very small lookup/dimension tables for data enrichment. Joins with these small lookup tables are mutually independent, which means that the result of any of these joins is not needed to perform another join.
So this is a typical scenario for a hash join: the lookup table is converted into a hash map in RAM, where it fits without drama because it is small, and a single pass over the HUGE_FACT suffices to get the results.
Problem is, as far as I can see in the query plan, these hash joins are not executed simultaneously but one after another, which forces Oracle to do the full scan of the HUGE_FACT (or any intermediate enriched form of it) as many times as there are joins.
Questions:
- is my interpretation correct that the mentioned joins are sequential, not simultaneous?
- if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
Please note that the parallel execution of a single join at a time is not the matter of the question.
Database version is 10.2.
Thank you very much in advance for any response.
user13176880 wrote:
Questions:
- is my interpretation correct that the mentioned joins are sequential, not simultaneous?
Correct. But why do you think this is an issue? Because of this:
which renders Oracle to do the full scan of the HUGE_FACT (or any intermediary enriched form of it) as many times as there are joins.
That is not (or should not be) true. Oracle does one pass of the big table, and then sequentially joins to each of the hash maps (of each of the smaller tables).
If you show us the execution plan, we can be sure of this.
- if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
Yes there is. But again, you should not need to resort to such a solution. What you can do is use subquery factoring (the WITH clause) in conjunction with the MATERIALIZE hint to first construct the cartesian join of all of the smaller (dimension) tables, and then join the big table to that. -
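A hedged sketch of that suggested rewrite (all table and column names are invented; the real query would use your actual fact and dimension keys):

```sql
-- Materialize the cartesian product of the small dimension tables,
-- then hash-join the big table to that single row source once.
WITH dims AS (
  SELECT /*+ MATERIALIZE */
         d1.key1, d1.attr1, d2.key2, d2.attr2
  FROM   dim1 d1
  CROSS JOIN dim2 d2
)
SELECT /*+ USE_HASH(f d) */
       f.*, d.attr1, d.attr2
FROM   huge_fact f
JOIN   dims d
  ON   f.key1 = d.key1
 AND   f.key2 = d.key2;
```

The cartesian product is only practical while the dimension tables stay genuinely small, since its row count is the product of theirs.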
Query performance on same table with many DML operations
Hi all,
I have one table with 100 rows of data. After that, I inserted, deleted, and modified data many times.
A select statement after the DML operations takes much more time than before the DML operations (there is not much difference in the data).
If I create the same table again with the same data and fire the same select statement, it takes less time.
My question is: is there any command, like compress or re-indexing, to improve the performance without creating the table again?
Thanks in advance,
Pal
Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
For example if you had a table like this where seq_no is populated by a sequence and indexed
seq_no NUMBER
processed_flag VARCHAR2(1)
trans_date DATE
and then did deletes like:
DELETE FROM t
WHERE processed_flag = 'Y' and
trans_date <= ADD_MONTHS(sysdate, -24);
that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
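If measurement shows an index really has ended up sparsely populated after such deletes, the usual maintenance options look like this (index and table names are hypothetical):

```sql
-- Merge adjacent, sparsely filled leaf blocks in place (cheap):
ALTER INDEX t_seq_no_idx COALESCE;
-- Or rebuild the index outright (ONLINE avoids blocking DML):
ALTER INDEX t_seq_no_idx REBUILD ONLINE;
-- Reorganizing the table itself marks its indexes UNUSABLE,
-- so they must be rebuilt afterwards:
ALTER TABLE t MOVE;
ALTER INDEX t_seq_no_idx REBUILD;
```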
HTH
John -
Select count(x) on a table with many column numbers?
Hi all,
I have a table of physical data with 850 (!!) columns and
~1 million rows.
The statement select count(cycle) from test_table is very, very slow.
WHY?
The same select count(cycle) from test_table is very fast on a table with e.g. 10 columns. WHY?
What does the number of columns have to do with the SELECT count(cycle) statement?
create table test_table(
cycle number primary key,
stamp date,
sensor_1 number,
sensor_2 number,
-- ... sensor_3 through sensor_848 ...
sensor_849 number,
sensor_850 number);
on W2K Oracle 9i Enterprise Edition Release 9.2.0.4.0 Production
Can anybody help me?
Many Thanks
Achim
Hi Lennert, hi all,
many thanks for all the answers. I'm not an Oracle expert.
Sorry for my English.
Hi Lennert,
you are right, what must i do to use the index in the
query? Can you give me a pointer of direction, please?
Many greetings
Achim
select count(*) from w4t.v_tfmc_3_blocktime;
COUNT(*) ==> Table with 3 columns (very fast)
306057
Execution plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_BLOCKTIME'
Statistics
0 recursive calls
0 db block gets
801 consistent gets
794 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
select count(*) from w4t.v_tfmc_3_value;
COUNT(*)==> Table with 850 columns (very slow)
64000
Execution plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_VALUE'
Statistics
1 recursive calls
1 db block gets
48410 consistent gets
38791 physical reads
13068 redo size
387 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed -
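One thing worth trying on the slow 850-column table: a COUNT on an indexed NOT NULL column (here the primary key cycle) can be answered from the index alone, so the optimizer does not have to drag every wide row off disk. A hedged sketch, assuming current statistics:

```sql
-- Make sure the optimizer has statistics (Oracle 9i):
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEST_TABLE');
-- Suggest an index fast full scan instead of a full table scan:
SELECT /*+ INDEX_FFS(t) */ COUNT(cycle)
FROM   test_table t;
```

With the CHOOSE optimizer shown in the plans above, missing statistics alone can be enough to force the full scan of the wide table.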
Advanced table with many columns
I have an advanced table with more or less 60 columns, and the table is wider than the header image. How can I align the table with the advanced table?
Thanks.
Edited by: Vieira on 3-Jun-2010 1.01
bedabrata wrote:
Hi,
I had a question. Can we fix the size of the columns in a table region? I was unable to do so using the width property of the items, so I tried the code below:
// Set width for all the columns
OATableBean tableBean = (OATableBean)webBean.findIndexedChildRecursive("HeaderDetailsRN") ;
OAMessageStyledTextBean ColumnStyle;
ColumnStyle = (OAMessageStyledTextBean)tableBean.findIndexedChildRecursive("SblOrderNumber");
CSSStyle cellUpperCase = new CSSStyle();
cellUpperCase.setProperty("width","150px");
if (ColumnStyle != null)
ColumnStyle.setInlineStyle(cellUpperCase);
Now the width is getting set for the column which has values (data), but it is not setting the width for columns which have null values or no data.
Could any of you guide me on how to set the width for every column of a table region.
Thanks
Bedabrata
Open another topic for this. ;-)
Edited by: Vieira on 3-Jun-2010 9.27
Query Builder - How to create a link between tables with many fields?
I have many fields in my tables. When the query builder loads the tables, they are expanded to accommodate all the fields. Suppose I want to link Table A's Customer ID (the first field in Table A) with Table B's Customer ID (the last field in Table B). How can I do that if the last field in Table B is not visible on the screen?
Currently, I create a link from Table A's customer to a random field in Table B, then edit the link to create the proper condition. Is there a more efficient way to do this?
Thanks.
Edited by: woro2006 on Apr 19, 2011 9:40 AM
Hi woro2006 -
Easiest way is to grab Table A's title bar & drag Table A down the page until the columns you want to link are visible.
FYI, there is an outstanding bug
Bug 10215339: 30EA1: MISSING THE 2.1 RIGHT CLICK OPTIONS ON DATA FIELDS TO CREATE A LINK
to add a context menu on the field for this. That is, Link {context field} to > {other data sources} > {fields from that source}
It is being considered for 3.1, but I have no idea where it will end up in the priority queue.
Brian Jeffries
SQL Developer Team
P.S.: Argh, unfortunately I just tried it and the diagram does not auto-scroll while you drag, so there is some guesswork/repositioning of the view involved.
Logged Bug 12380154 - QUERY BUILDER DIAGRAM DOES NOT AUTO SCROLL WHEN DRAGGING TABLE -
How to create query for tables with many to many relationship
In MySQL I'm unable to update the table using a select clause... is there any way to update a table which is in a many-to-many relationship?
Ex:1st table student(student_id int primary key auto_increment, student_name varchar(30));
2nd table contact (contact_id int primary key auto_increment, c_email varchar(40));
3rd table student_contact(student_id int references student, contact_id int references contact);
I would like to auto-insert both columns in student_contact while the student and contact tables are being updated.
This is a JSP/JSTL forum, not a SQL forum. If you're using MySQL, better use their own forums at dev.mysql.com.
I'll give some hints anyway: learn about SQL JOIN. In general there is good SQL documentation available at the website of the RDBMS manfacturer. Go check it out. There is also a nice SQL tutorial at w3schools.com. Good luck. -
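Beyond the JOIN hint, the insert side the question asks about can be sketched in MySQL with LAST_INSERT_ID() (a sketch against the three tables shown in the question; the literal values are invented):

```sql
-- Insert the parent rows and capture their auto-generated keys:
INSERT INTO student (student_name) VALUES ('Alice');
SET @sid = LAST_INSERT_ID();
INSERT INTO contact (c_email) VALUES ('alice@example.com');
SET @cid = LAST_INSERT_ID();
-- Link them in the junction table:
INSERT INTO student_contact (student_id, contact_id) VALUES (@sid, @cid);
```

LAST_INSERT_ID() is per-connection, so the pattern is safe under concurrent inserts as long as each pair runs on one connection.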
Use a common table with many to many relationship
Hello,
I have two SQL tables: Job and Employee. I need to compare Job Languages Proficiencies and Employee Languages Proficiencies. A Language Proficiency is composed by a Language and a Language Level.
create table dbo.EmployeeLanguageProficiency (
EmployeeId int not null,
LanguageProficiencyId int not null,
constraint PK_ELP primary key clustered (EmployeeId, LanguageProficiencyId)
);
create table dbo.JobLanguageProficiency (
JobId int not null,
LanguageProficiencyId int not null,
constraint PK_JLP primary key clustered (JobId, LanguageProficiencyId)
);
create table dbo.LanguageProficiency (
Id int identity not null
constraint PK_LanguageProficiency_Id primary key clustered (Id),
LanguageCode nvarchar (4) not null,
LanguageLevelId int not null,
constraint UQ_LP unique (LanguageCode, LanguageLevelId)
);
create table dbo.LanguageLevel (
Id int identity not null
constraint PK_LanguageLevel_Id primary key clustered (Id),
Name nvarchar (80) not null
constraint UQ_LanguageLevel_Name unique (Name)
);
create table dbo.[Language] (
Code nvarchar (4) not null
constraint PK_Language_Code primary key clustered (Code),
Name nvarchar (80) not null
);
My question is about LanguageProficiency table. I added an Id has PK but I am not sure this is the best option.
What do you think about this scheme?
Thank You,
Miguel
That should be fine, as you already have a unique nonclustered index on the other two columns. You can also define it this way:
create table dbo.LanguageProficiency (
Id int identity not null,
LanguageCode nvarchar (4) not null,
LanguageLevelId int not null,
constraint PK_LanguageProficiency_Id primary key nonclustered (LanguageCode, LanguageLevelId)
);
Create CLUSTERED index CL_ID on dbo.LanguageProficiency(ID);
What you need is foreign key constraints between these tables. I see that you have the columns but are not enforcing foreign key constraints; you need those.
Also, you have composite primary keys; unless you can guarantee the values are ever-increasing, a composite PK is not useful as the clustered key, since it leads to page splits.
Hope it Helps!! -
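The foreign key constraints recommended above could look like this for the tables in the question (a sketch; the constraint names are invented):

```sql
ALTER TABLE dbo.LanguageProficiency ADD CONSTRAINT FK_LP_Language
  FOREIGN KEY (LanguageCode) REFERENCES dbo.[Language] (Code);
ALTER TABLE dbo.LanguageProficiency ADD CONSTRAINT FK_LP_Level
  FOREIGN KEY (LanguageLevelId) REFERENCES dbo.LanguageLevel (Id);
ALTER TABLE dbo.EmployeeLanguageProficiency ADD CONSTRAINT FK_ELP_LP
  FOREIGN KEY (LanguageProficiencyId) REFERENCES dbo.LanguageProficiency (Id);
ALTER TABLE dbo.JobLanguageProficiency ADD CONSTRAINT FK_JLP_LP
  FOREIGN KEY (LanguageProficiencyId) REFERENCES dbo.LanguageProficiency (Id);
```

(The Employee and Job sides would also reference their own parent tables once those are defined.)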
Right way to preserve all parent table entries in a join with many tables!!
This problem is quite interesting to me. I have asked this question of others, but nobody has been able to provide me with a proper answer.
The problem is: how do I join a huge parent table with many child tables (more than 5), preserving all of the parent table's entries? Let's say there is the parent table parentTable and three child tables childTable1, childTable2, and childTable3. In order to get the data after joining these tables, the query that I have been using was:
select parent.field1, parent.field2, parent.field3, child1.field4, child1.field5, child2.field6, child3.field7 from ParentTable parent, childTable1 child1, childTable2 child2, childTable3 child3 where parent.fielda = child1.fieldb and parent.fieldc = child2.fieldd and parent.fielde = child3.fieldf.
Although the tables are huge (more than 100,000 entries), this query is very fast; however, those parent table entries which do not have child entries are lost. I know that I can left join the parent table with one child table, then with the next child table, and so on. Isn't there a simple solution for this common problem?
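For the record, the outer-join version mentioned in the question is less painful in ANSI syntax than it sounds; a sketch using the same (hypothetical) names as the query above:

```sql
SELECT p.field1, p.field2, p.field3,
       c1.field4, c1.field5, c2.field6, c3.field7
FROM   ParentTable  p
LEFT JOIN childTable1 c1 ON p.fielda = c1.fieldb
LEFT JOIN childTable2 c2 ON p.fieldc = c2.fieldd
LEFT JOIN childTable3 c3 ON p.fielde = c3.fieldf;
-- Every ParentTable row survives; missing child columns come back NULL.
```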
Please provide suggestions...
Hello Lakshmi,
Although I do not know exactly how to achieve what you want, in my experience I have seen DBAs/ABAPers run queries/scripts using the COUNT function to give the actual number of line items per table for all your 100-odd tables.
Rgds
CONMJI -
I can no longer tab between cells in a table with pages 5.0.
I use pages everyday for filling in my work booking sheets. I need to have them synching to the cloud, so I can access the client information while on the road.
My booking sheet is basically a large table with many cells for name, address, phone number, etc. In pages 4.3 I could enter some information into the cell, hit the tab button, and it would go straight to the next cell in the table. This was a quick and easy way for me to fill out my forms on the computer.
Well, since the update, the tab button only moves the cursor over in the same cell, as opposed to the next cell. It is VERY rare to need to tab in a cell, as opposed to going to the next cell.
I did figure out that I could use Option+Tab to move to the next cell, but it is very hard to change my habit of tabbing after all this time doing it the other way.
Am I just missing a preference to tab between cells instead of inside a cell?
Thanks
Dave
Option-tab makes sense and can be done with the left hand's thumb/forefinger.
Using tab was always arse-backwards as it defeated the use of tab within the table cell.
Best to use both for what they always do:
Option tab or an arrow key (singular) to jump cells
tab to jump to the next tab in the text wherever it is within or without cells.
The contrary precedent was set by Microsoft and was a bad idea. In good UI design you do not have shortcuts reverse roles such that you need to pay excessive attention to the context.
Peter -
Problem creating Allocation Table with Reference to a PO
Dear Folks,
I am having problems creating an allocation table with reference to a PO in T-code WA01.
I read the SAP help that some prerequisites need to exist:
==> You can only reference order items flagged as being relevant to a stock split (the Allocation table relevant indicator in the additional item data).
Can anyone advise me where to find this stock split indicator?
Also, can anyone advise me how to reuse an allocation table? For example, I previously created an allocation table with many articles and various allocation rules, and I have already generated follow-on documents for this table.
Say after 2 weeks I have similar requirements and can make use of the same table, only with minor adjustments to the quantities. How do I create a new allocation table using the existing allocation table's data?
Thanks and Regards
Junwen
Any idea please?
thanks -
How to create a ms-access table with java?
hi all
I've my application and I want to add the capability to create an Access file (.mdb) and then, via SQL, create a table with many columns of many types, and with a primary key too.
I know that it's also an SQL problem, but I'm searching everywhere for it.
Thanks for your reply
sandro
Hi,
It would have been much better if you had specified your development environment- the database driver class depends on which environment you are working with. Forexample, if you are working with vj++6 you can make use of the com.ms.jdbc.odbc.JdbcOdbcDriver class. If you are using IBM's Visual Age for Java3.5 you can find the sun.jdbc.odbc.JdbcOdbcDriver. Oneway or another you should have the .class for jdbc-odbc(usu they have the form: xxx:jdbc:odbc:JdbcOdbcDriver). Check all the packages that are available in your development environment that have the form xxx.jdbc.odbc.JdbcOdbcDriver.(And not clear what you know and what you don't - so I start from the very elementary steps)
Anyways, What you have to know is that it is not possible to create databases (e.g. .mdb) directly from a Java application(as far as I know). What is possible is to create new tables inside an already created database and process queries based on those tables.In short what I am saying is : you need to have a DSN before writing applications that create tables.
Follow these steps to create a DSN (for Windows 2000):
1)Go to the control panel and click the 'administrative tools'
2)In the 'administrative tools' click to open ODBC(data sources)
3)Click the 'add' button and choose 'Microsoft Access driver'
4)In the DSN text field enter the dsn (e.g., Test)
5)If you want to create a table in an already existing database, click 'select' and choose one. However, if you want to create a new database, click 'create' and enter a name for your database (e.g. ExampleDB.mdb). If this succeeds, it will show a successful-operation message.
6)Click advanced and enter the login name(e.g. Albert) and password(e.g. mxvdk) for the database
7)Click 'ok's to finish the operation.
After the above steps what you will have is an empty database (with no tables) named "ExampleDB.mdb" in the directory you specified.
Now, you can write a java application that creates a table inside the database "ExampleDB.mdb".
Check this out:
import java.sql.*;

public class Class1 {

    private Connection con;
    private Statement stmt;

    public Class1() {
        String userName = "Albert";
        String password = "mxvdk";
        String dsn = "Test";
        String databaseURL = "jdbc:odbc:" + dsn;
        // This is just an SQL CREATE statement - it has nothing to do with Java
        String sqlCreateStmt = "CREATE TABLE StudentTable" +
                "(StudentID varchar(32) PRIMARY KEY," +
                "name varchar(30)," +
                "age int)";
        try {
            Class.forName("com.ms.jdbc.odbc.JdbcOdbcDriver");
        } catch (ClassNotFoundException eCNF) {
            System.err.println("ClassNotFoundException:");
            System.err.println(eCNF.getMessage());
            return;
        }
        try {
            con = DriverManager.getConnection(databaseURL, userName, password);
            stmt = con.createStatement();
            stmt.executeUpdate(sqlCreateStmt);
        } catch (SQLException e) {
            System.err.println("SQLException:");
            e.printStackTrace();
            return;
        }
        // insert one sample row
        insertSampleData();
    }

    private void insertSampleData() {
        String sampleStudentID = "scr-342-tch";
        String sampleStudentName = "Tom James";
        int sampleStudentAge = 24;
        // This is just an SQL INSERT statement - it has nothing to do with Java
        String sqlUpdateStmt = "INSERT INTO StudentTable VALUES ('" +
                sampleStudentID + "','" +
                sampleStudentName + "'," +
                sampleStudentAge + ")";
        try {
            stmt.executeUpdate(sqlUpdateStmt);
        } catch (SQLException e) {
            System.err.println("SQLException:");
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        new Class1();
    }
}
// This program runs fine in VJ++ 6.0 (console application mode) and also in IBM's
// Visual Age for Java 3.5 (with sun.jdbc.odbc.JdbcOdbcDriver as the database driver).
// If you are developing in another environment, what you need to change is the
// "com.ms.jdbc.odbc.JdbcOdbcDriver" in the Class.forName(...) statement.
// If you run this program more than once, it will issue a 'table already exists' message.
If you still experience the problem, please be specific and repost!
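One caveat about the concatenated INSERT statement in the reply above: a value containing a single quote (e.g. a name like O'Brien) will break the generated SQL. The usual fix in JDBC is a PreparedStatement, but if you must build the string by hand, quotes in literals need to be doubled. Below is a minimal sketch of such an escaping helper; the class and method names are hypothetical, not part of the original answer:

```java
public class SqlEscape {
    // Wrap a value in single quotes, doubling any embedded single quotes
    // so the literal survives plain string concatenation into SQL.
    static String q(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String sql = "INSERT INTO StudentTable VALUES (" +
                q("scr-342-tch") + "," +
                q("Tom O'Brien") + ",24)";
        // prints: INSERT INTO StudentTable VALUES ('scr-342-tch','Tom O''Brien',24)
        System.out.println(sql);
    }
}
```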