Unable To Select From SQL Server table with more than 42 columns
I have set up a link between a Microsoft SQL Server 2003 database and an Oracle 9i database using Heterogeneous Services (HSODBC). It's working well with most of the schema I'm selecting from, except for 3 tables, and I don't know why. The common denominator is that they all have at least 42 columns: two have 42 columns, one has 56, and the other one 66. Two of the tables are empty, one has almost 100k records, and one has 170k records, so I don't think the size of the table matters.
Is there a limitation on the number of table columns you can select from through a dblink? Even the following statement errors out:
select 1
from "Table_With_42_Cols"@sqlserver_db
The error message I get is:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message [Generic Connectivity Using ODBC]
ORA-02063: preceding 2 lines from sqlserver_db
Any assistance would be greatly appreciated. Thanks!
Doing name-value pairs like that is not a very efficient or space-friendly design.
Another method to consider is splitting those 1500 parameters into groups of similar parameters, and then having a table per group.
Another option would be to use "vertical table partitioning" (as opposed to the more standard horizontal partitioning provided by the Oracle partitioning option) - this can be achieved (kind of) in Oracle using clusters, as sketched below.
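A minimal sketch of the cluster idea, with hypothetical table and column names (the real split would group the parameters by how they are accessed):
CREATE CLUSTER device_cluster (device_id NUMBER(10))
SIZE 8192;
CREATE INDEX device_cluster_idx ON CLUSTER device_cluster;
-- The frequently used columns go in one table...
CREATE TABLE device_core (
device_id NUMBER(10) PRIMARY KEY,
device_name VARCHAR2(100)
) CLUSTER device_cluster (device_id);
-- ...and each group of related parameters goes in a companion table.
-- Rows with the same device_id are stored in the same cluster blocks,
-- so stitching the pieces back together with a join stays cheap.
CREATE TABLE device_params_grp1 (
device_id NUMBER(10) NOT NULL REFERENCES device_core,
param_1 VARCHAR2(50),
param_2 VARCHAR2(50)
) CLUSTER device_cluster (device_id);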
Sooner or later this name-value design is going to bite you hard. It has 1500 rows where there should be only one. It is not scalable and, as you're discovering, it is unnatural to use. I would rather change that table and design sooner than later.
Similar Messages
-
Migrating from SQL Server tables with column names starting with an integer
hi,
I'm trying to migrate a database from SQL Server, but there are a lot of tables with column names starting with an integer, e.g. 8420_SubsStatusPolicy.
I want to do an offline migration, so when I create the scripts these columns are created with the same name.
Can we create rules so that, when a column like this is migrated, a character is appended in front of it?
When I use the Copy to Oracle option, it renames it by default to A8420_SubsStatusPolicy.
Edited by: user8999602 on Apr 20, 2012 1:05 PM
Hi,
Oracle doesn't allow object names to start with an integer. I'll check to see what happens during a migration about changing names as I haven't come across this before.
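As an illustration of the restriction itself (a quick sketch, independent of the migration tooling): an unquoted name starting with a digit is rejected, while the same name is accepted as a quoted identifier, at the cost of having to quote it in every statement that references it:
CREATE TABLE 8420_SubsStatusPolicy (id NUMBER);
-- fails: unquoted identifiers must start with a letter
CREATE TABLE "8420_SubsStatusPolicy" (id NUMBER);
SELECT COUNT(*) FROM "8420_SubsStatusPolicy";
-- works, but the quoted, case-sensitive name must be used everywhere
So a rule that prepends a character (as Copy to Oracle does with the A prefix) is usually the more practical route.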
Regards,
Mike -
Migrating CMP EJB mapped to SQL Server table with identity column from WL
Hi,
I want to migrate an application from WebLogic to SAP NetWeaver 2004s (7.0). We had successfully migrated this application to an earlier version of NetWeaver. I have a number of CMP EJBs which are mapped to SQL Server tables with the PK as identity columns (autogenerated by SQL Server). I am having difficulty mapping the same in persistent.xml. This scenario works perfectly well in WebLogic and worked by using ejb-pk in an earlier version of NetWeaver.
Please let me know how to proceed.
-Sekhar
I suspect it is the security as specified in the message, e.g. your DBA set the ID columns so no user can override values in them.
And I suggest you first put the data into a staging table, then push it to the destination; this does not resolve the issue, but it ensures better processing.
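A rough T-SQL sketch of the staging idea, with hypothetical names, combined with SET IDENTITY_INSERT in case the original key values have to be preserved in the destination:
-- Load the incoming rows into a staging table first.
SELECT CustomerId, CustomerName
INTO dbo.Customer_Stage
FROM SourceDb.dbo.Customer;
-- Then push them to the destination, keeping the original identity values.
SET IDENTITY_INSERT dbo.Customer ON;
INSERT INTO dbo.Customer (CustomerId, CustomerName)
SELECT CustomerId, CustomerName
FROM dbo.Customer_Stage;
SET IDENTITY_INSERT dbo.Customer OFF;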
Arthur
MyBlog
Twitter -
How to convert from SQL Server table to Flat file (txt file)
I need to ask how to convert from a SQL Server table to a flat file (txt file).
Hi
1. Import/Export Wizard
2. Bcp utility
3. SSIS
1. Import/Export Wizard
The first and most manual technique is the Import/Export Wizard. This is great for ad-hoc, one-off import tasks.
In SSMS right click the database you want to import into. Scroll to Tasks and select Import Data…
For the data source we want our zips.txt file. Browse for it and select it. You should notice the wizard tries to fill in the blanks for you. One key thing here with this file I picked is that the values have " qualifiers, so we need to make sure we add " into the text qualifier field. The wizard will not do this for you.
Go through the remaining pages to view everything. No further changes should be needed, though.
Hit Next after checking the pages out and select your destination, which in our case will be DBA.dbo.zips.
Following the destination step, go into the Edit Mappings section to ensure the types and counts look good.
Hit Next and then Finish. Once completed, you will see the count of rows transferred and the success or failure rate.
Import wizard completed and you have the data!
bcp utility
Method two is bcp with a format file http://msdn.microsoft.com/en-us/library/ms162802.aspx
This is probably going to win for speed on most occasions but is limited to the formatting of the file being imported. For this file it actually works well with a small format file to show the contents and mappings to SQL Server.
To create a format file, all we really need is the type and the count of columns for the most basic files. In our case the qualifiers make it a bit difficult, but there is a trick to ignoring them: throw an extra field into the format file that references the qualifier but is ignored in the import process.
Given that, our format file in this case would appear like this:
9.0
9
1 SQLCHAR 0 0 "\"" 0 dummy1 ""
2 SQLCHAR 0 50 "\",\"" 1 Field1 ""
3 SQLCHAR 0 50 "\",\"" 2 Field2 ""
4 SQLCHAR 0 50 "\",\"" 3 Field3 ""
5 SQLCHAR 0 50 "\"," 4 Field4 ""
6 SQLCHAR 0 50 "," 5 Field5 ""
7 SQLCHAR 0 50 "," 6 Field6 ""
8 SQLCHAR 0 50 "," 7 Field7 ""
9 SQLCHAR 0 50 "\n" 8 Field8 ""
The bcp call would be as follows
C:\Program Files\Microsoft SQL Server\90\Tools\Binn>bcp DBA..zips in "C:\zips.txt" -f "c:\zip_format_file.txt" -S LKFW0133 -T
Given a successful run you should see this in command prompt after executing the statement
Starting copy...
1000 rows sent to SQL Server. Total sent: 1000
1000 rows sent to SQL Server. Total sent: 2000
1000 rows sent to SQL Server. Total sent: 3000
1000 rows sent to SQL Server. Total sent: 4000
1000 rows sent to SQL Server. Total sent: 5000
1000 rows sent to SQL Server. Total sent: 6000
1000 rows sent to SQL Server. Total sent: 7000
1000 rows sent to SQL Server. Total sent: 8000
1000 rows sent to SQL Server. Total sent: 9000
1000 rows sent to SQL Server. Total sent: 10000
1000 rows sent to SQL Server. Total sent: 11000
1000 rows sent to SQL Server. Total sent: 12000
1000 rows sent to SQL Server. Total sent: 13000
1000 rows sent to SQL Server. Total sent: 14000
1000 rows sent to SQL Server. Total sent: 15000
1000 rows sent to SQL Server. Total sent: 16000
1000 rows sent to SQL Server. Total sent: 17000
1000 rows sent to SQL Server. Total sent: 18000
1000 rows sent to SQL Server. Total sent: 19000
1000 rows sent to SQL Server. Total sent: 20000
1000 rows sent to SQL Server. Total sent: 21000
1000 rows sent to SQL Server. Total sent: 22000
1000 rows sent to SQL Server. Total sent: 23000
1000 rows sent to SQL Server. Total sent: 24000
1000 rows sent to SQL Server. Total sent: 25000
1000 rows sent to SQL Server. Total sent: 26000
1000 rows sent to SQL Server. Total sent: 27000
1000 rows sent to SQL Server. Total sent: 28000
1000 rows sent to SQL Server. Total sent: 29000
bcp import completed!
BULK INSERT
Next, we have BULK INSERT given the same format file from bcp
CREATE TABLE zips (
Col1 nvarchar(50),
Col2 nvarchar(50),
Col3 nvarchar(50),
Col4 nvarchar(50),
Col5 nvarchar(50),
Col6 nvarchar(50),
Col7 nvarchar(50),
Col8 nvarchar(50)
);
GO
INSERT INTO zips
SELECT *
FROM OPENROWSET(BULK 'C:\Documents and Settings\tkrueger\My Documents\blog\cenzus\zipcodes\zips.txt',
FORMATFILE='C:\Documents and Settings\tkrueger\My Documents\blog\zip_format_file.txt'
) as t1 ;
GO
That was simple enough given the work on the format file that we already did. Bulk insert isn’t as fast as bcp but gives you some freedom from within TSQL and SSMS to add functionality to the import.
SSIS
Next is my favorite playground: SSIS.
There are many ways in SSIS to get data from point A to point B. I'll show you the Data Flow task and the SSIS version of BULK INSERT.
First create a new Integration Services project.
Create a new flat file connection by right-clicking the Connection Managers area. This will be used in both methods.
Bulk insert
You can use a format file here as well, which makes it easy to move between methods; this is essentially calling the same process with a format file. Drag over a Bulk Insert task and double-click it to go into the editor.
Fill in the information, starting with the connection. This will populate much as the wizard did.
Example of format file usage
Or specify your own details
Execute this and again, we have some data
Data Flow method
Bring over a data flow task and double click it to go into the data flow tab.
Bring over a Flat File Source and a SQL Server Destination. Edit the Flat File Source to use the flat file connection manager we already created, then connect the two.
Double-click the SQL Server Destination to open the editor. Enter the connection manager information and select the table to import into.
Go into the mappings and connect the dots, so to speak.
A typical type conversion issue is Unicode to non-Unicode. We fix this with a Data Conversion task or an explicit conversion in the editor. Data Conversion tasks are usually the route I take (an alternative that avoids the conversion entirely is sketched below). Drag over a Data Conversion task and place it between the Flat File Source and the SQL Server Destination.
New look in the mappings
And after execution…
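A side note on the Unicode issue: in this particular example you could also avoid the conversion entirely by defining the destination table with varchar instead of nvarchar, so the flat file's non-Unicode strings load without any conversion (a sketch; only sensible if the data really is plain ANSI text):
CREATE TABLE zips_varchar (
Col1 varchar(50),
Col2 varchar(50),
Col3 varchar(50),
Col4 varchar(50),
Col5 varchar(50),
Col6 varchar(50),
Col7 varchar(50),
Col8 varchar(50)
);
GO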
SqlBulkCopy Method
Since we're in an SSIS package, we can use that awesome Script Task to show SqlBulkCopy. Not only is it fast, it is also handy for those really "unique" file formats we receive so often.
Bring over a Script Task into the control flow.
Double-click the task and go to the Script page. Click Design Script to open up the code behind.
Ref.
Ahsan Kabir http://www.aktechforum.blogspot.com/ -
Select from SQL Server Activity
Hi,
I'm using the Select from SQL Server activity in CPO to look up some values in a table. Straightforward SQL queries like select * from Table work 100%, but when I apply a WHERE clause the activity fails to return results, even though the same query returns results when run in SQL Server Management Studio.
The issue seems to arise when using brackets in an OR statement.
The SQL Statement is:
SELECT AppName FROM METADaaS WHERE
NCCX NOT LIKE '%18421%'
AND RTYPE='A'
AND BUNDLEDWITH LIKE '%GOL Back Office%'
AND (NCC = '0' OR NCC LIKE '%18421%')
This returns no results in CPO but in SQL Studio returns the results as expected.
If I remove the "(NCC = '0' OR NCC LIKE '%18421%')" I get results in CPO.
thanks
Any SQL statement should work. I would suggest you open a TAC case.
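If a workaround is needed in the meantime, the bracketed OR can be avoided by splitting the statement into two branches (same table and columns as above; the two branches are mutually exclusive, so UNION ALL returns the same rows):
SELECT AppName FROM METADaaS
WHERE NCCX NOT LIKE '%18421%'
AND RTYPE='A'
AND BUNDLEDWITH LIKE '%GOL Back Office%'
AND NCC = '0'
UNION ALL
SELECT AppName FROM METADaaS
WHERE NCCX NOT LIKE '%18421%'
AND RTYPE='A'
AND BUNDLEDWITH LIKE '%GOL Back Office%'
AND NCC LIKE '%18421%'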
-
Table with more than 35 columns
Hello All.
How can one work with a table with more than 35 columns
on JDev 9.0.3.3?
My other question is related to this.
Setting Entities's Beans properties from a Session Bean
bought up the error, but when setting from inside the EJB,
the bug stays clear.
Is this right?
Thank you
Thank you all for the reply.
Here's my problem:
I have an AS400/DB2 database, a huge and old one.
There are many COBOL programs used to communicate with this DB.
My project is to transfer the database with the same structure and the same contents to a Linux/ORACLE System.
I will not remake the COBOL programs. I will use the existing ones on the Linux system.
So the tables of the new DB should be the same as the old ones.
That's why I cannot make a relational DB. I have to make an exact migration.
Unfortunately I have some tables with more than 5000 COLUMNS.
Now my question is:
Can I modify the parameters of the Oracle DB to make it accept tables and views with more than 1000 columns? If not, is it possible to make a PL/SQL function that simulates a table, inserting/updating/selecting data from many other small tables (<1000 columns each)? In other words, a method that makes the Oracle DB act as if it has a table with a huge number of columns.
I know it's crazy but any idea please. -
Row chaining in table with more than 255 columns
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more then 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav
user10952094 wrote:
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more then 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav
Yesterday, I stated this on the forum "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
"Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
And this paraphrase from "Practical Oracle 8i":
V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
Related information may also be found here:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
"When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
http://download.oracle.com/docs/html/B14340_01/data.htm
"For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
Why not a test case?
Create a test table named T4 with 1000 columns.
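(The full CREATE TABLE statement is not reproduced here; a minimal sketch of one way to generate such a 1000-column test table, assuming small VARCHAR2 columns, is a short piece of dynamic SQL:)
DECLARE
  v_sql VARCHAR2(32767) := 'CREATE TABLE T4 (COL1 VARCHAR2(10)';
BEGIN
  FOR i IN 2 .. 1000 LOOP
    v_sql := v_sql || ', COL' || i || ' VARCHAR2(10)';
  END LOOP;
  v_sql := v_sql || ')';
  EXECUTE IMMEDIATE v_sql;
END;
/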
With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
SPOOL C:\TESTME.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
COL255,
COL256,
COL257)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=1000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What are the results of the above?
Before the insert:
NAME VALUE
table fetch continue 166
After the insert:
NAME VALUE
table fetch continue 166
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 332
Another test, this time with an average row length of about 12 bytes:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME2.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL256,
COL257,
COL999)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
Before the insert:
NAME VALUE
table fetch continue 332
After the insert:
NAME VALUE
table fetch continue 332
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695
The final test only inserts data into the first 4 columns:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME3.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
COL4)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What should the 'table fetch continued row' show?
Before the insert:
NAME VALUE
table fetch continue 33695
After the insert:
NAME VALUE
table fetch continue 33695
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695
My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
"Tables with more than 255 columns will always have chained rows +(row pieces)+ if a column beyond column 255 is used, but the 'table fetch continued row' statistic +may+ only increase in value if the remaining row pieces are found in a different block."
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case. -
Compressed tables with more than 255 columns
hi,
Would anyone have a SQL query to find compressed tables with more than 255 columns?
Thank you
Jonu
SELECT table_name,
Count(column_name)
FROM user_tab_columns utc
WHERE utc.table_name IN (SELECT table_name
FROM user_tables
WHERE compression = 'ENABLED')
GROUP BY table_name
HAVING Count(column_name) > 255
Error while running spatial queries on a table with more than one geometry.
Hello,
I'm using GeoServer with an Oracle Spatial database, and this is the second time I have run into problems because we use tables with more than one geometry.
When GeoServer renders objects with more than one geometry on the map, it creates a query that asks for objects where either of the two geometries interacts with the query window. This type of query always fails with an "End of TNS data channel" error.
We are running Oracle Standard 11.1.0.7.0.
Here is a small script to demonstrate the error. Could anyone confirm that they also have this type of error? Or suggest a fix?
What this script does:
1. Create table object1 with two geometry columns, geom1, geom2.
2. Create metadata (projected coordinate system).
3. Insert a row.
4. Create spatial indexes on both columns.
5. Run a SDO_RELATE query on one column. Everything is fine.
6. Run a SDO_RELATE query on both columns. ERROR: "End of TNS data channel"
7. Clean.
CREATE TABLE object1 (
id NUMBER PRIMARY KEY,
geom1 SDO_GEOMETRY,
geom2 SDO_GEOMETRY
);
INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
VALUES (
'OBJECT1',
'GEOM1',
2180,
SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
)
);
INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
VALUES (
'OBJECT1',
'GEOM2',
2180,
SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
)
);
INSERT INTO object1 VALUES(1, SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(550000, 450000, NULL), NULL, NULL));
CREATE INDEX object1_geom1_sidx ON object1(geom1) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
CREATE INDEX object1_geom2_sidx ON object1(geom2) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
SELECT *
FROM object1
WHERE
SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
SELECT *
FROM object1
WHERE
SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE' OR
SDO_RELATE("GEOM2", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
DELETE FROM user_sdo_geom_metadata WHERE table_name = 'OBJECT1';
DROP INDEX object1_geom1_sidx;
DROP INDEX object1_geom2_sidx;
DROP TABLE object1;
Thanks for help.
This error appears in GeoServer and SQL*Plus.
I have set up a completely new database installation to test this error and everything works fine. I tried it again on the previous database but I still get the same error. I also tried to restart the database, but with no luck; the error is still there. I guess something is wrong with the database installation.
Does anyone know what could cause an error like this "End of TNS data channel"? -
Tables with more than one cell with a high number of rowspans (e.g. 619). Such a cell cannot be printed on one page and must be split, but I don't know how InDesign can do this.
set the wake-on lan on the main computer
The laptop's too far away from the router to be connected by ethernet. It's all wifi.
No separate server app on the laptop, it's all samba
The files are on a windows laptop and a hard drive hooked up to the windows laptop. The windows share server is pants, so I'd need some sort of third party server running. Maybe you weren't suggesting to use Samba to connect to the windows share though?
I'm glad that you've all understood my ramblings and taken an interest, thanks. The way I see it, I can't be the only netbook user these days looking for this kind of convenience, and I certainly won't be once Chrome and Moblin hit the market.
Last edited by saft (2010-03-18 20:38:08) -
I want to enable a PK on a table with more than 1680 subpartitions
Hi All
I want to enable a PK on a table with more than 1680 subpartitions.
SQL> ALTER SESSION ENABLE PARALLEL DDL ;
Session altered.
SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8;
alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8
ERROR at line 1:
ORA-00933: SQL command not properly ended
SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging;
alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging
ERROR at line 1:
ORA-00933: SQL command not properly ended
SQL> alter table FDW.GL_JE_LINES_BASE_1 parallel 8;
Table altered.
SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK;
alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK
ERROR at line 1:
ORA-01652: unable to extend temp segment by 128 in tablespace TS_FDW_DATA
Please advise on the best way to do this.
Regards
Jesus
When you try to create a PK, you automatically create an index. If you want to put this index in a different tablespace, you should use the 'using index ...' option when you create this primary key.
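A sketch of what that could look like here (the ORA-00933 above comes from placing PARALLEL directly after the constraint name; index attributes belong in the USING INDEX clause; the index tablespace name is hypothetical, and building the index still needs enough temporary sort space, which is what the ORA-01652 points at):
ALTER TABLE FDW.GL_JE_LINES_BASE_1
ENABLE CONSTRAINT GL_JE_LINES_BASE_1_PK
USING INDEX TABLESPACE TS_FDW_INDEX PARALLEL 8 NOLOGGING;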
-
How can we create a table with more than 64 fields in the default DB?
Dear sirs,
I am taking part in the process of migrating a J2ee application from JBoss to SAP Server. I have imported the ejb project.
I have an entity bean with 79 CMP fields. I have created the bean and created the table for the same. But when I try to build the dictionary, I am getting an error message as given below:
"Dictionary Generation: DB2:checkNumberOfColumns (primary key of table IMP_MANDANT): number of columns (79) greater than allowed maximum (64) IMP_MANDANT.dtdbtable MyAtlasDictionary/src/packages"
Does it mean that we cannot create tables with more than 64 fields?
How can I create tables with more than 64 fields?
Kindly help,
Thankyou,
Sudheesh...
Hi,
I found a link on the help site which says the limit is 1024 (1023 without the key).
http://help.sap.com/saphelp_nw04s/helpdata/en/f6/069940ccd42a54e10000000a1550b0/content.htm
Not sure about any limit of 64 columns.
Regards,
S.Divakar -
How to read an internal table with more than one (2 or 3) key field(s).
How do I read an internal table with more than one (2 or 3) key fields in ECC 6.0?
hi ,
check this..
report.
tables: marc,mard.
data: begin of itab occurs 0,
matnr like marc-matnr,
werks like marc-werks,
pstat like marc-pstat,
end of itab.
data: begin of itab1 occurs 0,
matnr like mard-matnr,
werks like mard-werks,
lgort like mard-lgort,
end of itab1.
parameters:p_matnr like marc-matnr.
select matnr
werks
pstat
from marc
into table itab
where matnr = p_matnr.
sort itab by matnr werks.
select matnr
werks
lgort
from mard
into table itab1
for all entries in itab
where matnr = itab-matnr
and werks = itab-werks.
sort itab1 by matnr werks.
loop at itab.
read table itab1 with key matnr = itab-matnr
werks = itab-werks.
endloop.
regards,
venkat. -
General Scenario- Adding columns into a table with more than 100 million rows
I was asked/given a scenario: what issues do you encounter when you try to add new columns to a table with more than 200 million rows? How do you overcome them?
Thanks in advance.
svk
For such a large table, it is better to add the new column to the end of the table to avoid any performance impact, as RSingh suggested.
Also avoid using a default on the newly added column, or SQL Server will have to fill 200 million fields with this default value. If you need one, add an empty column and update it in small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column.
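A rough sketch of that batched approach (hypothetical table and column names):
-- Adding a NULLable column with no default is a quick metadata-only change.
ALTER TABLE dbo.BigTable ADD NewCol int NULL;
GO
-- Back-fill in small batches to keep locking and log growth manageable.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET NewCol = 0
    WHERE NewCol IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;
GO
-- Only then attach the default for future inserts.
ALTER TABLE dbo.BigTable ADD CONSTRAINT DF_BigTable_NewCol DEFAULT (0) FOR NewCol;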
Spatial index creation for a table with more than one geometry column?
I have a table with more than one geometry column.
I've added a record to the user_sdo_geom_metadata table for every geometry column in the table.
When I try to create spatial indexes on the geometry columns in the table, I get an error message:
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
ORA-06512: at line 1
What is the solution?
I've got errors in my user_sdo_geom_metadata.
The problem does not exist!
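For anyone hitting the same ORA-13203, a quick sanity check (a sketch with a hypothetical table name) is to confirm there is one metadata row per geometry column, with the table and column names stored in upper case:
SELECT table_name, column_name, srid
FROM user_sdo_geom_metadata
WHERE table_name = 'MY_SPATIAL_TABLE';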