Oracle 8i datatypes in bytes
Hi,
If you're able to help, I'm interested in finding out the size (in BYTES) of the following Oracle 8i datatypes:
BLOB
DATE
NUMBER
NUMBER(10,0)
FLOAT
LONG
VARCHAR2
For example, for CHAR(size) the number of bytes equals size.
Thanking you in advance for your time.
Regards
SoR
Use the SQL function VSIZE to see the actual storage (in bytes) of any column value. As a rough guide: DATE is always 7 bytes; NUMBER (including NUMBER(10,0), and FLOAT, which is a NUMBER subtype) is variable-length, from 1 to 22 bytes depending on the value stored; VARCHAR2 and LONG take the actual length of the data; a BLOB column stores a locator in the row and keeps the data itself out of line.
SELECT ename, VSIZE (ename) "BYTES"
FROM emp
WHERE deptno = 10;
ENAME BYTES
CLARK 5
KING 4
MILLER 6
Similar Messages
-
JDBC Thin Client and Oracle Long Datatype
I am using WebSphere 4.0.2, JDBC 2.0 (thin driver) and Oracle 9i.
I have a procedure which takes Oracle Long Datatype as its parameter.
I use following code to execute procedure.
String dataforsql="AAA000000003 123123 07/01/200301/01/2003";
byte[] bytes = dataforsql.getBytes();
InputStream is = new ByteArrayInputStream(bytes);
cstmt=conn.prepareCall("call nscw.CPPF_SAVEPDCRAWTABLE2(?,?,?)");
cstmt.setAsciiStream (1, is,bytes.length);
The above code works perfectly for data up to 4000 bytes. Once the data crosses the 4000-byte mark,
I get a procedure error:
ORA-01460: unimplemented or unreasonable conversion requested

cstmt.setAsciiStream (1, is, bytes.length);

Oracle's support for CLOB (and BLOB) columns using set{Ascii,Binary}Stream() generally s*cks. You'll have to read Oracle's own JDBC manual (you can read it online at http://technet.oracle.com) for whatever sequence they recommend.
E.g. for inserting and updating CLOBs, you're supposed to use an Oracle-specific function (EMPTY_CLOB()) as the value in the INSERT/UPDATE statement, then do a SELECT, getClob(), and use the Clob APIs to update the actual column value. At least officially. Or you have to use some Oracle-specific APIs in oracle.sql.Connection and oracle.sql.CLOB. -
Retrieve xml data from a relational table(oracle) with datatype as xmltyp
Hello Avijit, any resolution for this issue?
hi ... I am trying to retrieve XML data from a relational table with the datatype XMLType. The Source Qualifier retrieves rows, but the XML parser gives a transformation error: the transformation "retrieve xml data from a relational table(oracle) with datatype as xmltyp" returned a row error status on receiving an input row on the group of the same name.
ERROR: An XML document was truncated and thus not processed.
Input row from SQ_XMLTYPE_TEST: Rowdata: ( RowType=0(insert) Src Rowid=5 Targ Rowid=5 DOCUMENT (DataInput:Char.64000:): "<?xml version='1.0' encoding='UTF-8'?><main><DATA_RECORD> <OFFER_ID>434345</OFFER_ID> <ADDR>sec -2 salt lake</ADDR> <CITY>kolkata</CITY> (DISPLAY TRUNCATED)(TRUNCATED)" )
thanks in advance
Avijit
-
CMP Bean's Field Mapping with oracle unicode Datatypes
Hi,
I have a CMP bean which maps to an RDBMS table, and the table has some Unicode datatypes such as NVARCHAR and NCHAR.
Now I was wondering how the OC4J / Oracle EJB container handles queries with Unicode datatypes.
What do I have to do in order to properly develop and deploy a CMP bean whose fields map onto the Unicode columns in the database?
Regards
Atif

Based on the sun-cmp-mapping file descriptor
<schema>Rol</schema>
It is expected a file called Rol.schema is packaged with the ejb.jar. Did you perform capture-schema after you created your table? -
Oracle BLOB Writes the Bytes Twice to Output Stream
Hi,
I have a very strange problem when working with oracle.sql.BLOB; I cannot figure out what it's causing my BLOB stream output to double the amount of data inserted into the Oracle database. I have a table that contains two BLOB objects(image files) and the goal is to insert two images into each row by BLOB stream.
For example, if the image_bin size is 800k and image_thumbnail size is 100k, this code actually writes 1600k (double) and 200k (double) the amount of bytes to each BLOB column, respectively. The print method in insertBlob() indicates a correct number of bytes being written to the output stream (800k and 100k).
I know for the fact the retrieval method (not mentioned here) doesn't duplicate the bytes when it's read because I have written another test program that does not utilize oracle.sql.BLOB but instead uses PreparedStatement's setBinaryStream(index, InputStream, size_of_file) and it accurately writes the exact image size (no double sizing) to the database -- but not with BLOB. Here's a snippet of my code, note that the actual writing occurs in insertBlob():
private void insertBlob(java.sql.Blob lobImage, String imgName)
throws SQLException, IOException {
File imgFile = null;
FileInputStream imgOnDisk = null;
OutputStream imgToDB = null;
int bufferSize = 0;
oracle.sql.BLOB blobImage = (oracle.sql.BLOB) lobImage;
try {
int bytesRead = 0;
long bytesWritten = 0L;
byte[] byteBuffer = null;
bufferSize = blobImage.getBufferSize();
byteBuffer = new byte[bufferSize];
imgFile = new File(imgName);
// Stream to read the file from the local disk
imgOnDisk = new FileInputStream(imgFile);
// Stream to write to the Oracle database
imgToDB = blobImage.setBinaryStream(imgFile.length());
// Read from the disk file and write to the database
while ((bytesRead = imgOnDisk.read(byteBuffer)) != -1 ) {
imgToDB.write(byteBuffer, 0, bytesRead);
bytesWritten += bytesRead;
} // end of while
System.out.print("Done. " + bytesWritten + "-bytes inserted, buffer size: " +
bufferSize + "-bytes, chunk size: " +
blobImage.getChunkSize() + ".\n");
} catch (SQLException sqlEx) {
System.out.println("SQLException caught: JDBCOracleLOBBinaryStream.processBlob()");
connRollback();
throw sqlEx;
} catch (IOException ioe) {
System.out.println("IOException caught: JDBCOracleLOBBinaryStream.processBlob()");
throw ioe;
} finally {
    try {
        if (imgOnDisk != null) {
            imgOnDisk.close();
        }
        if (imgToDB != null) {
            imgToDB.close();
        }
    } catch (IOException ioeClosing) {
        System.out.println("IOException caught: JDBCOracleLOBBinaryStream.processBlob() " +
                           "on closing stream.");
        ioeClosing.printStackTrace();
    }
} // end of finally
}
public void insertImageIntoOracleDB() throws SQLException, IOException {
PreparedStatement pstmt = null;
Statement stmt = null;
ResultSet rset = null;
try {
this.getConnection(_driver, host, port, database, user, _pass);
pstmt = conn.prepareStatement("INSERT INTO " +
" gallery_v (picture_id, picture_title, image_bin, image_thumbnail) " +
" VALUES (?, ?, EMPTY_BLOB(), EMPTY_BLOB())");
pstmt.setInt(1, picID);
pstmt.setString(2, picTitle);
pstmt.executeUpdate();
stmt = conn.createStatement();
rset = stmt.executeQuery("SELECT image_bin, image_thumbnail FROM gallery_v " +
" WHERE picture_id = " + picID + " FOR UPDATE");
int rsetCount = 0;
oracle.sql.BLOB imgBlob = null;
oracle.sql.BLOB imgThumbBlob = null;
while (rset.next()) {
    imgBlob = ((OracleResultSet) rset).getBLOB("image_bin");
    System.out.print("Inserting " + img + "... ");
    insertBlob(imgBlob, img);
    imgThumbBlob = ((OracleResultSet) rset).getBLOB("image_thumbnail");
    System.out.print("Inserting " + imgThumb + "... ");
    insertBlob(imgThumbBlob, imgThumb);
    rsetCount++;
}
System.out.println("\nNumber of rows updated: " + rsetCount);
conn.commit();
} catch (SQLException sqlEx) {
System.out.println("SQLException caught: JDBCOracleLOBBinaryStream.insertImageIntoOracleDB()");
connRollback();
throw sqlEx;
} catch (IOException ioe) {
throw ioe;
} finally {
    try {
        if (rset != null) {
            rset.close();
        }
        if (pstmt != null) {
            pstmt.close();
        }
        if (stmt != null) {
            stmt.close();
        }
        closeConnection();
    } catch (SQLException closingSqlEx) {
        System.out.println("SQLException caught: JDBCOracleLOBBinaryStream.insertImageIntoOracleDB() " +
                           "on closing ResultSet or PreparedStatement.");
        closingSqlEx.printStackTrace();
    }
} // end of finally
}

I made a silly mistake; the BLOB#setBinaryStream() method takes the position in the BLOB at which writing starts. So the following code:
imgToDB = blobImage.setBinaryStream(imgFile.length());
starts writing at an offset equal to the file's length. That is where the doubling comes from: the LOB ends up holding the offset (zero-padded) plus the data, twice the number of bytes read from the binary file (an image here). The correct line should be:
imgToDB = blobImage.setBinaryStream(0L);
ARGH!!! Now everything works as expected. I've got to read the APIs more carefully; I was expecting the same parameter semantics as PreparedStatement#setBinaryStream(), which takes the length of the stream as one of its parameters. -
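The arithmetic is easier to see with a plain file standing in for the LOB (a rough analogy, not Oracle's actual storage): writing N bytes starting at offset N leaves 2N bytes behind, while writing at offset 0 leaves exactly N.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OffsetWriteDemo {
    /** Write dataLen bytes starting at 'offset' in a fresh file; return the resulting length. */
    static long lengthAfterWriteAt(long offset, int dataLen) {
        try {
            Path p = Files.createTempFile("blobdemo", ".bin");
            try (RandomAccessFile raf = new RandomAccessFile(p.toFile(), "rw")) {
                raf.seek(offset);             // like setBinaryStream(imgFile.length())
                raf.write(new byte[dataLen]); // everything before 'offset' is zero-filled
                return raf.length();
            } finally {
                Files.delete(p);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Starting at the end of an 800-byte payload doubles the stored size...
        System.out.println(lengthAfterWriteAt(800, 800)); // 1600
        // ...while starting at the beginning stores exactly the payload.
        System.out.println(lengthAfterWriteAt(0, 800));   // 800
    }
}
```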
Oracle Spatial Datatypes and Dataguard 11g
Hi
Does anyone know if the following datatypes are supported in dataguard 11g for logical standby or physical standby?
SDO_Geometry
SDO_georaster
thanks
sunir

A correction. Logical standby (SQL Apply) doesn't currently natively support spatial datatypes, but SDO_GEOMETRY can be supported through EDS by creating a logging table, two triggers and some manual work. Please check it out from My Oracle Support:
Note 559353.1 - Extended Datatype Support (EDS) for SQL Apply. "Extended Datatype Support (EDS) is the ability for SQL Apply to extend support for additional datatypes that are not natively supported yet by SQL Apply. EDS relies on a new capability in Oracle Database 10g Release 2 Patch Set 3 (10.2.0.4) and Oracle Database 11g Release 1 Patch Set 1 (11.1.0.7) that allows triggers to fire on the logical standby database. "
Note 565074.1 - EDS for SQL Apply Example - Oracle Spatial Type SDO_GEOMETRY
thanks
Jeffrey -
To increase the oracle VARCHAR datatype column length
Hi,
I need to increase a VARCHAR2 column's length from VARCHAR2(16) to VARCHAR2(100) in the Production environment.
Kindly let us know the impact on Production and whether downtime is required for the activity.
Please find the details as below,
DB Version: Oracle 11g R2 (11.2.0.3)
OS : AIX 6.1
If you need further information, kindly let me know.
Thanks & Regards,
Raj

Hi,
It would be better to move your question to the General category.
Thanks -
Oracle generic datatype that can be cast as other datatype ?
I have a function that currently returns a DATE datatype but want to provide functionality so that depending on the value of a parameter it will return either a DATE or a VARCHAR2. For example
FUNCTION AddDate(p_dt_DateIn IN DATE, p_vch_DType IN VARCHAR2)
RETURN ????? IS
BEGIN
    IF UPPER(p_vch_DType) = 'C' THEN
        RETURN TO_CHAR(p_dt_DateIn + 2, 'DD Mon YYYY');
    ELSE
        RETURN p_dt_DateIn + 2;
    END IF;
END;
The calling procedure would then cast the returned type.
Does a generic datatype like this exist in Oracle? I've been looking at overloading using packages, but it basically boils down to using two separate functions for each returned datatype, which I really don't want to do.

Normally, we would implement this as overloaded functions in a package:
CREATE PACKAGE my_package AS
FUNCTION addDate (p_dt_DateIn IN DATE) RETURN DATE;
FUNCTION addDate (p_vc_DateIn IN VARCHAR2) RETURN VARCHAR2;
END;

the point being that in the package body the VARCHAR2 version can call the DATE version, so you actually only code the process once.
The advantage of this approach is that the calling program doesn't have to cast the result, because Oracle uses the right function for the parameter data type.
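The same delegation pattern works in any language with overloading; here is a minimal Java sketch (the addDays name and the date format are made up for illustration, not from the thread) where the String overload calls the Date overload, so the logic lives in one place:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class AddDate {
    /** Date in, Date out: the single place the real work happens. */
    public static Date addDays(Date d, int days) {
        Calendar c = Calendar.getInstance();
        c.setTime(d);
        c.add(Calendar.DAY_OF_MONTH, days);
        return c.getTime();
    }

    /** String overload: parse, delegate to the Date version, format back. */
    public static String addDays(String d, int days) {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd");
        try {
            return f.format(addDays(f.parse(d), days));
        } catch (ParseException e) {
            throw new IllegalArgumentException(d, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(addDays("2004-03-12", 2)); // 2004-03-14
    }
}
```

As with the PL/SQL package, the caller never casts anything: the compiler picks the overload that matches the argument type.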
Cheers, APC -
Oracle Long dataType = a long text String?
Hi all.
I'm in the middle of writing a servlet that uses JDBC to connect to an Oracle DB. I need a field in Oracle to store a varchar2-type String without the 4000-character limitation. I picked the LONG datatype as it 'stores CHAR data of variable size up to 2 GB', as the documentation says.
The problem is, I don't know how to put it in the prepareStatement. Would it be a prep.setString or something else? I would be grateful if anyone can tell me what I can do and whether if setting a Long is correct.
Connection con = DriverManager.getConnection(url, loginPwd, loginPwd);
PreparedStatement prep = con.prepareStatement("INSERT INTO chanEvents VALUES (?,?,?,?)");
prep.setInt(1, nextVal);
prep.setString(2, eventName);
prep.setString(3, dateString);
//?? the following is where Oracle column is set to LONG.
//Should I use setString or something else?
// let say longString is a very long String
prep.setString(4, longString);
if (prep.executeUpdate() != 1) {
    out.println("Bad update");
    con.close();
    prep.close();
    return false;
}
con.close();
prep.close();

http://technet.oracle.com/sample_code/tech/java/sqlj_jdbc/htdocs/templates.htm#Streams
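The linked templates show the streaming approach: instead of setString, hand the driver a Reader (prep.setCharacterStream(4, reader, length) in JDBC 2.0), which avoids the 4000-character bind limit. A self-contained sketch of just the stream side, with a hypothetical drain() standing in for what the driver does with the Reader:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class LongStream {
    /** Drain a Reader the way a driver would, returning how many chars flowed through. */
    static int drain(Reader r) {
        char[] buf = new char[1024];
        int total = 0, n;
        try {
            while ((n = r.read(buf)) != -1) total += n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) sb.append("0123456789"); // 100,000 chars, far past 4000
        // With a LONG column the Reader would go to
        // prep.setCharacterStream(4, new StringReader(longString), longString.length());
        System.out.println(drain(new StringReader(sb.toString())));
    }
}
```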
Jamie -
ORACLE TIMESTAMP DataType support in Toplink ?
Currently we have an application that needs to create timestamps with precision up to thousandths of a second.
Do you know of any other customer that have similar requirements and how they solve this problem ?
We tried changing the SQL DDL to change the datatype from DATE to TIMESTAMP(3), which can support a timestamp of 1/1000-second precision.
We find that if our Oracle column is defined as DATE, the last 3 digits are dropped, causing duplicate key exceptions for records that
get inserted within 1 second of each other, because the timestamp is part of the primary key.
ts '2004-03-12 17:13:27.792'
So we change the ORACLE column from DATE to TIMESTAMP(3)
What we find is that Toplink produce this exception
Exception [TOPLINK-3001] (OracleAS TopLink - 10g (9.0.4) (Build 031126)): oracle.toplink.exceptions.ConversionException
Exception Description: The object [oracle.sql.TIMESTAMP@321b5e39], of class [class oracle.sql.TIMESTAMP], could not be converted to [class java.util.Date].
at oracle.toplink.exceptions.ConversionException.couldNotBeConverted(ConversionException.java:35)
at oracle.toplink.internal.helper.ConversionManager.convertObjectToUtilDate(ConversionManager.java:679)
at oracle.toplink.internal.helper.ConversionManager.convertObject(ConversionManager.java:97)
at oracle.toplink.internal.databaseaccess.DatabasePlatform.convertObject(DatabasePlatform.java:55
Then we tried changing our Java code, modifying the instance variable type from java.util.Date to java.sql.Timestamp,
and we get the following error:
Exception [TOPLINK-3001] (OracleAS TopLink - 10g (9.0.4) (Build 031126)): oracle.toplink.exceptions.ConversionException
Exception Description: The object [oracle.sql.TIMESTAMP@731de027], of class [class oracle.sql.TIMESTAMP], could not be converted to [class java.sql.Timestamp].
at org.apache.xerces.impl.XMLNamespaceBinder.endElement(XMLNamespaceBinder.java:650)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1011)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1564)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:335)
We cannot seem to find in the TopLink Mapping Workbench how to specify a timestamp.
========================================================================================================
The TIMESTAMP Datatype
The new TIMESTAMP datatype is almost identical to DATE and differs in only one way:
TIMESTAMPs can represent fractional seconds.
The granularity of a TIMESTAMP value can be as little as a billionth of a second, whereas
DATE variables can only resolve time to the second.
When you declare a TIMESTAMP variable, you can optionally specify the precision that you wish to use for fractional seconds. The default precision is to the microsecond (six decimal digits); the maximum precision is to the billionth of a second (nine decimal digits).
===========================================================================================================
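On the Java side, java.sql.Timestamp also carries fractional seconds (down to nanoseconds), so nothing is lost before the value reaches the database; a quick self-contained check:

```java
import java.sql.Timestamp;

public class FractionalSeconds {
    public static void main(String[] args) {
        // Parse one of the thread's literals: the .792 is kept as nanoseconds.
        Timestamp ts = Timestamp.valueOf("2004-03-12 17:13:27.792");
        System.out.println(ts.getNanos()); // 792000000
    }
}
```

So mapping the column as TIMESTAMP(3) rather than DATE lets the distinguishing digits survive the round trip.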
-----Original Message-----
From: Cheung, Ka-Kit
Sent: Friday, March 12, 2004 6:20 PM
To: Burr, Tim; Julian, Robert; Matthiesen, Sean
Cc: Tsounis, George; Del Rosso, Peter; Cham, Mei
Subject: Problem identified : AddressDetail duplicate key problem
If we look at the exact of the insert statement.
We see that the last address detail insert have key of
Address ID = '5a052407-dac6-42ad-bbbf-29edc94488c1', and
TransactionStartDate = {ts '2004-03-12 17:13:27.792'},
While in the database, we look like we have an entry of
Address ID = '5a052407-dac6-42ad-bbbf-29edc94488c1', and
TransactionStartDate = {ts '2004-03-12 17:13:27.229'},
If my memory serves me right,
{ts '2004-03-12 17:13:27.792'} is different from {ts '2004-03-12 17:13:27.229'}
because these are Java timestamps with sub-second precision, so 229 differs from 792.
However, when these timestamps are saved to Oracle, I believe (have to check with Mei) that Oracle only keeps
'2004-03-12 17:13:27' and discards the 229 or 792, because whole seconds are the maximum precision of the DATE column.
So the second insert carries the same '2004-03-12 17:13:27' after the 792 is stripped off, we already have a record with the same '2004-03-12 17:13:27' in the database, and
that causes the duplicate key exception.
That is why this happens only once in a while, when 2 rapid-fire inserts occur less than 1 second apart.
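KK's explanation can be reproduced without a database: model the DATE column as "whole seconds only" and the two distinct timestamps collapse to the same key (a sketch; the truncation rule here is the behavior described above, not Oracle code):

```java
import java.sql.Timestamp;

public class DateTruncation {
    /** Model an Oracle DATE column: keep whole seconds, drop the fractional part. */
    static Timestamp toDatePrecision(Timestamp ts) {
        return new Timestamp((ts.getTime() / 1000) * 1000);
    }

    public static void main(String[] args) {
        Timestamp a = Timestamp.valueOf("2004-03-12 17:13:27.229");
        Timestamp b = Timestamp.valueOf("2004-03-12 17:13:27.792");
        // Distinct as Java timestamps...
        System.out.println(a.equals(b));                                   // false
        // ...identical once truncated to DATE precision: a duplicate key.
        System.out.println(toDatePrecision(a).equals(toDatePrecision(b))); // true
    }
}
```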
The solution actually is in the ESS code itself.
The current ESS code sends addDependentToClient multiple times, one for each dependent added on the screen.
The right way is to add all the dependents on the screen at once: have a coarse-grained method like addDependentsToClient that takes a collection or array of dependents as its input parameter.
This way we do not cause the participant to create history of themselves multiple times within a very short period of time. It saves disk space and conforms to a single UOW per submit, which is what I proposed.
To solve this problem from the root cause, enhance the method to save multiple dependents in one shot rather than in a loop of multiple calls.
KK
and
INSERT INTO PTTCBSI.ADDRESS_DETAIL
(LINE_3_AD, ADR_TRAN_UNTIL_DT, MODIFY_DT, CITY_NM, POSTAL_CD, VER_ID, POSTAL_EXT_CD, LINE_2_AD, ADR_TRAN_START_DT, CREATE_DT, AUTHOR_ID, ADDRESS_ID, LINE_1_AD, COUNTY_NM, LINE_4_AD, COUNTRY_ID, STATE_ID)
VALUES ('Block 5, Apt. 6', {ts '9999-12-31 00:00:00.0'},
{ts '2004-03-12 17:13:26.385'},
'Oakwood', '61043', 1, '1234', 'Mailstop 820',
{ts '2004-03-12 17:13:26.385'},
{ts '2004-03-12 16:50:12.0'}, 'dataLoad',
'5a052407-dac6-42ad-bbbf-29edc94488c1',
'IBM Corp.', NULL, '140 Main Street', 'US', 'NJ')
UnitOfWork(1238222885)--Connection(2102560837)--
UPDATE PTTCBSI.ADDRESS_DETAIL
SET ADR_TRAN_UNTIL_DT = {ts '2004-03-12 17:13:26.385'}, VER_ID = 2 WHERE
(((ADDRESS_ID = '5a052407-dac6-42ad-bbbf-29edc94488c1') AND
(ADR_TRAN_START_DT = {ts '2004-03-12 16:52:29.0'})) AND (VER_ID = 1))
UPDATE PTTCBSI.ADDRESS_DETAIL SET
ADR_TRAN_UNTIL_DT = {ts '2004-03-12 17:13:27.229'}, VER_ID = 2
WHERE (((ADDRESS_ID = '5a052407-dac6-42ad-bbbf-29edc94488c1') AND (ADR_TRAN_START_DT = {ts '2004-03-12 17:13:26.0'})) AND (VER_ID = 1))
UnitOfWork(102762535)--Connection(2102560837)--
INSERT INTO PTTCBSI.ADDRESS_DETAIL
(LINE_3_AD, ADR_TRAN_UNTIL_DT, MODIFY_DT, CITY_NM, POSTAL_CD, VER_ID, POSTAL_EXT_CD, LINE_2_AD, ADR_TRAN_START_DT, CREATE_DT, AUTHOR_ID, ADDRESS_ID, LINE_1_AD, COUNTY_NM, LINE_4_AD, COUNTRY_ID, STATE_ID) VALUES
('Block 5, Apt. 6', {ts '9999-12-31 00:00:00.0'}, {ts '2004-03-12 17:13:27.229'}, 'Oakwood', '61043', 1, '1234', 'Mailstop 820',
{ts '2004-03-12 17:13:27.229'},
{ts '2004-03-12 16:50:12.0'}, 'dataLoad',
'5a052407-dac6-42ad-bbbf-29edc94488c1',
'IBM Corp.', NULL, '140 Main Street', 'US', 'NJ')
INSERT INTO PTTCBSI.ADDRESS_DETAIL
(LINE_3_AD,
ADR_TRAN_UNTIL_DT,
MODIFY_DT,
CITY_NM, POSTAL_CD, VER_ID, POSTAL_EXT_CD, LINE_2_AD,
ADR_TRAN_START_DT,
CREATE_DT,
AUTHOR_ID,
ADDRESS_ID,
LINE_1_AD, COUNTY_NM, LINE_4_AD, COUNTRY_ID, STATE_ID) VALUES
('Block 5, Apt. 6', {ts '9999-12-31 00:00:00.0'},
{ts '2004-03-12 17:13:27.792'},
'Oakwood', '61043', 1, '1234',
'Mailstop 820',
{ts '2004-03-12 17:13:27.792'},
{ts '2004-03-12 16:50:12.0'},
'dataLoad',
'5a052407-dac6-42ad-bbbf-29edc94488c1',
'IBM Corp.', NULL, '140 Main Street', 'US', 'NJ')
ClientSession(790235177)--Connection(2102560837)--rollback transaction
ORA-00001: unique constraint (PTTCBSI.PK_ADDRESS_DETAIL) violated

KK,
We are back-porting the support for oracle.sql.TIMESTAMP to 9.0.4 in an upcoming patch-set. It is possible to enhance TopLink using a customer conversion manager or database platform to add this support if required in the short term.
Doug -
Oracle Spatial datatype problems - HELP!!
Can anyone help with this problem. I am using Pro*C/C++ to create a DLL which I am then
linking to another C based application. I am attempting to use Pro*C/C++ to read/write
Oracle Spatial Objects. I have no problem writing Pro*C/C++ code or accessing the
functions in the DLL. My problem is returning values from the DLL functions to my
application. So far the only values I seem able to return are character strings.
Below are two versions of the same function. One that returns a char string and a second
that returns nothing. I would like the second function to work and to return an integer
value for "g_type". Any ideas??? By the way "g_type" is the return value in question. I am
using Oracle Pro*C/C++ 8.1.5 and the V8 Oracle Call Interface for the datatype conversion.
Header files for the Object types have been generated by the Object Type Translator.
THIS FUNCTION WORKS AS EXPECTED. THE PARAMETER G_TYPE RETURNS THE ANTICIPATED STRING VALUE
int Read_Geometry(
    int gid,
    char *g_type,
    char *errMsg)
{
    char err_msg[128];
    sword retcode;
    int tester;
    size_t buf_len, msg_len;
    SDO_GEOMETRY *geom = (SDO_GEOMETRY *)0;
    SDO_GEOMETRY_ind *geom_ind = (SDO_GEOMETRY_ind *)0;

    exec sql at db_name allocate :geom:geom_ind;
    exec sql at db_name select geometry into :geom:geom_ind from test81 where gid=:gid;
    if (SQLCODE != 0)
    {
        exec sql whenever sqlerror continue;
        buf_len = sizeof (err_msg);
        sqlglm(err_msg, &buf_len, &msg_len);
        strcpy(errMsg, err_msg);
        return ERROR;
    }
    else
    {
        retcode = OCINumberToInt(err, &geom->sdo_gtype, sizeof(g_type), OCI_NUMBER_SIGNED, &tester);
        if (retcode == OCI_ERROR)
        {
            sprintf(errMsg, "Convert failed");
            return ERROR;
        }
        else
        {
            sprintf(g_type, "%d", tester);
            sprintf(errMsg, "gType=%s", g_type);
            return SUCCESS;
        }
    }
    return SUCCESS;
}
THIS FUNCTION DOES NOT WORK. THE FUNCTION EXECUTES BUT THE PARAMETER G_TYPE RETURNS NOTHING?????????
int Read_Geometry(
    int gid,
    int g_type,
    char *errMsg)
{
    char err_msg[128];
    sword retcode;
    size_t buf_len, msg_len;
    SDO_GEOMETRY *geom = (SDO_GEOMETRY *)0;
    SDO_GEOMETRY_ind *geom_ind = (SDO_GEOMETRY_ind *)0;

    exec sql at db_name allocate :geom:geom_ind;
    exec sql at db_name select geometry into :geom:geom_ind from test81 where gid=:gid;
    if (SQLCODE != 0)
    {
        exec sql whenever sqlerror continue;
        buf_len = sizeof (err_msg);
        sqlglm(err_msg, &buf_len, &msg_len);
        strcpy(errMsg, err_msg);
        return ERROR;
    }
    else
    {
        retcode = OCINumberToInt(err, &geom->sdo_gtype, sizeof(g_type), OCI_NUMBER_SIGNED, &g_type);
        if (retcode == OCI_ERROR)
        {
            sprintf(errMsg, "Convert failed");
            return ERROR;
        }
        else
        {
            return SUCCESS;
        }
    }
    return SUCCESS;
}
Header file is as follows
struct SDO_GEOMETRY
{
    OCINumber sdo_gtype;
    OCINumber sdo_srid;
    struct SDO_POINT_TYPE sdo_point;
    SDO_ELEM_INFO_ARRAY *sdo_elem_info;
    SDO_ORDINATE_ARRAY *sdo_ordinates;
};
typedef struct SDO_GEOMETRY SDO_GEOMETRY;

struct SDO_GEOMETRY_ind
{
    OCIInd _atomic;
    OCIInd sdo_gtype;
    OCIInd sdo_srid;
    struct SDO_POINT_TYPE_ind sdo_point;
    OCIInd sdo_elem_info;
    OCIInd sdo_ordinates;
};
typedef struct SDO_GEOMETRY_ind SDO_GEOMETRY_ind;

Hi,
From a quick look I can't see anything wrong. You might want to compare this with the example in $ORACLE_HOME/md/demo/examples.
Also, note that in 9i there will probably be an OCCI (C++ OCI) available, should this be useful to you. -
Keep Oracle DATE datatype but insert via PreparedStatement with time
I know there are alot of messages concerning java.sql.Date and that it doesn't
hold the time.
Can you give me an example of how to insert Date and Time into an Oracle
"DATE" field using preparedStatements?
dailysun

java.util.Date d = new java.util.Date();
java.sql.Timestamp ts = new java.sql.Timestamp(d.getTime());
prepStatement.setTimestamp(1, ts); -
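That conversion preserves the full time of day, since java.sql.Timestamp wraps the same millisecond value as the java.util.Date; a self-contained check (no database needed):

```java
import java.sql.Timestamp;
import java.util.Date;

public class DateToTimestamp {
    /** Wrap a java.util.Date as a java.sql.Timestamp without losing the time. */
    static Timestamp toTimestamp(Date d) {
        return new Timestamp(d.getTime());
    }

    public static void main(String[] args) {
        Date now = new Date();
        Timestamp ts = toTimestamp(now);
        // Same instant: the millisecond values match exactly.
        System.out.println(ts.getTime() == now.getTime()); // true
        // ts is what you then pass to prepStatement.setTimestamp(1, ts)
    }
}
```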
All,
I have a question.
I have a field in a table which is a number data type.
This field is of length '9'.
I have a java front end where if I give the last 4 digits of this field I should be able to pull record that matches.
But when I tested for '0000', no records were pulled. Also, when I tested for '0111', it did not keep the leading '0'; it searched for '111' and displayed 2 records that ended with 0111 and 1111.
My problem basically is that the leading zeros are dropped.
How do I solve this problem? Please help.

You just need to make sure that the variable type passed from Java is appropriate for the way you use it.
Passing a string variable, as Hoek suggested, it would be something like:
SQL> var strvar varchar2(4);
SQL> exec :strvar := '0000'
PL/SQL procedure successfully completed.
SQL> WITH sample_data AS (
2 SELECT 20110102345678 id, 324560000 pinnumber, 'AAA' firstname, 'ZZZ' lastname
3 FROM dual UNION ALL
4 SELECT 20110102345679, 324560111, 'BBB', 'ZZZ' FROM dual UNION ALL
5 SELECT 20110102345680, 324561111, 'CCC', 'ZZZ' FROM dual UNION ALL
6 SELECT 20110102345681, 324561234, 'DDD', 'ZZZ' FROM dual)
7 SELECT * FROM sample_data
8 WHERE SUBSTR(TO_CHAR(pinnumber, 'fm999999999'), -4) = :strvar
ID PINNUMBER FIR LAS
20110102345678 324560000 AAA ZZZ
SQL> exec :strvar := '0111'
PL/SQL procedure successfully completed.
SQL> /
ID PINNUMBER FIR LAS
20110102345679 324560111 BBB ZZZ
SQL> exec :strvar := '1111'
PL/SQL procedure successfully completed.
SQL> /
ID PINNUMBER FIR LAS
20110102345680 324561111 CCC ZZZ

Or, as a numeric variable, as Frank suggested, it would be like:
SQL> var numvar number;
SQL> exec :numvar := 0;
PL/SQL procedure successfully completed.
SQL> WITH sample_data AS (
2 SELECT 20110102345678 id, 324560000 pinnumber, 'AAA' firstname, 'ZZZ' lastname
3 FROM dual UNION ALL
4 SELECT 20110102345679, 324560111, 'BBB', 'ZZZ' FROM dual UNION ALL
5 SELECT 20110102345680, 324561111, 'CCC', 'ZZZ' FROM dual UNION ALL
6 SELECT 20110102345681, 324561234, 'DDD', 'ZZZ' FROM dual)
7 SELECT * FROM sample_data
8 WHERE MOD(pinnumber, 10000) = :numvar;
ID PINNUMBER FIR LAS
20110102345678 324560000 AAA ZZZ
SQL> exec :numvar := 0111
PL/SQL procedure successfully completed.
SQL> /
ID PINNUMBER FIR LAS
20110102345679 324560111 BBB ZZZ
SQL> exec :numvar := 1111
PL/SQL procedure successfully completed.
SQL> /
ID PINNUMBER FIR LAS
20110102345680 324561111 CCC ZZZ

John
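The same fix applies on the Java front end: compare zero-padded strings rather than numbers, so '0000' and '0111' keep their leading zeros (a sketch assuming 9-digit PINs, as in the thread's sample data):

```java
public class PinMatch {
    /** Zero-pad a 9-digit PIN and test its last four digits, preserving leading zeros. */
    static boolean lastFourMatch(int pin, String lastFour) {
        String padded = String.format("%09d", pin); // e.g. 324560000 -> "324560000"
        return padded.endsWith(lastFour);
    }

    public static void main(String[] args) {
        System.out.println(lastFourMatch(324560000, "0000")); // true
        System.out.println(lastFourMatch(324560111, "0111")); // true
        System.out.println(lastFourMatch(324561111, "0111")); // false: ends in 1111
    }
}
```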
Implementation of double byte character in oracle 10.2.0.5 database
Hi experts,
There is an Oracle 10.2.0.5 Standard Edition database running on the Windows 2003 platform. The application team needs to add a double-byte column (Chinese characters) to an existing table. The database character set is WE8ISO8859P1 and the national character set is AL16UTF16. After going through the Oracle documentation, our DBA team found that it is possible to insert Chinese characters into the table with the current character sets.
The client side has the following details:
APIs used to write data--SQL Developer
APIs used to read data--SQL Developer
Client OS--Windows 2003
The value of NLS_LANG environment variable in client environment is American and the database character set is WE8ISO8859P1 and National Character set is AL16UTF16.
We have got a problem from the development team saying that they are not able to insert Chinese characters into the NCHAR or NVARCHAR2 columns of the table. The Chinese characters being inserted into the table are getting interpreted as *?*...
What could be the workaround for this ??
Thanks in advance...For SQL Developer, see my advices in Re: Oracle 10g - Chinese Charecter issue and Re: insert unicode data into nvarchar2 column in a non-unicode DB
-- Sergiusz -
Problem loading PostgreSQL Bytea data type to Oracle Raw data type
We are migrating our database from PostgreSQL to Oracle. First, we convert the BYTEA data type in PostgreSQL to Oracle RAW. The BYTEA data type is a variable-length byte array. How can we load BYTEA data into the Oracle RAW data type? Or do I have to convert to a different data type? thanks.
Peter

hi,
Instead of 'interval day to second' in method declaration use internal datatype 'DSINTERVAL_UNCONSTRAINED'.
There are more unconstrained types in oracle.
Bartek