LARGE DATATYPE???
I have to store at least 46,000 characters in one field. I think a CLOB would do it, so that is the datatype I set for the field by doing:
CREATE TABLE mytable
(myfield clob);
I have a file that I am loading using sqlldr.
When I call sqlldr, it still says that the field in the data file exceeds the maximum length.
What can I do to make the field accept an unlimited length?
I found a bulletin which said to do this:
Example:
load data
infile 'CLOB_TEST.dat' "str '<er>'"
into table CLOB_TEST
fields terminated by '<ec>'
COL1 CHAR(2147483647) -- this goes into the CLOB column; its maximum size is 2 GB
So I modified mine, but since I have more than one column,
it's
myfield1,
myfield2,
myfield3 CHAR(2147483647),
and I get a core dump.
HELP.
I'm fairly new to this, and I have read things and gotten confused. How can I make it work?
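For what it's worth, a control file shaped like the one below has been the usual fix: sqlldr defaults each CHAR field to 255 bytes, so every field that can be long needs an explicit length, not just the last one. This is a sketch, not a verified answer - the terminator strings '<er>' and '<ec>' come from the bulletin quoted above, and the 4000-byte lengths on the first two fields are an assumption to adjust to your data:

load data
infile 'mytable.dat' "str '<er>'"
into table mytable
fields terminated by '<ec>'
( myfield1 char(4000),
  myfield2 char(4000),
  myfield3 char(2147483647)   -- destined for the CLOB column
)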
Similar Messages
-
Var/adm/utmpx: value too large for defined datatype
Hi,
On a Solaris 10 machine I cannot use the last command to view login history etc. It reports something like "/var/adm/utmpx: value too large for defined datatype".
The size of /var/adm/utmpx is about 2GB.
I tried renaming the file to utmpx.0 and creating a new file using head utmpx.0 > utmpx, but after that the last command does not show any output. The new utmpx file does seem to be updating with new info, though, judging from its last-modified time.
Is there a standard procedure to recreate the utmpx file once it grows too large? I couldn't find much in the man pages.
Thanks in advance for any help.
The easiest way is to cat /dev/null to utmpx - this will clear out the file to 0 bytes but leave it intact.
from the /var/adm/ directory:
cat /dev/null > /var/adm/utmpx
Some docs suggest going to single user mode to do this, or stopping the utmp service daemon first, but I'm not positive this is necessary. Perhaps someone has input on that aspect. I've always just sent /dev/null to utmpx and wtmpx without a problem.
BTW - I believe "last" works with wtmpx, and "who" works with utmpx. -
Data of column datatype CLOB is moved to other columns of the same table
Hi all,
I have an issue with the tables having a CLOB datatype field.
When executing a simple query on a table with a column of type CLOB it returns error [POL-2403] value too large for column.
SQL> desc od_stock_nbcst_notes;
Name Null? Type
OD_STOCKID N NUMBER
NBC_SERVICETYPE N VARCHAR(40)
LANGUAGECODE N VARCHAR(8)
AU_USERIDINS Y NUMBER
INSERTDATE Y DATE
AU_USERIDUPD Y NUMBER
MODIFYDATE Y DATE
VERSION Y SMALLINT(4)
DBUSERINS Y VARCHAR(120)
DBUSERUPD Y VARCHAR(120)
TEXT Y CLOB(2000000000)
NBC_PROVIDERCODE N VARCHAR(40)
SQL> select * from od_stock_nbcst_notes;
[POL-2403] value too large for column
Digging deeper, some rows have data from the CLOB column moved into another column of the table.
select length(nbc_providercode) returns lengths greater than the column's declared size (varchar(40)).
substr(nbc_providercode,1,40) shows that a portion of the CLOB data has been written into the field.
SQL> select max(length(nbc_providercode)) from od_stock_nbcst_notes;
MAX(LENGTH(NBC_PROVIDERCODE))
162
Choosing one random record, this is the stored information.
SQL> select length(nbc_providerCode), text from od_stock_nbcst_notes where length(nbc_providerCode)=52;
LENGTH(NBC_PROVIDERCODE) | TEXT
-------------------------+-----------
52 | poucos me
SQL> select nbc_providerCode from od_stock_nbcst_notes where length(nbc_providerCode)=52;
[POL-2403] value too large for column
SQL> select substr(nbc_providercode,1,40) from od_stock_nbcst_notes where length(nbc_providercode)=52 ;
SUBSTR(NBC_PROVIDERCODE,1,40)
Aproveite e deixe o seu carro no parque
The content of the field is part of the content of the TEXT field (datatype CLOB, containing an XML document)!!!
The correct content of the field should be 'MTS' (retrieved from the Central DB).
The CLOB is being inserted into the Central DB, not into the Client ODB. Data is synchronized from CDB to ODB and the data is reaching the client in a wrong way.
The issue can be recreated every time in the same DB, but the "corrupted" records differ between users.
Any idea?
939569 wrote:
Hello,
I am using Oracle 11.2, I would like to use SQL to update one column based on values of other rows at the same table. Here are the details:
create table TB_test (myId number(4), crtTs date, updTs date);
insert into tb_test values (1, to_date('20110101', 'yyyymmdd'), null);
insert into tb_test values (1, to_date('20110201', 'yyyymmdd'), null);
insert into tb_test values (1, to_date('20110301', 'yyyymmdd'), null);
insert into tb_test values (2, to_date('20110901', 'yyyymmdd'), null);
insert into tb_test values (2, to_date('20110902', 'yyyymmdd'), null);
After running the SQL, I would like have the following result:
1, 20110101, 20110201
1, 20110201, 20110301
1, 20110301, null
2, 20110901, 20110902
2, 20110902, null
Thanks for your suggestion.
How do I ask a question on the forums?
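The reply only points at the forum FAQ, but for completeness, one way to fill updTs with the next crtTs within each myId group is a correlated update (a sketch, untested against the poster's data):

```sql
update tb_test t
set t.updTs = (select min(x.crtTs)
               from tb_test x
               where x.myId  = t.myId
                 and x.crtTs > t.crtTs);  -- earliest later date in the same group; NULL if none
```

On large tables, a MERGE driven by LEAD(crtTs) OVER (PARTITION BY myId ORDER BY crtTs) does the same job with a single pass instead of a correlated scan.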
SQL and PL/SQL FAQ -
Hi all....
I must create a view to retrieve the data from a table "texto_email" ... and that table has a field named Texto of datatype CLOB. This field holds some HTML definitions and formatting tags.
But when I create the view, I get an error saying "view created with compilation errors".
When I comment out ("--") the line responsible for the CLOB field, it is created successfully.
What am I doing wrong? Is there something different to do in this case?
That is my view... if someone can help...
CREATE OR REPLACE FORCE VIEW vw_texto_email
AS
SELECT DISTINCT
te.id_texto_email
,te.id_texto_email_tipo as id_tet
,te.data_cadastro
,te.data_inicio
,te.descricao
,te.assunto
,te.texto AS texto -- that is the CLOB field
,tet.descricao as tipo_email
FROM
texto_email te
,texto_email_tipo tet
WHERE
te.id_texto_email_tipo = tet.id_texto_email_tipo
tnks....
To work with LOBs you have to know something about the concept of a LOB locator (a short tour through the documentation would be worthwhile).
A LOB locator is a pointer to large object data in a database.
Database LOB columns store LOB locators, and those locators point to the real data stored in a LOB segment.
To work with LOB data, you have to:
1) Retrieve LOB locator
2) Use a built-in package named DBMS_LOB to modify LOB
2.1 Open lob
2.2 ...
2.3 read/write LOB
2.4 close LOB
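The numbered steps above can be sketched in PL/SQL like this (a minimal read-only sketch; it reuses the poster's texto_email table and texto column, but the id value is hypothetical):

```sql
declare
  l_lob clob;
  l_amt integer := 100;
  l_buf varchar2(100);
begin
  -- 1) retrieve the LOB locator
  select texto into l_lob from texto_email where id_texto_email = 1;
  -- 2.1) open the LOB through the locator
  dbms_lob.open(l_lob, dbms_lob.lob_readonly);
  -- 2.3) read the first 100 characters
  dbms_lob.read(l_lob, l_amt, 1, l_buf);
  -- 2.4) close the LOB
  dbms_lob.close(l_lob);
end;
```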
Why do you want to retrieve a LOB locator in a view?
Note that the locator you retrieve can be in one of three states:
1) null
2) empty
3) populated with a valid pointer to the data -
Using long vs. clob datatype with Oracle 8.1.7 and interMedia
I am trying to determine the best datatype to use for a column I
wish to search using the interMedia contains() function. I am
developing a 100% java application using Oracle 8.1.7 on Linux.
I'd prefer to use the standard JDBC API's for PreparedStatement,
and not have to use Oracle extensions if possible. I've
discovered that there are limitations in the support for LOB's
in Oracle's 100% java driver. The PreparedStatement methods
like setAsciiStream() and setCharacterStream() are documented to
have flaws that may result in the corruption of data. I have
also noticed that socket exceptions sometimes occur when a large
amount of data is transferred. If I use the long datatype for
my table column, the setCharacterStream() method seems to
transfer the data correctly. When I try to search this column
using the interMedia contains() function, I get strange
results. If I run my search on Oracle 8.1.6 for Windows, the
results seem to be correct. If I run the same search on Oracle
8.1.7 for Linux, the results are usually incorrect. The same
searches seem to work correctly on both boxes when I change the
column type from long to clob. Using the clob type may not be
an option for me since the standard JDBC API's to transfer data
into internal clob fields are broken, and I may need to stick
with standard JDBC API's. My customer wishes to purchase a
version of Oracle for Linux that will allow us to implement the
search capability he requires. Any guidance would be greatly appreciated.
I've finally solved it!
I downloaded the following jre from blackdown:
jre118_v3-glibc-2.1.3-DYNMOTIF.tar.bz2
It's the only one that seems to work (and god, have I tried them all!)
I've no idea what the DYNMOTIF means (apart from being something to do with Motif - but you don't have to be a linux guru to work that out ;)) - but, hell, it works.
And after sitting in front of this machine for 3 days trying to deal with Oracle's frankly PATHETIC install, which is so full of holes and bugs, that's all I care about..
The JRE bundled with Oracle 8.1.7 doesn't work with Red Hat Linux 6.2EE.
Doesn't Oracle test their software?
Anyway I'm happy now, and I'm leaving this in case anybody else has the same problem.
Thanks for everyone's help. -
CLOB Datatype (Assigning more than 32k fails)
Dear All
Can anyone tell me why I am getting this error when I assign more than 32k characters to a CLOB variable in PL/SQL,
but I can assign it from a table to a variable?
Pl/sql 1(ORA-06502: PL/SQL: numeric or value error: character string buffer too small)
declare
c clob;
v varchar2(32767);
begin
for i in 1..90000 -- just assuming it as 90k
loop
v := v || 'x';
if length(v) > 31000 then
c := c || v; -- here I am getting the error while assigning characters to the clob if it is more than 32k
v := null;
end if;
end loop;
c := c || v; -- append the remainder
end;
Pl/sql 2 (works fine when assgin from table)
declare
c clob;
begin
select clob_data into c from x; -- clob data is more than 32k;
end;
But it works fine with database 9i rel2; in 11g I am facing this problem.
Regards
hey, you are getting the error because of v varchar(32767), but the clob datatype can take a large amount of data - at least 1,000,000 characters can be stored in a clob. Try it with this, or else see below.
Hi see the example below
create table temp(col clob);
declare
v_string clob;
begin
for i in 1..1000000 loop
v_string := v_string || 'A';
end loop;
insert into temp values (v_string);
end;
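As an aside, if || assignment onto the CLOB still raises ORA-06502 in your version, the documented DBMS_LOB route avoids string-buffer limits entirely. A sketch using a temporary LOB and WRITEAPPEND:

```sql
declare
  c clob;
  v varchar2(32767);
begin
  dbms_lob.createtemporary(c, true);
  v := rpad('x', 31000, 'x');
  for i in 1..3 loop
    dbms_lob.writeappend(c, length(v), v);  -- appends well past 32k without a VARCHAR2 limit
  end loop;
  dbms_lob.freetemporary(c);
end;
```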
now
select length(col) from temp; -- you will get the output
length(col)
1000000
This means that the clob datatype can store at least 1,000,000 characters. -
Hi folks
I have two columns which I am querying, and they are both of datatype CLOB. When I bring my SQL into my development environment, these two columns are displayed as nulls, even though the SQL works fine in Toad. So I'm pretty sure it's because of the datatype of these two columns - could someone please advise on how to convert these types to varchar?
Thanks in advance
Emma
Hi,
I had a similar problem. My front-end application is in Java. In my app I am selecting the CLOB columns and converting the CLOB data into a String (below is the method I used) before displaying it in my text-area fields. The Java String type can handle a large amount of data. Converting the CLOB data into varchar within a PL/SQL function will not work because we can't assign large (CLOB) data to a varchar variable. Hope this helps.
public String clobToString(Clob clob) throws SQLException, IOException {
String textString = "";
if (clob != null) {
// get stream from Clob object
java.io.Reader clobReader = clob.getCharacterStream();
// find the length
int textLength = (int) clob.length();
// prepare a char array
char[] textCharArray = new char[textLength];
// read the data from the clob
clobReader.read(textCharArray, 0, textLength);
// convert char array to string
textString = new String(textCharArray);
// close the stream
clobReader.close();
}
return textString;
}
SQL Error: ORA-12899: value too large for column
Hi,
I'm trying to understand the above error. It occurs when we are migrating data from one oracle database to another:
Error report:
SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
*Cause: An attempt was made to insert or update a column with a value
which is too wide for the width of the destination column.
The name of the column is given, along with the actual width
of the value, and the maximum allowed width of the column.
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.
*Action: Examine the SQL statement for correctness. Check source
and destination column data types.
Either make the destination column wider, or use a subset
of the source column (i.e. use substring).
The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The source and target table are identical and the column definitions are exactly the same. The column we get the error on is of CHAR(8). To migrate the data we use either a dblink or oracle datapump, both result in the same error. The data in the column is a fixed length string of 8 characters.
To resolve the error the column "COL_XYZ" gets widened by:
alter table TAB_XYZ modify (COL_XYZ varchar2(10));
-alter table TAB_XYZ succeeded.
We now move the data from the source into the target table without problem and then run:
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));
-Error report:
SQL Error: ORA-01441: cannot decrease column length because some value is too big
01441. 00000 - "cannot decrease column length because some value is too big"
*Cause:
*Action:
So we leave the column width at 10, but the curious thing is - once we have the data in the target table, we can then truncate the same table at source (ie. get rid of all the data) and move the data back in the original table (with COL_XYZ set at CHAR(8)) - without any issue.
My guess is the error has something to do with storage on the target database, but I would like to understand why. If anybody has an idea or a suggestion of what to look for - much appreciated.
Cheers.
843217 wrote:
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.
You are looking at character lengths vs. byte lengths.
The data in the column is a fixed length string of 8 characters.
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));
varchar2(8 byte) or varchar2(8 char)?
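To illustrate the byte-vs-char distinction (a sketch; it assumes an AL32UTF8 database character set, where accented characters occupy two bytes, and the table names are made up):

```sql
create table t_byte (c varchar2(8 byte));
create table t_char (c varchar2(8 char));

insert into t_char values ('ééééééée');  -- 8 characters (15 bytes): succeeds
insert into t_byte values ('ééééééée');  -- same 8 characters: ORA-12899, 15 bytes > 8 bytes
```

This is the usual explanation for "8-character" data failing to fit a CHAR(8) column during migration: the target column was created with byte semantics while the data contains multi-byte characters.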
Use SQL Reference for datatype specification, length function, etc.
For more info, reference {forum:id=50} forum on the topic. And of course, the Globalization support guide. -
How to upload data to a "CLOB" datatype field in a database table
Using sqlldr, what is the correct way to upload character data (far greater than the 4000 bytes allowed in varchar2)?
After setting the table field's datatype to CLOB and then using the field name against a large dataset in the control file, I get an error that the input record (field) is too large. Adding "clob" after the table field name in the sqlldr control file just gives me a syntax error.
Running Oracle 9.2.0.6 on Solaris 9.
user511518,
I think you're in the wrong forum. Perhaps you should start with this Web page:
http://www.oracle.com/technology/products/database/utilities/index.html
Look for the SQL*Loader links.
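In particular, the documented SQL*Loader feature for this case is the LOBFILE clause, which loads each CLOB from a secondary data file named per row. A control-file sketch (table and column names are hypothetical):

LOAD DATA
INFILE 'main.dat'
INTO TABLE mytable
FIELDS TERMINATED BY ','
( id         INTEGER EXTERNAL,
  clob_fname FILLER CHAR(255),              -- per-row path to the text file
  myfield    LOBFILE(clob_fname) TERMINATED BY EOF
)

Alternatively, an inline field bound for a CLOB column just needs an explicit large length, e.g. myfield CHAR(2147483647), since sqlldr's default CHAR length is only 255.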
Good Luck,
Avi. -
JQuery function stops working when trying to submit large amount of parameters
Hi,
Run CF9 on Windows 2008 R2 Server.
I have a cfm page with multiple dropdowns (some are cfselect) and dynamically generated lists of checkboxes interdependent on each other. I am using JQuery to submit data to cfc functions and to display data.
It was all working fine until we added a new company with a large number of records. This translated into a large URL query string with a lot of parameters submitted for processing. That's when we started to have problems. I noticed, when submitting the URL directly, that if the total number of characters in the URL is more than 2114, I get an error status code 302 Redirect and nothing is displayed.
I tried increasing postParametersLimit and postSizeLimit to 1000.0 in neo-runtime.xml and restarting the server, but this did not help.
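One common fix for this class of failure is to stop building a giant query string and pass the parameters in the POST body instead, via jQuery's data: option, which sidesteps URL-length limits entirely. A sketch (buildBillParams is a hypothetical helper name; the parameter names follow the cfc used in this thread):

```javascript
// Hypothetical helper: gather the values into a plain object so jQuery can send
// them in the POST body instead of appending them to the URL.
function buildBillParams(plID, sID, fromMonth, fromYear, toMonth, toYear) {
  return {
    method: "GetBillsArr",
    planenrolldate_id: plID,
    sponsorid: sID,
    fM: fromMonth,
    fY: fromYear,
    tM: toMonth,
    tY: toYear
  };
}
```

Inside the ajax call this would look like: $.ajax({ type: "POST", url: "../components/billing/custompremstatus.cfc", data: buildBillParams(plID, sID, fromMonth, fromYear, toMonth, toYear), dataType: "json", ... }).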
Below is jquery function:
function populateBills(){
var plID;
if ($('#planenrolldate_id').val() == undefined)
plID = $('input[name=planenrolldate_id]').val();
else
plID = $('#planenrolldate_id').val();
var sID = $('#sponsor_id').val();
var pID = $('#plan_id').val();
var fromMonth = $('#from_month').val();
var fromYear = $('#from_year').val();
var toMonth = $('#to_month').val();
var toYear = $('#to_year').val();
$.ajax({
type: "POST",
url: "../components/billing/custompremstatus.cfc?method=GetBillsArr&planenrolldate_id=" + plID + "&sponsorid=" + sID + "&fM=" + fromMonth + "&fY=" + fromYear + "&tM=" + toMonth + "&tY=" + toYear,
dataType: "json",
success:
function(data){
$.each(data, function(index, item) {
addBillsCheckboxes(item.bill_id, item.bill_period);
}); // end each
}, // end the success function
error:
function(){
alert("An error has occurred while fetching bills");
} // end the error function
}); // end ajax call
} // end of function
That's exactly from the console after I submit:
fM 9
fY 2014
method GetBillsArr
planenrolldate_id[] 564
planenrolldate_id[] 561
sponsor_id 59
tM 9
tY 2014
When I check just one planenrolldate_id then it looks like this
fM 9
fY 2014
method GetBillsArr
planenrolldate_id[] 561
sponsor_id 59
tM 9
tY 2014
This is the cfc part:
<cffunction name="GetBillsArr" access="remote" returnFormat="json" returnType="array">
<cfargument name="sponsor_id" type="any" required="true">
<cfargument name="planenrolldate_id" type="any" required="true">
<cfargument name="fM" type="numeric" required="true">
<cfargument name="fY" type="numeric" required="true">
<cfargument name="tM" type="numeric" required="true">
<cfargument name="tY" type="numeric" required="true">
<cfset var result=ArrayNew(1)>
<cfset var i = 0>
<cfif ARGUMENTS.planenrolldate_id EQ "" OR ARGUMENTS.sponsor_id EQ "">
<cfset LstBills = QueryNew("bill_id, bill_period", "Integer, Varchar")>
<cfelse>
<cfquery name="LstBills" ...>
SELECT bill_id,
(Convert(varchar(10),billing_start_date,103) + ' - ' + Convert(varchar(10),billing_end_date,103)) + ' [' + Convert(varchar(6),plan_id) + ']' AS bill_period
FROM ...
</cfquery>
</cfif>
<cfloop query="LstBills">
<cfset returnStruct = StructNew()>
<cfset returnStruct["bill_id"]= bill_id>
<cfset returnStruct["bill_period"] = bill_period>
<cfset ArrayAppend(result,returnStruct) />
</cfloop>
<cfreturn result>
</cffunction> -
Using CLOB datatypes in WHERE clause
Hi All,
I have a table with two columns as CLOB datatype. I'm using Oracle 8i Enterprise Edition. I can do a query, insert, update and even delete the data in the CLOB field using Oracle's SQL Plus.
What I want is to do a search on those fields - that is, include the field in a WHERE clause. I'm using this datatype to store a large amount of data.
I'd like to see this query working...
SELECT * FROM MyTable WHERE CLOBFLD LIKE 'Something...';
Now this query doesn't work. It returns: 'Inconsistent datatype' near the word CLOBFLD.
Please Help me out.
Regards,
Gopi
I presume you want to query based on the contents of the CLOB, right? If so, then you have to create a text index using Oracle Context, and then use "Contains" in the where clause to query. Hope this helps.
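Concretely, the Oracle Text route looks like this (a sketch; the index name is arbitrary and the schema needs the CTXAPP privileges):

```sql
CREATE INDEX clobfld_ctx_idx ON MyTable (CLOBFLD)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT * FROM MyTable
WHERE CONTAINS(CLOBFLD, 'Something') > 0;
```

For a one-off substring test without building an index, DBMS_LOB.INSTR(CLOBFLD, 'Something') > 0 in the WHERE clause also avoids the "inconsistent datatypes" error, at the cost of a full scan.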
-
PL/SQL datatype -- URGENT HELP..
Is there a PL/SQL datatype I can declare which can hold a value larger than 32k bytes?
Please note that the data value is not coming from the database;
rather, I will be getting it from an OUT variable of an external program call from a PL/SQL stored procedure.
THANKS.
LOB Types
The LOB (large object) datatypes BFILE, BLOB, CLOB, and NCLOB let you store blocks of unstructured data (such as text, graphic images, video clips, and sound waveforms) up to four gigabytes in size. And, they allow efficient, random, piece-wise access to the data.
The LOB types differ from the LONG and LONG RAW types in several ways. For example, LOBs (except NCLOB) can be attributes of an object type, but LONGs cannot. The maximum size of a LOB is four gigabytes, but the maximum size of a LONG is two gigabytes. Also, LOBs support random access to data, but LONGs support only sequential access.
LOB types store lob locators, which point to large objects stored in an external file, in-line (inside the row) or out-of-line (outside the row). Database columns of type BLOB, CLOB, NCLOB, or BFILE store the locators. BLOB, CLOB, and NCLOB data is stored in the database, in or outside the row. BFILE data is stored in operating system files outside the database.
PL/SQL operates on LOBs through the locators. For example, when you select a BLOB column value, only a locator is returned. If you got it during a transaction, the LOB locator includes a transaction ID, so you cannot use it to update that LOB in another transaction. Likewise, you cannot save a LOB locator during one session, then use it in another session.
Starting in Oracle9i, you can also convert CLOBs to CHAR and VARCHAR2 types and vice versa, or BLOBs to RAW and vice versa, which lets you use LOB types in most SQL and PL/SQL statements and functions. To read, write, and do piecewise operations on LOBs, you can use the supplied package DBMS_LOB. For more information, see Oracle9i Application Developer's Guide - Large Objects (LOBs).
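To make the Oracle9i conversion point concrete, a small PL/SQL sketch (the literal values are arbitrary):

```sql
declare
  c clob;
  v varchar2(100);
begin
  c := rpad('x', 50, 'x');      -- implicit VARCHAR2-to-CLOB assignment (9i+)
  v := substr(c, 1, 10);        -- character functions operate directly on the CLOB
  c := c || ' and back again';  -- concatenation works too; DBMS_LOB is only needed for piecewise access
end;
```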
BFILE
You use the BFILE datatype to store large binary objects in operating system files outside the database. Every BFILE variable stores a file locator, which points to a large binary file on the server. The locator includes a directory alias, which specifies a full path name (logical path names are not supported).
BFILEs are read-only, so you cannot modify them. The size of a BFILE is system dependent but cannot exceed four gigabytes (2**32 - 1 bytes). Your DBA makes sure that a given BFILE exists and that Oracle has read permissions on it. The underlying operating system maintains file integrity.
BFILEs do not participate in transactions, are not recoverable, and cannot be replicated. The maximum number of open BFILEs is set by the Oracle initialization parameter SESSION_MAX_OPEN_FILES, which is system dependent.
BLOB
You use the BLOB datatype to store large binary objects in the database, in-line or out-of-line. Every BLOB variable stores a locator, which points to a large binary object. The size of a BLOB cannot exceed four gigabytes.
BLOBs participate fully in transactions, are recoverable, and can be replicated. Changes made by package DBMS_LOB can be committed or rolled back. BLOB locators can span transactions (for reads only), but they cannot span sessions.
CLOB
You use the CLOB datatype to store large blocks of character data in the database, in-line or out-of-line. Both fixed-width and variable-width character sets are supported. Every CLOB variable stores a locator, which points to a large block of character data. The size of a CLOB cannot exceed four gigabytes.
CLOBs participate fully in transactions, are recoverable, and can be replicated. Changes made by package DBMS_LOB can be committed or rolled back. CLOB locators can span transactions (for reads only), but they cannot span sessions.
NCLOB
You use the NCLOB datatype to store large blocks of NCHAR data in the database, in-line or out-of-line. Both fixed-width and variable-width character sets are supported. Every NCLOB variable stores a locator, which points to a large block of NCHAR data. The size of an NCLOB cannot exceed four gigabytes.
NCLOBs participate fully in transactions, are recoverable, and can be replicated. Changes made by package DBMS_LOB can be committed or rolled back. NCLOB locators can span transactions (for reads only), but they cannot span sessions. -
Target File Size Too Large?
Hello,
We have an interface, that takes a Source File-> Transforms it in the Staging Area ->and outputs a Target File.
The problem is the target file is way too large than what it is supposed to be.
We do have the 'Truncate Option' turned ON, so its not duplicate records..
We think its the Physical and Logical Lengths that are defined for the Target files.
We think the logical length is way too large causing substantial 'spaces' between the data columns thereby increasing the file size.
We initially had the Logical Length for the data columns as 12 and we got the following error:
Arithmetic Overflow error converting numeric to data type numeric.
When we increased the Logical Length from 12 to 20, the interface executed fine without errors. But now the target file size is just way too large - roughly 5x what it should be.
Any suggestions to prevent these additional spaces in the target columns??
Appreciate your inputs!
Thanks
Since the file system does not have a 'column length' property, ODI will automatically set a standard column length according to the datatype of the column. In your case, as both your source and target are files, check the max length of each column in your source (e.g. if your file is huge, open it in Excel and verify the lengths) and set the same 'Logical Length' for your target file datastore.
Drop the temporary tables (set the 'delete temp objects' option to 'true' in the KMs) and re-run the interface. Hope this helps.
Thanks,
Parasuram. -
Problems displaying images stored in SQL Server as Datatype "image"
I am trying to display an image stored in SQL Server as datatype 'image', and it only shows a portion of the image.
It seems to be tied to the size (KB) of the image, since the larger the image, the less of it is shown before it cuts off (sometimes it cuts off mid-line, so it's about the file size and not about fitting the image on the screen).
Here is the code I am using that deals with the image.
[Bindable]
public var theImage:ByteArray = new ByteArray();
private function getScans_result(event:ResultEvent):void {
var imageByteArray:ByteArray = event.result[0].Image;
theImage = imageByteArray;
}
<mx:Image id="theIMG" width="160" height="220" source="{theImage}"/>
Any Thoughts??
I do that sort of thing using PHP and mySQL: the binary part of the image is stored in a BLOB field (Binary Large OBject), with the rest of the information necessary for the HTTP headers (e.g. mime type, file name, file size) stored in their own fields.
That info is then used to build the file using PHP. The PHP file contains a mySQL query whose result is echoed with the appropriate headers. The URL points at the PHP file rather than at any saved image.
Here's the php:
<?php
# display_imagebank_file.php
# Note: the ID is passed through the URL, e.g.
# this_files_name.php?id=1
$id = $_GET['id'];
# connect to mysql database
mysql_connect('HOST', 'USERNAME', 'PASSWORD');
mysql_select_db('DATABASE');
# run a query to get the file information
$query = mysql_query("SELECT FileName, MimeType, FileSize,
FileContents FROM blobTable WHERE ID='$id'");
# perform an error check
if(mysql_num_rows($query)==1){
$fileName = mysql_result($query,0,0);
$fileType = mysql_result($query,0,1);
$fileSize = mysql_result($query,0,2);
$fileContents = mysql_result($query,0,3);
header("Content-type: $fileType");
header("Content-length: $fileSize");
header("Content-Disposition: inline; filename=$fileName");
header("Content-Description: from imagebank");
header("Connection: close");
echo $fileContents;
}else{
$numRows = mysql_num_rows($query);
echo "File not found <br>";
echo $numRows;
echo " is the number of rows returned for id = ";
echo $id;
}
?>
Hope that helps
Phil -
Database Variant to Data.vi not working for the Date datatype with LV 8.2?
I'm moving a large body of LV database code from LV 7.1 to 8.2 and find that the Database Variant to Data.vi is not working correctly when used with the Date datatype. It works fine with 8.0, and the common Variant to Data works also. Am I missing something? Thanks in advance for any assistance. Wes
Thanks for the prompt reply Crystal,
The data is stored in an Oracle database using the DATE type. I'm querying many rows along with other columns and converting each of the values as necessary for each column with the 'Database Variant to Data' VI. Only the conversion to Timestamp no longer works as of version 8.2. I recognize that plain Variant to Data works, but I have many (hundreds of) VIs to change if that is the only solution (not the end of the world). Most often the dates are originally generated in the database by PL/SQL procedures calling SYSDATE, which look like 5/1/2006 11:56:26 AM (in TOAD anyway), and which I then need to read into LV as type Timestamp.
Regards, Wes.