Binary to integer?
I know there are methods for this, but I'm trying to figure out a way to do it recursively. I know how to convert an integer to binary recursively (split it into its remainder and the quotient after dividing by 2), but I'm having a lot of trouble going the other way. Any ideas?
Thanks!
Do you know anything about powers of 2?
First time through you get the rightmost digit and multiply it by 2^0, then recurse using the binary string minus the rightmost char and an incremented iteration value...
Here is a partial start for you:
public int binaryStringToInt(String bs, int i){
    //bs is the binary string you wish to convert
    //i is your iteration variable
    int intValue = 0;
    if(bs.length() == 1){
        //your code here
    }else{
        //recursive call here, e.g. binaryStringToInt(......)
    }
    return intValue;
}
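Filling in the skeleton above, here is one possible complete recursion. Note this sketch drops the iteration parameter and works from the rightmost character instead, so the signature differs from the partial start; it is an illustration, not the original poster's code.

```java
public class BinaryParser {
    // Recursively convert a binary string such as "1101" to its int value.
    // Base case: a single character is just its own digit value.
    // Recursive case: the value of all-but-the-last char, doubled (one shift left),
    // plus the last digit, which contributes 2^0.
    public static int binaryStringToInt(String bs) {
        int lastDigit = bs.charAt(bs.length() - 1) - '0';
        if (bs.length() == 1) {
            return lastDigit;
        }
        return 2 * binaryStringToInt(bs.substring(0, bs.length() - 1)) + lastDigit;
    }

    public static void main(String[] args) {
        System.out.println(binaryStringToInt("1101"));   // 13
        System.out.println(binaryStringToInt("0"));      // 0
        System.out.println(binaryStringToInt("100000")); // 32
    }
}
```

This is the same remainder/quotient idea as the int-to-binary direction, just run in reverse.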
Similar Messages
-
Addition/Subtraction of Binary. Converting Binary to Integer
Hi,
I wanted to know how you can add to or subtract from a binary value. Let's say xyz is binary now... I want to convert 123 to binary form and add it to or subtract it from xyz. Is this possible?
Finally, I should be able to convert this new xyz, i.e. after the addition or subtraction, back to a number... please let me know what should be done....
I have already looked at toString, toBase64, CharsetEncode, and BinaryEncode, and I don't think I will be able to use them before I understand that add/subtract part...
Thanks in advance...
Do you know anything about powers of 2? The first time through you get the rightmost digit and multiply by 2^0, then recurse using the binary string minus the rightmost char and an iteration value; the same partial binaryStringToInt start shown above applies here. -
Programming binary data in Java
A use case contains a series of yes/no options. In the web front end, they are in the form of checkboxes. For anyone with a CS background, it is natural to think of this as a series of bits stored as an integer in Java. One advantage of this approach is that no change is needed to the application's back-end DB table if the options change, such as when more options are added.
The data conversion is a problem with this approach, however, in Java. It is not too bad to convert a series of binary data to an integer:
StringBuilder sb = new StringBuilder();
sb.append(option1 ? "1" : "0");
sb.append(option2 ? "1" : "0");
Integer.valueOf(sb.toString());
I have some problems with the conversion from an integer back to a series of binary data: Integer.toBinaryString(anInteger) doesn't preserve the number of digits when the most significant digits are zero.
Any better way?
vwuvancouver wrote:
A use case contains a series of yes/no options. In the web front end, they are in the form of checkboxes. For anyone with a CS background, it is natural to think of this as a series of bits stored as an integer in Java. One advantage of this approach is that no change is needed to the application's back-end DB table if the options change, such as when more options are added.
For anyone with practical experience in databases and servers it is easy to think of that as likely being the wrong solution. ESPECIALLY if you anticipate more options might be needed.
Any better way?
Depends on what you are actually doing.
But if, and only if, you want to manipulate bits, then the easiest way is to use the bit operators that Java has, and it would not require any conversions at all. I would like to think someone with a formal (education) background in computer science should have some idea of how to do that. -
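To illustrate the bit-operator suggestion in the reply above, here is a minimal sketch of packing checkbox options into an int without any string conversion. The option names and bit positions are hypothetical, chosen only for the example:

```java
public class OptionFlags {
    // Hypothetical option positions: the bit each checkbox occupies.
    static final int OPTION_A = 1 << 0;
    static final int OPTION_B = 1 << 1;
    static final int OPTION_C = 1 << 2;

    public static void main(String[] args) {
        int flags = 0;
        flags |= OPTION_A;   // set option A
        flags |= OPTION_C;   // set option C
        flags &= ~OPTION_A;  // clear option A again

        System.out.println((flags & OPTION_C) != 0); // true: C is set
        System.out.println(flags);                   // 4
    }
}
```

The int stores directly in a NUMBER column, and adding a new option just means defining the next bit; no leading-zero problems arise because no string form is involved.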
Converting binary digits to its decimal equivalent.
Are there any methods to do such a conversion??
I have been doing it the hard way. :(
hi,
look at the
public static int parseInt(String s, int radix) throws NumberFormatException
method in the Integer class. There are examples in the API; here is what it shows:
parseInt("0", 10) returns 0
parseInt("473", 10) returns 473
parseInt("-0", 10) returns 0
parseInt("-FF", 16) returns -255
parseInt("1100110", 2) returns 102
parseInt("2147483647", 10) returns 2147483647
parseInt("-2147483648", 10) returns -2147483648
parseInt("2147483648", 10) throws a NumberFormatException
parseInt("99", 8) throws a NumberFormatException
parseInt("Kona", 10) throws a NumberFormatException
parseInt("Kona", 27) returns 411787
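For the binary-to-decimal case specifically, a minimal round trip using the methods above looks like this:

```java
public class RadixDemo {
    public static void main(String[] args) {
        // Parse a binary string to an int (radix 2)...
        int n = Integer.parseInt("1100110", 2);
        System.out.println(n); // 102

        // ...and format an int back to a binary string.
        System.out.println(Integer.toBinaryString(102)); // 1100110
    }
}
```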
there is one showing parseInt("1100110", 2) returning 102: binary to integer. -
When converting an integer to binary using Integer.toBinaryString, the leading zeros are removed. Is there a way to keep the leading zeros in the conversion?
Any help would be appreciated!
Thanks!
when converting an integer to binary using Integer.toBinaryString, the leading zeros are removed. Is there a way to keep the leading zeros in the conversion?
If by integer you mean int, you never had the leading zeros in the first place... other than legosa's way, you could be a bit sneakier, and do:
String thirty2 = Integer.toBinaryString(2);
int length = thirty2.length();
String temp = "00000000000000000000000000000000" + thirty2;
thirty2 = temp.substring(length, 32 + length);
System.out.println(thirty2.length() + ": " + thirty2); -
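A tidier alternative to concatenating a literal string of 32 zeros, as in the snippet above, is to left-pad with String.format and then swap the pad spaces for zeros:

```java
public class PadBinary {
    public static void main(String[] args) {
        String bits = Integer.toBinaryString(2);
        // %32s left-pads to 32 characters with spaces; replace turns them into zeros.
        String padded = String.format("%32s", bits).replace(' ', '0');
        System.out.println(padded.length() + ": " + padded);
    }
}
```

This avoids hand-counting zeros and adapts to other widths by changing the format width.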
I was wondering if there is a function for turning an integer into a binary number? Thanks (I tried looking it up, but all that came up was binary search and some other odd topics...)
int i = 1023;
// Parse and format to binary
i = Integer.parseInt("1111111111", 2); // 1023
String s = Integer.toString(i, 2); // 1111111111
// Parse and format to octal
i = Integer.parseInt("1777", 8); // 1023
s = Integer.toString(i, 8); // 1777
// Parse and format to decimal
i = Integer.parseInt("1023"); // 1023
s = Integer.toString(i); // 1023
// Parse and format to hexadecimal
i = Integer.parseInt("3ff", 16); // 1023
s = Integer.toString(i, 16); // 3ff
// Parse and format to arbitrary radix <= Character.MAX_RADIX
int radix = 32;
i = Integer.parseInt("vv", radix); // 1023
s = Integer.toString(i, radix); // vv -
Help needed for hash_area_size setting for Datawarehouse environment
We have an Oracle 10g data warehousing environment, running on a 3-node RAC
with 16 GB RAM and 4 CPUs per node, and roughly 200 users plus night jobs running on this D/W.
We find that the query performance of all ETL processes and joins is quite slow.
How much should we increase the value of the hash_area_size parameter for this data warehouse environment? This is a production database, with Oracle Database 10g Enterprise Edition Release 10.1.0.5.0.
We use the OWB 10g tool for this D/W, and we need to change hash_area_size to increase the performance of the ETL processes.
These are the Oracle init parameter settings used, as shown below:
Kindly suggest.
Thanks & best regards,
===========================================================
ORBIT
__db_cache_size 1073741824
__java_pool_size 67108864
__large_pool_size 318767104
__shared_pool_size 1744830464
_optimizer_cost_based_transformation OFF
active_instance_count
aq_tm_processes 1
archive_lag_target 0
asm_diskgroups
asm_diskstring
asm_power_limit 1
audit_file_dest /dboracle/orabase/product/10.1.0/rdbms/audit
audit_sys_operations FALSE
audit_trail NONE
background_core_dump partial
background_dump_dest /dborafiles/orbit/ORBIT01/admin/bdump
backup_tape_io_slaves TRUE
bitmap_merge_area_size 1048576
blank_trimming FALSE
buffer_pool_keep
buffer_pool_recycle
circuits
cluster_database TRUE
cluster_database_instances 3
cluster_interconnects
commit_point_strength 1
compatible 10.1.0
control_file_record_keep_time 90
control_files #NAME?
core_dump_dest /dborafiles/orbit/ORBIT01/admin/cdump
cpu_count 4
create_bitmap_area_size 8388608
create_stored_outlines
cursor_sharing EXACT
cursor_space_for_time FALSE
db_16k_cache_size 0
db_2k_cache_size 0
db_32k_cache_size 0
db_4k_cache_size 0
db_8k_cache_size 0
db_block_buffers 0
db_block_checking FALSE
db_block_checksum TRUE
db_block_size 8192
db_cache_advice ON
db_cache_size 1073741824
db_create_file_dest #NAME?
db_create_online_log_dest_1 #NAME?
db_create_online_log_dest_2 #NAME?
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5
db_domain
db_file_multiblock_read_count 64
db_file_name_convert
db_files 999
db_flashback_retention_target 1440
db_keep_cache_size 0
db_name ORBIT
db_recovery_file_dest #NAME?
db_recovery_file_dest_size 262144000000
db_recycle_cache_size 0
db_unique_name ORBIT
db_writer_processes 1
dbwr_io_slaves 0
ddl_wait_for_locks FALSE
dg_broker_config_file1 /dboracle/orabase/product/10.1.0/dbs/dr1ORBIT.dat
dg_broker_config_file2 /dboracle/orabase/product/10.1.0/dbs/dr2ORBIT.dat
dg_broker_start FALSE
disk_asynch_io TRUE
dispatchers
distributed_lock_timeout 60
dml_locks 9700
drs_start FALSE
enqueue_resources 10719
event
fal_client
fal_server
fast_start_io_target 0
fast_start_mttr_target 0
fast_start_parallel_rollback LOW
file_mapping FALSE
fileio_network_adapters
filesystemio_options asynch
fixed_date
gc_files_to_locks
gcs_server_processes 2
global_context_pool_size
global_names FALSE
hash_area_size 131072
hi_shared_memory_address 0
hpux_sched_noage 0
hs_autoregister TRUE
ifile
instance_groups
instance_name ORBIT01
instance_number 1
instance_type RDBMS
java_max_sessionspace_size 0
java_pool_size 67108864
java_soft_sessionspace_limit 0
job_queue_processes 10
large_pool_size 318767104
ldap_directory_access NONE
license_max_sessions 0
license_max_users 0
license_sessions_warning 0
local_listener
lock_name_space
lock_sga FALSE
log_archive_config
log_archive_dest
log_archive_dest_1 LOCATION=+ORBT_A06635_DATA1_ASM/ORBIT/ARCHIVELOG/
log_archive_dest_10
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_state_1 enable
log_archive_dest_state_10 enable
log_archive_dest_state_2 enable
log_archive_dest_state_3 enable
log_archive_dest_state_4 enable
log_archive_dest_state_5 enable
log_archive_dest_state_6 enable
log_archive_dest_state_7 enable
log_archive_dest_state_8 enable
log_archive_dest_state_9 enable
log_archive_duplex_dest
log_archive_format %t_%s_%r.arc
log_archive_local_first TRUE
log_archive_max_processes 2
log_archive_min_succeed_dest 1
log_archive_start FALSE
log_archive_trace 0
log_buffer 1167360
log_checkpoint_interval 0
log_checkpoint_timeout 1800
log_checkpoints_to_alert FALSE
log_file_name_convert
logmnr_max_persistent_sessions 1
max_commit_propagation_delay 700
max_dispatchers
max_dump_file_size UNLIMITED
max_enabled_roles 150
max_shared_servers
nls_calendar
nls_comp
nls_currency #
nls_date_format DD-MON-RRRR
nls_date_language ENGLISH
nls_dual_currency ?
nls_iso_currency UNITED KINGDOM
nls_language ENGLISH
nls_length_semantics BYTE
nls_nchar_conv_excp FALSE
nls_numeric_characters
nls_sort
nls_territory UNITED KINGDOM
nls_time_format HH24.MI.SSXFF
nls_time_tz_format HH24.MI.SSXFF TZR
nls_timestamp_format DD-MON-RR HH24.MI.SSXFF
nls_timestamp_tz_format DD-MON-RR HH24.MI.SSXFF TZR
O7_DICTIONARY_ACCESSIBILITY FALSE
object_cache_max_size_percent 10
object_cache_optimal_size 102400
olap_page_pool_size 0
open_cursors 1024
open_links 4
open_links_per_instance 4
optimizer_dynamic_sampling 2
optimizer_features_enable 10.1.0.5
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
os_authent_prefix ops$
os_roles FALSE
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning TRUE
parallel_execution_message_size 4096
parallel_instance_group
parallel_max_servers 80
parallel_min_percent 0
parallel_min_servers 0
parallel_server TRUE
parallel_server_instances 3
parallel_threads_per_cpu 2
pga_aggregate_target 8589934592
plsql_code_type INTERPRETED
plsql_compiler_flags INTERPRETED
plsql_debug FALSE
plsql_native_library_dir
plsql_native_library_subdir_count 0
plsql_optimize_level 2
plsql_v2_compatibility FALSE
plsql_warnings DISABLE:ALL
pre_page_sga FALSE
processes 600
query_rewrite_enabled TRUE
query_rewrite_integrity enforced
rdbms_server_dn
read_only_open_delayed FALSE
recovery_parallelism 0
remote_archive_enable TRUE
remote_dependencies_mode TIMESTAMP
remote_listener
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
replication_dependency_tracking TRUE
resource_limit FALSE
resource_manager_plan
resumable_timeout 0
rollback_segments
serial_reuse disable
service_names ORBIT
session_cached_cursors 0
session_max_open_files 10
sessions 2205
sga_max_size 3221225472
sga_target 3221225472
shadow_core_dump partial
shared_memory_address 0
shared_pool_reserved_size 102760448
shared_pool_size 318767104
shared_server_sessions
shared_servers 0
skip_unusable_indexes TRUE
smtp_out_server
sort_area_retained_size 0
sort_area_size 65536
sp_name ORBIT
spfile #NAME?
sql_trace FALSE
sql_version NATIVE
sql92_security FALSE
sqltune_category DEFAULT
standby_archive_dest ?/dbs/arch
standby_file_management MANUAL
star_transformation_enabled TRUE
statistics_level TYPICAL
streams_pool_size 0
tape_asynch_io TRUE
thread 1
timed_os_statistics 0
timed_statistics TRUE
trace_enabled TRUE
tracefile_identifier
transactions 2425
transactions_per_rollback_segment 5
undo_management AUTO
undo_retention 7200
undo_tablespace UNDOTBS1
use_indirect_data_buffers FALSE
user_dump_dest /dborafiles/orbit/ORBIT01/admin/udump
utl_file_dir /orbit_serial/oracle/utl_out
workarea_size_policy AUTO
The parameters are already unset in the environment, but do show up in v$parameter, much like shared_pool_size is visible in v$parameter despite only sga_target being set.
SQL> show parameter sort
NAME TYPE VALUE
sortelimination_cost_ratio integer 5
nls_sort string binary
sort_area_retained_size integer 0
sort_area_size integer 65536
SQL> show parameter hash
NAME TYPE VALUE
hash_area_size integer 131072
SQL> exit
hash_area_size and sort_area_size only take effect when automatic PGA memory management is not in use; your instance has workarea_size_policy = AUTO and pga_aggregate_target set, so they are ignored, and turning automatic management off is not supported in EBS databases.
Database Initialization Parameters for Oracle Applications 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216205.1 -
Trying to get multiple cell values within a geometry
I am provided with 3 tables:
1 - The GeoRaster
2 - The geoRasterData table
3 - A VAT table whose PK is the cell value from the above tables
Currently the user can select a point in our application and by using the getCellValue we get the cell value which is the PK on the 3rd table and this gives us the details to return to the user.
We now want to give the worst scenario within a given geometry or distance. So if I get back all the cell values within a given geometry/distance I can then call my other functions against the 3rd table to get the worst scores.
I had a conversation open for this before where JeffreyXie had some brilliant input, but it got archived while I was waiting on Oracle to resolve a bug (about 7 months)
See:
Trying to get multiple cell values within a geometry
If I am looking to get a list of cell values that interact with my geometry/distance and then loop through them, is there a better way?
BTW, if anybody wants to play with this functionality, it only seems to work in 11.2.0.4.
Below is the code I was using last. I think it is getting the cell values, but the numbers coming back are not correct; I think I am converting the binary to integer wrong.
Any ideas?
CREATE OR REPLACE FUNCTION GEOSUK.getCellValuesInGeom_FNC RETURN VARCHAR2 AS
gr sdo_georaster;
lb blob;
win1 sdo_geometry;
win2 sdo_number_array;
status VARCHAR2(1000) := NULL;
CDP varchar2(80);
FLT number := 0;
cdl number;
vals varchar2(32000) := null;
VAL number;
amt0 integer;
amt integer;
off integer;
len integer;
buf raw(32767);
MAXV number := null;
r1 raw(1);
r2 raw(2);
r4 raw(200);
r8 raw(8);
MATCH varchar2(10) := '';
ROW_COUNT integer := 0;
COL_COUNT integer := 0;
ROW_CUR integer := 0;
COL_CUR integer := 0;
CUR_XOFFSET integer := 0;
CUR_YOFFSET integer := 0;
ORIGINY integer := 0;
ORIGINX integer := 0;
XOFF number(38,0) := 0;
YOFF number(38,0) := 0;
BEGIN
status := '1';
SELECT a.georaster INTO gr FROM JBA_MEGARASTER_1012 a WHERE id=1;
-- first figure out the celldepth from the metadata
cdp := gr.metadata.extract('/georasterMetadata/rasterInfo/cellDepth/text()',
'xmlns=http://xmlns.oracle.com/spatial/georaster').getStringVal();
if cdp = '32BIT_REAL' then
flt := 1;
end if;
cdl := sdo_geor.getCellDepth(gr);
if cdl < 8 then
-- if celldepth<8bit, get the cell values as 8bit integers
cdl := 8;
end if;
dbms_lob.createTemporary(lb, TRUE);
status := '2';
-- querying/clipping polygon
win1 := SDO_GEOM.SDO_BUFFER(SDO_GEOMETRY(2001,27700,MDSYS.SDO_POINT_TYPE(473517,173650.3, NULL),NULL,NULL), 10, .005);
status := '1.2';
sdo_geor.getRasterSubset(gr, 0, win1, '1',
lb, win2, NULL, NULL, 'TRUE');
-- Then work on the resulting subset stored in lb.
status := '2.3';
DBMS_OUTPUT.PUT_LINE ( 'cdl: '||cdl );
len := dbms_lob.getlength(lb);
cdl := cdl / 8;
-- make sure to read all the bytes of a cell value at one run
amt := floor(32767 / cdl) * cdl;
amt0 := amt;
status := '3';
ROW_COUNT := (WIN2(3) - WIN2(1))+1;
COL_COUNT := (WIN2(4) - WIN2(2))+1;
--NEED TO FETCH FROM RASTER
ORIGINY := 979405;
ORIGINX := 91685;
--CALCULATE BLOB AREA
YOFF := ORIGINY - (WIN2(1) * 5); --177005;
XOFF := ORIGINX + (WIN2(2) * 5); --530505;
status := '4';
--LOOP CELLS
off := 1;
WHILE off <= LEN LOOP
dbms_lob.read(lb, amt, off, buf);
for I in 1..AMT/CDL LOOP
if cdl = 1 then
r1 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
VAL := UTL_RAW.CAST_TO_BINARY_INTEGER(R1);
elsif cdl = 2 then
r2 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
val := utl_raw.cast_to_binary_integer(r2);
ELSIF CDL = 4 then
IF (((i-1)*cdl+1) + cdl) > len THEN
r4 := utl_raw.substr(buf, (i-1)*cdl+1, (len - ((i-1)*cdl+1)));
ELSE
r4 := utl_raw.substr(buf, (i-1)*cdl+1, cdl+1);
END IF;
if flt = 0 then
val := utl_raw.cast_to_binary_integer(r4);
else
val := utl_raw.cast_to_binary_float(r4);
end if;
elsif cdl = 8 then
r8 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
val := utl_raw.cast_to_binary_double(r8);
end if;
if MAXV is null or MAXV < VAL then
MAXV := VAL;
end if;
IF i = 1 THEN
VALS := VALS || VAL;
ELSE
VALS := VALS ||'|'|| VAL;
END IF;
end loop;
off := off+amt;
amt := amt0;
end loop;
dbms_lob.freeTemporary(lb);
status := '5';
RETURN VALS;
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20001, 'GENERAL ERROR IN MY PROC, Status: '||status||', SQL ERROR: '||SQLERRM);
END;
Hey guys,
Zzhang,
That's a good spot and, as it happens, I spotted that too, which is why I am sure I am querying that lob wrong. I always get the logic running past the total length of the lob.
I think I am ok using 11.2.0.4; if I can get this working it is really important to us, so saying to roll up to 11.2.0.4 for this would be no problem.
The error in 11.2.0.3 was an internal error: [kghstack_underflow_internal_3].
Something that I think I need to find out more about, but am struggling to get more information on, is this: I am assuming that the lob that is returned is all cell values, or at least an array of 4-byte (32-bit) chunks, although I don't know this.
Is that a correct assumption, or is there more to it?
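One way to sanity-check the raw-byte-to-value decoding outside the database is to feed known bytes through Java's ByteBuffer. This is only an illustration of the 4-byte-cell assumption (the byte values are made up, and ByteBuffer is not part of the original function); it shows why an integer cast and a 32BIT_REAL float cast of the same bytes give very different answers:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CellDecode {
    public static void main(String[] args) {
        // Four raw bytes as they might come back in the subset blob.
        byte[] raw = {0x00, 0x00, 0x00, 0x2A};

        // Interpreted as a big-endian 32-bit integer...
        int asInt = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getInt();
        System.out.println(asInt); // 42

        // ...versus a 32BIT_REAL cell, which must be decoded as a float instead:
        // the same bit pattern is a tiny denormal, nowhere near 42.
        float asFloat = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getFloat();
        System.out.println(asFloat);
    }
}
```

If the numbers from the PL/SQL loop look wrong, comparing them against this kind of decode for a handful of known cells narrows down whether the bug is in the offsets or in the cast.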
Have either of you seen any documentation on how to query this lob?
Thanks -
Error in .dbf to table loading
Hi all,
I am loading a .dbf file into a table. I took the code from Google, but I don't know why it's not working when I run it.
The error is below... can anyone help me?
BEGIN
EXPRESS.DBF2ORA.LOAD_TABLE ( 'DATA_201', 'SAL_HEAD.DBF', 'dbf_tab_pc', NULL, false );
END;
error is ....
ORA-22285: non-existent directory or file for FILEOPEN operation
ORA-06512: at "EXPRESS.DBF2ORA", line 414
ORA-06512: at line 2
package code is ...
create or replace package body dbase_pkg
as
-- Might have to change on your platform!!!
-- Controls the byte order of binary integers read in
-- from the dbase file
BIG_ENDIAN constant boolean default TRUE;
type dbf_header is RECORD (
version varchar2(25), -- dBASE version number
year int, -- 1 byte int year, add to 1900
month int, -- 1 byte month
day int, -- 1 byte day
no_records int, -- number of records in file,
-- 4 byte int
hdr_len int, -- length of header, 2 byte int
rec_len int, -- number of bytes in record,
-- 2 byte int
no_fields int -- number of fields
);
type field_descriptor is RECORD (
name varchar2(11),
type char(1),
length int, -- 1 byte length
decimals int -- 1 byte scale
);
type field_descriptor_array
is table of
field_descriptor index by binary_integer;
type rowArray
is table of
varchar2(4000) index by binary_integer;
g_cursor binary_integer default dbms_sql.open_cursor;
-- Function to convert a binary unsigned integer
-- into a PLSQL number
function to_int( p_data in varchar2 ) return number
is
l_number number default 0;
l_bytes number default length(p_data);
begin
if (big_endian)
then
for i in 1 .. l_bytes loop
l_number := l_number +
ascii(substr(p_data,i,1)) *
power(2,8*(i-1));
end loop;
else
for i in 1 .. l_bytes loop
l_number := l_number +
ascii(substr(p_data,l_bytes-i+1,1)) *
power(2,8*(i-1));
end loop;
end if;
return l_number;
end;
-- Routine to parse the DBASE header record, can get
-- all of the details of the contents of a dbase file from
-- this header
procedure get_header
(p_bfile in bfile,
p_bfile_offset in out NUMBER,
p_hdr in out dbf_header,
p_flds in out field_descriptor_array )
is
l_data varchar2(100);
l_hdr_size number default 32;
l_field_desc_size number default 32;
l_flds field_descriptor_array;
begin
p_flds := l_flds;
l_data := utl_raw.cast_to_varchar2(
dbms_lob.substr( p_bfile,
l_hdr_size,
p_bfile_offset ) );
p_bfile_offset := p_bfile_offset + l_hdr_size;
p_hdr.version := ascii( substr( l_data, 1, 1 ) );
p_hdr.year := 1900 + ascii( substr( l_data, 2, 1 ) );
p_hdr.month := ascii( substr( l_data, 3, 1 ) );
p_hdr.day := ascii( substr( l_data, 4, 1 ) );
p_hdr.no_records := to_int( substr( l_data, 5, 4 ) );
p_hdr.hdr_len := to_int( substr( l_data, 9, 2 ) );
p_hdr.rec_len := to_int( substr( l_data, 11, 2 ) );
p_hdr.no_fields := trunc( (p_hdr.hdr_len - l_hdr_size)/
l_field_desc_size );
for i in 1 .. p_hdr.no_fields
loop
l_data := utl_raw.cast_to_varchar2(
dbms_lob.substr( p_bfile,
l_field_desc_size,
p_bfile_offset ));
p_bfile_offset := p_bfile_offset + l_field_desc_size;
p_flds(i).name := rtrim(substr(l_data,1,11),chr(0));
p_flds(i).type := substr( l_data, 12, 1 );
p_flds(i).length := ascii( substr( l_data, 17, 1 ) );
p_flds(i).decimals := ascii(substr(l_data,18,1) );
end loop;
p_bfile_offset := p_bfile_offset +
mod( p_hdr.hdr_len - l_hdr_size,
l_field_desc_size );
end;
function build_insert
( p_tname in varchar2,
p_cnames in varchar2,
p_flds in field_descriptor_array ) return varchar2
is
l_insert_statement long;
begin
l_insert_statement := 'insert into ' || p_tname || '(';
if ( p_cnames is NOT NULL )
then
l_insert_statement := l_insert_statement ||
p_cnames || ') values (';
else
for i in 1 .. p_flds.count
loop
if ( i <> 1 )
then
l_insert_statement := l_insert_statement||',';
end if;
l_insert_statement := l_insert_statement ||
'"'|| p_flds(i).name || '"';
end loop;
l_insert_statement := l_insert_statement ||
') values (';
end if;
for i in 1 .. p_flds.count
loop
if ( i <> 1 )
then
l_insert_statement := l_insert_statement || ',';
end if;
if ( p_flds(i).type = 'D' )
then
l_insert_statement := l_insert_statement ||
'to_date(:bv' || i || ',''yyyymmdd'' )';
else
l_insert_statement := l_insert_statement ||
':bv' || i;
end if;
end loop;
l_insert_statement := l_insert_statement || ')';
return l_insert_statement;
end;
function get_row
( p_bfile in bfile,
p_bfile_offset in out number,
p_hdr in dbf_header,
p_flds in field_descriptor_array ) return rowArray
is
l_data varchar2(4000);
l_row rowArray;
l_n number default 2;
begin
l_data := utl_raw.cast_to_varchar2(
dbms_lob.substr( p_bfile,
p_hdr.rec_len,
p_bfile_offset ) );
p_bfile_offset := p_bfile_offset + p_hdr.rec_len;
l_row(0) := substr( l_data, 1, 1 );
for i in 1 .. p_hdr.no_fields loop
l_row(i) := rtrim(ltrim(substr( l_data,
l_n,
p_flds(i).length ) ));
if ( p_flds(i).type = 'F' and l_row(i) = '.' )
then
l_row(i) := NULL;
end if;
l_n := l_n + p_flds(i).length;
end loop;
return l_row;
end get_row;
procedure show( p_hdr in dbf_header,
p_flds in field_descriptor_array,
p_tname in varchar2,
p_cnames in varchar2,
p_bfile in bfile )
is
l_sep varchar2(1) default ',';
procedure p(p_str in varchar2)
is
l_str long default p_str;
begin
while( l_str is not null )
loop
dbms_output.put_line( substr(l_str,1,250) );
l_str := substr( l_str, 251 );
end loop;
end;
begin
p( 'Sizeof DBASE File: ' || dbms_lob.getlength(p_bfile) );
p( 'DBASE Header Information: ' );
p( chr(9)||'Version = ' || p_hdr.version );
p( chr(9)||'Year = ' || p_hdr.year );
p( chr(9)||'Month = ' || p_hdr.month );
p( chr(9)||'Day = ' || p_hdr.day );
p( chr(9)||'#Recs = ' || p_hdr.no_records);
p( chr(9)||'Hdr Len = ' || p_hdr.hdr_len );
p( chr(9)||'Rec Len = ' || p_hdr.rec_len );
p( chr(9)||'#Fields = ' || p_hdr.no_fields );
p( chr(10)||'Data Fields:' );
for i in 1 .. p_hdr.no_fields
loop
p( 'Field(' || i || ') '
|| 'Name = "' || p_flds(i).name || '", '
|| 'Type = ' || p_flds(i).Type || ', '
|| 'Len = ' || p_flds(i).length || ', '
|| 'Scale= ' || p_flds(i).decimals );
end loop;
p( chr(10) || 'Insert We would use:' );
p( build_insert( p_tname, p_cnames, p_flds ) );
p( chr(10) || 'Table that could be created to hold data:');
p( 'create table ' || p_tname );
p( '(' );
for i in 1 .. p_hdr.no_fields
loop
if ( i = p_hdr.no_fields ) then l_sep := ')'; end if;
dbms_output.put
( chr(9) || '"' || p_flds(i).name || '" ');
if ( p_flds(i).type = 'D' ) then
p( 'date' || l_sep );
elsif ( p_flds(i).type = 'F' ) then
p( 'float' || l_sep );
elsif ( p_flds(i).type = 'N' ) then
if ( p_flds(i).decimals > 0 )
then
p( 'number('||p_flds(i).length||','||
p_flds(i).decimals || ')' ||
l_sep );
else
p( 'number('||p_flds(i).length||')'||l_sep );
end if;
else
p( 'varchar2(' || p_flds(i).length || ')'||l_sep);
end if;
end loop;
p( '/' );
end;
procedure load_Table( p_dir in varchar2,
p_file in varchar2,
p_tname in varchar2,
p_cnames in varchar2 default NULL,
p_show in boolean default FALSE )
is
l_bfile bfile;
l_offset number default 1;
l_hdr dbf_header;
l_flds field_descriptor_array;
l_row rowArray;
begin
l_bfile := bfilename( p_dir, p_file );
dbms_lob.fileopen( l_bfile );
get_header( l_bfile, l_offset, l_hdr, l_flds );
if ( p_show )
then
show( l_hdr, l_flds, p_tname, p_cnames, l_bfile );
else
dbms_sql.parse( g_cursor,
build_insert(p_tname,p_cnames,l_flds),
dbms_sql.native );
for i in 1 .. l_hdr.no_records loop
l_row := get_row( l_bfile,
l_offset,
l_hdr,
l_flds );
if ( l_row(0) <> '*' ) -- deleted record
then
for i in 1..l_hdr.no_fields loop
dbms_sql.bind_variable( g_cursor,
':bv'||i,
l_row(i),
4000 );
end loop;
if ( dbms_sql.execute( g_cursor ) <> 1 )
then
raise_application_error( -20001,
'Insert failed ' || sqlerrm );
end if;
end if;
end loop;
end if;
dbms_lob.fileclose( l_bfile );
exception
when others then
if ( dbms_lob.isopen( l_bfile ) > 0 ) then
dbms_lob.fileclose( l_bfile );
end if;
RAISE;
end;
end;
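The byte-accumulation logic in to_int above can be cross-checked in a few lines of Java (an illustration only; the sample bytes are hypothetical). Note that the branch the package runs when BIG_ENDIAN is TRUE actually accumulates least-significant-byte-first, which matches the little-endian layout dBASE files use for their header integers:

```java
public class ToIntCheck {
    // Mirror of the package's to_int accumulation: byte i contributes 2^(8*(i-1)),
    // i.e. the first byte is the least significant (little-endian).
    static long toInt(int[] bytes) {
        long n = 0;
        for (int i = 0; i < bytes.length; i++) {
            n += (long) bytes[i] << (8 * i);
        }
        return n;
    }

    public static void main(String[] args) {
        // A hypothetical DBF record count of 300 stored as 4 bytes: 2C 01 00 00.
        System.out.println(toInt(new int[]{0x2C, 0x01, 0x00, 0x00})); // 300
    }
}
```

Running a few header fields through a check like this confirms whether the BIG_ENDIAN flag is set correctly for your platform before blaming the directory or file.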
I think the mistake is in how I am creating and using the directory object. Please point me in the right direction.
build_insert(p_tname,p_cnames,l_flds),
dbms_sql.native );
for i in 1 .. l_hdr.no_records loop
l_row := get_row( l_bfile,
l_offset,
l_hdr,
l_flds );
if ( l_row(0) <> '*' ) -- deleted record
then
for i in 1..l_hdr.no_fields loop
dbms_sql.bind_variable( g_cursor,
':bv'||i,
l_row(i),
4000 );
end loop;
if ( dbms_sql.execute( g_cursor ) <> 1 )
then
raise_application_error( -20001,
'Insert failed ' || sqlerrm );
end if;
end if;
end loop;
end if;
dbms_lob.fileclose( l_bfile );
exception
when others then
if ( dbms_lob.isopen( l_bfile ) > 0 ) then
dbms_lob.fileclose( l_bfile );
end if;
RAISE;
end;
end;
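The package's to_int helper reassembles an unsigned integer from raw bytes, selecting the byte order with the BIG_ENDIAN flag. DBF headers store their multi-byte counts least-significant byte first, which is why that branch multiplies the first byte by 2^0. The same conversion sketched in Java, as a hedged illustration rather than part of the package (class and method names invented here):

```java
// Hedged sketch (not part of dbase_pkg): rebuild an unsigned integer
// from raw bytes, in either byte order.
public class ByteToInt {
    public static long toInt(byte[] data, boolean leastSignificantFirst) {
        long value = 0;
        for (int i = 0; i < data.length; i++) {
            int unsigned = data[i] & 0xFF;  // bytes are signed in Java; mask to 0..255
            int shift = leastSignificantFirst
                    ? 8 * i                        // first byte is least significant
                    : 8 * (data.length - 1 - i);   // first byte is most significant
            value += (long) unsigned << shift;
        }
        return value;
    }
}
```

For a DBF record count of 1 stored as bytes 01 00 00 00, toInt with leastSignificantFirst = true returns 1, matching what the PL/SQL computes with ascii() and power(2, 8*(i-1)).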
I think I am making a mistake in creating and using the directory. Please direct me. -
Hi all,
I have used Tom's code for my DBF downloading:
create or replace
package body dbase_pkg
as
-- Might have to change depending on platform!!!
-- Controls the byte order of binary integers read in
-- from the dbase file
BIG_ENDIAN constant boolean default TRUE;
type dbf_header is RECORD
( version varchar2(25), -- dBASE version number
year int, -- 1 byte int year, add to 1900
month int, -- 1 byte month
day int, -- 1 byte day
no_records VARCHAR2(8), -- number of records in file,
-- 4 byte int
hdr_len VARCHAR2(4), -- length of header, 2 byte int
rec_len VARCHAR2(4), -- number of bytes in record,
-- 2 byte int
no_fields int -- number of fields
);
type field_descriptor is RECORD
( name varchar2(11),
fname varchar2(30),
type char(1),
length int, -- 1 byte length
decimals int -- 1 byte scale
);
type field_descriptor_array
is table of
field_descriptor index by binary_integer;
type rowArray
is table of
varchar2(4000) index by binary_integer;
g_cursor binary_integer default dbms_sql.open_cursor;
function ite( tf in boolean, yes in varchar2, no in varchar2 )
return varchar2
is
begin
if ( tf ) then
return yes;
else
return no;
end if;
end ite;
-- Function to convert a binary unsigned integer
-- into a PLSQL number
function to_int( p_data in varchar2 ) return number
is
l_number number default 0;
l_bytes number default length(p_data);
begin
if (big_endian)
then
for i in 1 .. l_bytes loop
l_number := l_number +
ascii(substr(p_data,i,1)) *
power(2,8*(i-1));
end loop;
else
for i in 1 .. l_bytes loop
l_number := l_number +
ascii(substr(p_data,l_bytes-i+1,1)) *
power(2,8*(i-1));
end loop;
end if;
return l_number;
end;
-- Routine to parse the DBASE header record, can get
-- all of the details of the contents of a dbase file from
-- this header
procedure get_header
(p_bfile in bfile,
p_bfile_offset in out NUMBER,
p_hdr in out dbf_header,
p_flds in out field_descriptor_array )
is
l_data varchar2(100);
l_hdr_size number default 32;
l_field_desc_size number default 32;
l_flds field_descriptor_array;
begin
p_flds := l_flds;
l_data := utl_raw.cast_to_varchar2(
dbms_lob.substr( p_bfile,
l_hdr_size,
p_bfile_offset ) );
p_bfile_offset := p_bfile_offset + l_hdr_size;
p_hdr.version := ascii( substr( l_data, 1, 1 ) );
p_hdr.year := 1900 + ascii( substr( l_data, 2, 1 ) );
p_hdr.month := ascii( substr( l_data, 3, 1 ) );
p_hdr.day := ascii( substr( l_data, 4, 1 ) );
p_hdr.no_records := to_int( substr( l_data, 5, 4 ) );
p_hdr.hdr_len := to_int( substr( l_data, 9, 2 ) );
p_hdr.rec_len := to_int( substr( l_data, 11, 2 ) );
p_hdr.no_fields := trunc( (p_hdr.hdr_len - l_hdr_size)/
l_field_desc_size );
for i in 1 .. p_hdr.no_fields
loop
l_data := utl_raw.cast_to_varchar2(
dbms_lob.substr( p_bfile,
l_field_desc_size,
p_bfile_offset ));
p_bfile_offset := p_bfile_offset + l_field_desc_size;
p_flds(i).name := rtrim(substr(l_data,1,11),chr(0));
p_flds(i).type := substr( l_data, 12, 1 );
p_flds(i).length := ascii( substr( l_data, 17, 1 ) );
p_flds(i).decimals := ascii(substr(l_data,18,1) );
end loop;
p_bfile_offset := p_bfile_offset +
mod( p_hdr.hdr_len - l_hdr_size,
l_field_desc_size );
end;
procedure put_header (p_tname in varchar2,
p_cnames in varchar2 DEFAULT NULL,
l_hdr in out dbf_header,
vFlds in out field_descriptor_array)
is
v_value_list strTableType;
vCursor varchar2(2000);
type rc IS ref cursor;
col_cur rc;
i INTEGER:=0;
begin
IF p_cnames IS NOT NULL THEN
--select str2tbl(UPPER(p_cnames))
--into v_value_list from dual;
vCursor:='select substr(column_name,1,11),
case data_type
when ''DATE'' then ''D''
when ''NUMBER'' then ''N''
else ''C'' end ,
case data_type
when ''NUMBER'' then NVL(data_precision,22)
when ''DATE'' then 8
else data_length end,
case data_type
when ''NUMBER'' then data_scale
end ,
column_name from all_tab_cols
where column_name IN (select * from TABLE (cast(str2tbl(UPPER('''||p_cnames||'''))
as strTableType)))
and table_name='''||upper(p_tname)||'''
order by column_id';
else
vCursor:='select SUBSTR(column_name,1,11),
case data_type
when ''DATE'' then ''D''
when ''NUMBER'' then ''N''
else ''C'' end ,
case data_type
when ''NUMBER'' then NVL(data_precision,22)
when ''DATE'' then 8
else data_length end,
case data_type
when ''NUMBER'' then data_scale
end ,
column_name
from all_tab_cols
where table_name='''||upper(p_tname)||'''
order by column_id';
END IF;
open col_cur for vCursor;
loop
i:=i+1;
fetch col_cur into
vFlds(i).name,vFlds(i).type,vFlds(i).length,vFlds(i).decimals,vFlds(i).fname;
exit when col_cur%notfound;
end loop;
close col_cur;
l_hdr.version :='03';
l_hdr.year :=to_number(to_char(sysdate,'yyyy'))-1900;
l_hdr.month :=to_number(to_char(sysdate,'mm'));
l_hdr.day :=to_number(to_char(sysdate,'dd'));
l_hdr.rec_len :=1; -- to be set later
l_hdr.no_fields :=vFlds.COUNT;
l_hdr.hdr_len :=to_char((l_hdr.no_fields*32)+33,'FM000x');
end;
procedure put_rows (p_tname IN varchar2,
p_where_clause in varchar2 default '1=1 ',
vRow in out rowarray,
vFlds in field_descriptor_array)
is
type rc is ref cursor;
cur rc;
i integer:=0;
vSelectList VARCHAR2(32767);
v_cur VARCHAR2(32767);
begin
for l in 1..vFlds.count loop
vSelectList:=vSelectList||ite(l!=1,'||','')||'utl_raw.cast_to_raw(rpad(NVL('|| case when
vFlds(l).type='N' then 'to_char(' end ||vFlds(l).fname||case when vFlds(l).type='N' then ')' end
||','' ''),'||vFlds(l).length||','' ''))';
end loop;
v_cur:='select '||vSelectList||' from '||p_tname||' where '||p_where_clause;
open cur for v_cur;
loop
i:=i+1;
fetch cur into vRow(i);
exit when cur%notfound;
end loop;
close cur;
end;
procedure dump_table (p_dir in varchar2,
p_file in varchar2,
p_tname in varchar2,
p_cnames in varchar2 default NULL,
p_where_clause in varchar2 default ' 1=1 ')
is
l_hdr dbf_header;
vFlds field_descriptor_array;
vRow rowarray;
v_cnames VARCHAR2(4000);
v_outputfile UTL_FILE.FILE_TYPE;
vCount int;
vStartTime DATE;
vEndTime DATE;
begin
vStartTime:=sysdate;
put_header(p_tname,p_cnames,l_hdr,vFlds);
put_rows(p_tname,p_where_clause,vRow,vFlds);
v_outputfile := utl_file.fopen(p_dir,p_file,'w',32767);
for i in 1..vFlds.count loop
l_hdr.rec_len:=l_hdr.rec_len+vFlds(i).length;
end loop;
l_hdr.rec_len :=to_char(to_number(l_hdr.rec_len),'FM000x');
l_hdr.rec_len :=substr(l_hdr.rec_len,-2)||
substr(l_hdr.rec_len,1,2);
l_hdr.no_records :=to_char(vRow.count,'FM0000000x');
l_hdr.no_records:=substr(l_hdr.no_records,-2)||
substr(l_hdr.no_records,5,2)||
substr(l_hdr.no_records,3,2)||
substr(l_hdr.no_records,1,2);
l_hdr.hdr_len:=substr(l_hdr.hdr_len,-2)||
substr(l_hdr.hdr_len,1,2);
utl_file.put_raw(v_outputFile,
rpad(l_hdr.version||to_char(l_hdr.year,'FM0x')||to_char(l_hdr.month,'FM0x')||
to_char(l_hdr.day,'FM0x')||l_hdr.no_records||l_hdr.hdr_len||
l_hdr.rec_len,64,'0'));
for i in 1..vFlds.count loop
utl_file.put_raw(v_outputFile,utl_raw.cast_to_raw(vFlds(i).name)||replace(rpad('00',12-length(vFlds(i).name),'#'),'#','00')||
utl_raw.cast_to_raw(vFlds(i).type)||'00000000'||
to_char(vFlds(i).length,'FM0x')||'000000000000000000000000000000');
end loop;
-- terminator for the field names
utl_file.put_raw(v_outputFile,'0D');
for i in 1..vRow.count loop
utl_file.put_raw(v_outputfile,'20'||vRow(i),TRUE);
end loop;
if utl_file.IS_OPEN(v_outputFile ) then
UTL_FILE.FCLOSE(v_outputFile);
end if;
vEndTime:=sysdate;
dbms_output.put_line('Started - '||to_char(vStartTime,'HH24:MI'));
dbms_output.put_line('Finished - '||to_char(vEndTime,'HH24:mi'));
dbms_output.put_line('Elapsed - '||to_char((vEndTime-vStartTime),'hh24:mi'));
exception
when others then
utl_file.fclose(v_outputFile);
raise;
end;
end;
When I try to execute this package I am getting the below error:
begin
dbase_pkg.dump_table('NFO_ARCHIVE_DIR','test.dbf','all_objects','owner,object_name,subobject_name,object_id','rownum<100');
end;
Connecting to the database nfo.
ORA-01481: invalid number format model
ORA-06512: at "NFO.DBASE_PKG", line 569
ORA-06512: at line 14
Started - 19:15
Finished - 19:15
Process exited.
Disconnecting from the database nfo.
Please help me out in this.
Thanks in advance
Cheers,
Shan
It is probably the calculation/formatting of your ELAPSED dbms_output.put_line.
It isn't showing and also:
SQL> select to_char((sysdate-(sysdate-1)),'hh24:mi') from dual;
select to_char((sysdate-(sysdate-1)),'hh24:mi') from dual
ERROR at line 1:
ORA-01481: invalid number format model
But just check line 569 to make sure.
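The root cause is that DATE minus DATE yields a NUMBER of days in Oracle, and 'hh24:mi' is a date format mask, so applying it to a number raises ORA-01481. The elapsed value has to be multiplied out into hours and minutes arithmetically. The same arithmetic in Java, as an illustrative sketch (the helper name is made up):

```java
public class Elapsed {
    // Turn a fractional-day difference (what Oracle's DATE - DATE returns)
    // into an HH:MM string by plain arithmetic, since a date format mask
    // cannot be applied to a number.
    public static String hhmm(double days) {
        long totalMinutes = Math.round(days * 24 * 60);
        return String.format("%02d:%02d", totalMinutes / 60, totalMinutes % 60);
    }
}
```

In the PL/SQL itself the equivalent fix is the same idea: compute the hour and minute parts from (vEndTime - vStartTime) numerically instead of handing the difference to a date mask.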
How can I determine the duration of an audio file?
Hi,
I'm making a speech recognition project using Java. I can capture sound through the microphone and play it back. The sound type is .wav.
My question is how to determine the recording duration (for example, a duration of 0.25 second), because I must record syllables, and the assumption is that less than 0.5 second is needed to say a word. And then... can this wav file be changed into... binary or integer?
thanks for your help
FYI I'm a newbie in Java,
Yes, I want to limit the audio file lengths to a certain value. What should I do?
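One way to check a clip's length: duration in seconds is the frame count divided by the frame rate. A hedged sketch using the standard javax.sound.sampled API (the class and method names here are invented; the 8 kHz, 8-bit, mono, signed PCM format matches what this thread uses, so one byte is one frame):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import java.io.ByteArrayInputStream;

public class ClipLength {
    // Duration in seconds = number of sample frames / frames per second.
    public static double seconds(AudioInputStream ais) {
        return ais.getFrameLength() / (double) ais.getFormat().getFrameRate();
    }

    // Wrap raw PCM bytes as a stream for demonstration:
    // 8000 Hz, 8-bit, mono, signed, little-endian.
    public static AudioInputStream rawPcm(byte[] samples) {
        AudioFormat fmt = new AudioFormat(8000.0f, 8, 1, true, false);
        return new AudioInputStream(new ByteArrayInputStream(samples), fmt, samples.length);
    }
}
```

So a 0.25-second syllable at this format is 2000 bytes; you could stop capturing, or reject a recording, once the byte count passes the limit you choose.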
I have read the wav file using an AudioInputStream; for the audio format I'm using sampleRate = 8000.0F and sampleSizeInBits = 8. My friend said that if I use 8 bits for sampleSizeInBits (resolution in amplitude), the values should range from -128 to 127. I tried this code and the result is a series of values that are zero, positive, and negative.
the code
try {
    audioInputStream.read(audioBytes);
    for (int i = 0; i < audioBytes.length; i++) {
        System.out.println(audioBytes[i]);   // print each sample value, not the array reference
    }
} catch (IOException ioe) {
    System.out.println("Error" + ioe);
}
I'm not sure that the code is right, but it works. Should I change the audioBytes into integers first? I tried this code but it doesn't work.
int[] a = new int[audioBytes.length];
for (int i = 0; i < audioBytes.length; i++) {
    a[i] = audioBytes[i];   // a byte widens to int automatically
}
These values have been printed in the console (I'm using texpad). So, how can I write them to a file.txt? I'm using FileOutputStream but it doesn't work correctly. Here is the code:
FileOutputStream fos = new FileOutputStream(strFilePath);
try {
    audioInputStream.read(audioBytes);
    fos.write(audioBytes);
} catch (IOException ioe) {
    System.out.println("Error" + ioe);
}
When I open the .txt file, it is read as characters like this: üüüüüüüüüüüüûüüüûüûûû ||||||| ||||||| ||||||||||||||||||||||||||||| ےےےےےےےےےےےے (like typing Alt plus a number)
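The garbage characters appear because FileOutputStream.write(audioBytes) writes the raw sample bytes straight back out, and the editor then renders those byte values as whatever characters they map to. To get a readable file.txt, convert each signed byte to an int and write its decimal text. A hedged sketch (class and method names invented for illustration):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class SamplesToText {
    // A Java byte is already signed (-128..127), so widening to int is
    // enough; no Byte wrapper object is needed.
    public static int[] toInts(byte[] audioBytes) {
        int[] values = new int[audioBytes.length];
        for (int i = 0; i < audioBytes.length; i++) {
            values[i] = audioBytes[i];
        }
        return values;
    }

    // Write one decimal sample value per line as plain text.
    public static void writeText(int[] samples, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
            for (int s : samples) {
                out.println(s);
            }
        }
    }
}
```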
Edited by: anasfr on Mar 19, 2009 9:02 AM -
hi,
I met a problem when I tried to store data to a text file. Here's the description:
- Read some objects from a binary file
- Update object state, do some operations
- Output some properties of the objects to a text-format file
Here's part of my code:
//------ begin ---------//
// ".cluto" is a binary file where store the number of objects as the first object and a set of docVector object
File cluto = new File (config3.getOutputPath (), config3.getOutputFileType () + ".cluto");
ObjectInputStream reader1 = new ObjectInputStream (new BufferedInputStream (new FileInputStream (cluto), BUFFERSIZE));
// output text file definition
File rlabel = new File (config3.getOutputPath (), config3.getOutputFileType () + ".rlabel");
BufferedWriter rlabelWriter = new BufferedWriter (new FileWriter (rlabel), BUFFERSIZE);
// get the first object in ".cluto", the number of objects in the input binary file
Integer vectorNumber = (Integer)reader1.readObject ();
// temporal variable
docVector tVector;
for (int i = 0; i < vectorNumber.intValue(); i++) {
    tVector = (docVector) reader1.readObject();
    tVector.dimsPruning();
    tVector.updateDimsWeight(config1.getWEIGHT_TYPE());
    rlabelWriter.write(tVector.vectorLableToString());   // vectorLableToString() returns a string!
}
reader1.close ();
rlabelWriter.flush ();
rlabelWriter.close ();
//------ end --------//
The program works and creates the file ".rlabel", but as a binary file instead of a text file! Does anyone have ideas about this problem?
Thanks
Sorry, I meant ".rlabel".
Good news, I fixed the problem. In my old code, the I/O stream was kept open the whole time while I wrote huge amounts of data to the file (see the "for" loop), and I only closed the stream after the loop.
Now I open and close the stream in each loop iteration. The problem is resolved.
that's the new code:
// this function is used to write a string to a file
public void writeStringToFile (String content, String fileName) {
    BufferedWriter writer = null;
    try {
        writer = new BufferedWriter(new FileWriter(fileName, true));
        writer.write(content);
        writer.close();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}

String rlabel = "test.rlabel";
for (int i = 0; i < vectorNumber.intValue(); i++) {
    tVector = (docVector) reader1.readObject();
    tVector.dimsPruning(prunedTerm);
    writeStringToFile(tVector.vectorLableToString(), rlabel);
}
//------- end ----------//
I don't know whether the open/close operations in a large loop cost a lot.
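Opening and closing the stream on every iteration works but pays an open/close cost each time. The conventional pattern is the opposite: keep one writer open for the whole loop and let try-with-resources flush and close it exactly once at the end, even if an exception is thrown mid-loop. A sketch under that assumption (names hypothetical):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class WriteOnce {
    // One open, many writes, one guaranteed flush-and-close at the end.
    public static void writeAll(List<String> labels, String fileName) throws IOException {
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(fileName))) {
            for (String label : labels) {
                writer.write(label);
                writer.newLine();
            }
        }   // try-with-resources closes (and therefore flushes) the writer here
    }
}
```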
Thanks -
EJB 3 Problem: "Collections having FK in secondary table"
Hi,
working with EJB 3 for the first time, I have to develop my entities from a DB schema. It is very complex, and now I have a problem using composite FKs / PKs in many tables, where a referenced key has both properties at the same time: FK and PK.
Example: 3 tables: Mxx := {M_ID as PK},
Lxx := {M_ID as PK / FK from Mxx},
Zxx := {M_ID as PK / FK from Lxx}
I have built Mxx.java and Lxx.java with @EmbeddedId from LxxPK.java (works), and the problem table Zxx.java with @EmbeddedId from ZxxPK.java, which includes a key that is at once a reference key and the primary key from Lxx.
How can I solve the problem of "having FK in secondary table"?
THX for your help!
I will specify my problem a little more:
I have an entity named zuordnung_module_lv_ma; this references an entity lehrveranstaltungen.
In zuordnung_module_lv_ma :
@OneToMany
@JoinColumn(...)
public Collection<lehrveranstaltungen> lv = new ArrayList...
For more detailed information look at this:
my only problem is the table zuordnung_module_lv_ma
SQL:
create table BLOECKE (
BLOCKID integer not null,
WOCHENTAG integer,
ZEIT_VON datetime,
ZEIT_BIS datetime,
LOGISCHGELOESCHT char not null,
constraint PK_BLOECKE primary key (BLOCKID)
);
create table CURRICULA (
CURRICULUMKUERZEL varchar not null,
CURRICULUMBEZ varchar,
CURRICULUMID integer not null,
IDVERANTWORTLICHERSTUDIENDEKAN integer,
LOGISCHEGELOESCHT char not null,
constraint PK_CURRICULA primary key (CURRICULUMID)
);
create table LEHRVERANSTALTUNGEN (
MODUL_ID integer not null,
LV_ID integer not null,
SWS double not null,
ART_LV char not null,
LOGISCHGELOESCHT char not null,
MAXTEILNEHMER integer not null,
constraint PK_LEHRVERANSTALTUNGEN primary key (MODUL_ID, LV_ID, ART_LV)
);
create table MITARBEITER (
MA_ID integer not null,
TITEL varchar,
NACHNAME varchar not null,
VORNAME varchar not null,
AUFGELAUFENESWS integer,
LETZTESFORSCHUNGSFREISEMESTER varchar,
BEGINNANSTELLUNG date,
ENDEANSTELLUNG date,
MA_KUERZEL varchar not null,
GEBURTSDATUM date,
LOGISCHGELOESCHT char not null,
MA_KATEGORIE_ID integer,
constraint PK_MITARBEITER primary key (MA_ID)
);
create table MITARBEITERKATEGORIE (
MA_KATEGORIE_ID integer not null,
MA_KATEGORIE_BEZ varchar not null,
constraint PK_MITARBEITERKATEGORIE primary key (MA_KATEGORIE_ID)
);
create table MODULE (
MODUL_ID integer not null,
MODULKUERZEL varchar not null,
IDMODULVERANTWORLTICHER integer not null,
IDSTVMODULVERANTWORTLICHER integer,
CURRICULUMID integer,
ID_MODULGRUPPE varchar,
MODULNAME varchar,
PFLICHTFACH binary,
SEMESTER integer,
CREDITPOINTS integer,
LOGISCHGELOESCHT char not null,
constraint PK_MODULE primary key (MODUL_ID)
);
create table MODULGRUPPEN (
CURRICULUMID integer not null,
ID_MODULGRUPPE varchar not null,
MODULGRUPPENBEZ varchar,
LOGISCHGELOESCHT char not null,
constraint PK_MODULGRUPPEN primary key (CURRICULUMID, ID_MODULGRUPPE)
);
create table ZUORDNUNG_MODUL_CURRICULUM (
MODUL_ID integer not null,
CURRICULUMID integer not null,
LOGISCHGELOESCHT char not null,
constraint PK_ZUORDNUNG_MODUL_CURRICULUM primary key (MODUL_ID, CURRICULUMID)
);
create table ZUORDNUNG_MODUL_LV_MA_SEMINAR_EXEMPLAR (
MODUL_ID integer not null,
LV_ID integer not null,
ART_LV char not null,
SEMESTER varchar not null,
MA_ID integer not null,
WOCHE_G_U char not null,
BLOCKID integer not null,
EXEMPLAR varchar not null,
LOGISCHGELOESCHT char not null,
constraint PK_ZUORDNUNG_MODUL_LV_MA_SEMIN primary key (MODUL_ID, LV_ID, ART_LV, SEMESTER, MA_ID, WOCHE_G_U, BLOCKID, EXEMPLAR)
);
alter table CURRICULA
add constraint FK_CURRICUL_ZURODNUNG_MITARBEI foreign key (IDVERANTWORTLICHERSTUDIENDEKAN)
references MITARBEITER (MA_ID)
on update restrict
on delete restrict;
alter table LEHRVERANSTALTUNGEN
add constraint FK_LEHRVERA_ZURODNUNG_MODULE foreign key (MODUL_ID)
references MODULE (MODUL_ID)
on update restrict
on delete restrict;
alter table MITARBEITER
add constraint FK_MITARBEI_ZUORDNUNG_MITARBEI foreign key (MA_KATEGORIE_ID)
references MITARBEITERKATEGORIE (MA_KATEGORIE_ID)
on update restrict
on delete restrict;
alter table MODULE
add constraint FK_MODULE_ZUORDNUNG_MITARBEI foreign key (IDMODULVERANTWORLTICHER)
references MITARBEITER (MA_ID)
on update restrict
on delete restrict;
alter table MODULE
add constraint FK_MODULE_ZUORDNUNG_MODULGRU foreign key (CURRICULUMID, ID_MODULGRUPPE)
references MODULGRUPPEN (CURRICULUMID, ID_MODULGRUPPE)
on update restrict
on delete restrict;
alter table MODULGRUPPEN
add constraint FK_MODULGRU_ZUORDUNG__CURRICUL foreign key (CURRICULUMID)
references CURRICULA (CURRICULUMID)
on update restrict
on delete restrict;
alter table ZUORDNUNG_MODUL_CURRICULUM
add constraint FK_ZUORDNUN_ZUORDNUNG_CURRICUL foreign key (CURRICULUMID)
references CURRICULA (CURRICULUMID)
on update restrict
on delete restrict;
alter table ZUORDNUNG_MODUL_CURRICULUM
add constraint FK_ZUORDNUN_ZUORDNUNG_MODULE foreign key (MODUL_ID)
references MODULE (MODUL_ID)
on update restrict
on delete restrict;
alter table ZUORDNUNG_MODUL_LV_MA_SEMINAR_EXEMPLAR
add constraint FK_ZUORDNUN_ZUORDNUNG_BLOECKE foreign key (BLOCKID)
references BLOECKE (BLOCKID)
on update restrict
on delete restrict;
alter table ZUORDNUNG_MODUL_LV_MA_SEMINAR_EXEMPLAR
add constraint FK_ZUORDNUN_ZURODNUNG_LEHRVERA foreign key (MODUL_ID, LV_ID, ART_LV)
references LEHRVERANSTALTUNGEN (MODUL_ID, LV_ID, ART_LV)
on update restrict
on delete restrict;
alter table ZUORDNUNG_MODUL_LV_MA_SEMINAR_EXEMPLAR
add constraint FK_ZUORDNUN_ZURODNUNG_MITARBEI foreign key (MA_ID)
references MITARBEITER (MA_ID)
on update restrict
on delete restrict;
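For the composite-key question above: with JPA, the eight PK columns of ZUORDNUNG_MODUL_LV_MA_SEMINAR_EXEMPLAR would live in an @Embeddable id class used via @EmbeddedId, with the FK columns mapped read-only (insertable = false, updatable = false) so the relationships can own them; JPA 2.0 later added @MapsId for exactly this PK-that-is-also-FK case. A sketch of the id class contract, with the JPA annotations omitted and only three of the eight columns shown so it compiles standalone: the class must be Serializable and define equals/hashCode over all parts.

```java
import java.io.Serializable;
import java.util.Objects;

// Composite id value object for the ZUORDNUNG table (abbreviated sketch).
public class ZuordnungId implements Serializable {
    private final int modulId;
    private final int lvId;
    private final char artLv;   // only 3 of the 8 PK columns, for brevity

    public ZuordnungId(int modulId, int lvId, char artLv) {
        this.modulId = modulId;
        this.lvId = lvId;
        this.artLv = artLv;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ZuordnungId)) return false;
        ZuordnungId that = (ZuordnungId) o;
        return modulId == that.modulId && lvId == that.lvId && artLv == that.artLv;
    }

    @Override
    public int hashCode() {
        return Objects.hash(modulId, lvId, artLv);
    }
}
```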
THX for prospective comments! -
Hi All,
I am using Timestamp column in sql server 2008.
In sql 2008 the query is working fine but from query template i am getting following error:
com.sap.xmii.Illuminator.logging.LHException: Error occurred while processing records; The conversion from binary to INTEGER is unsupported.
Is this a bug of mii?
or i am miss something.
I am using Version 12.1.8 Build(20).
Regards
Anshul
Hi Anshul,
Just to check, what service pack is your NWCE installation on?
And can you please post the exact SQL script in the Query? Please make it a Fixed Query instead of Query mode if it is not already. Or are you trying to run an update or insert query?
NOTE: Going to Fixed Query can be a useful troubleshooting tool, but does require better database skills for complex queries.
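If the failing column is SQL Server's TIMESTAMP/ROWVERSION type, a guess consistent with the error text since that type is an 8-byte binary counter rather than a number, two workarounds come to mind: CAST(col AS BIGINT) in the SQL itself, or fetching the raw bytes and decoding the big-endian value in code, e.g.:

```java
import java.nio.ByteBuffer;

public class RowVersion {
    // SQL Server ROWVERSION is 8 bytes, most significant byte first, so
    // ByteBuffer's default big-endian order decodes it directly.
    public static long toLong(byte[] rowVersion) {
        return ByteBuffer.wrap(rowVersion).getLong();
    }
}
```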
Regards,
Mike
Edited by: Michael Appleby on Jul 6, 2011 1:23 PM -
Oracle 9.2.0.6
Is there any way to bulk collect into a varray where the index is part of what is returned by the bulk collect?
declare
type tv is varray(5) of int;
tvList tv := tv(0,0,0,0,0);
begin
select l,v bulk collect into tvList
from
(select 1 as l,2 as v from dual
union
select 2,3 from dual);
end;
also tried:
declare
type tv is varray(5) of int;
tvList tv := tv(0,0,0,0,0);
begin
select l,v bulk collect into tvList(l)
from
(select 1 as l,2 as v from dual
union
select 2,3 from dual);
end;
neither worked.
Thanks
You can't do that; bulk collect only copies into consecutive entries, indexed by an integer.
If you use varrays, all the return values must fit in the varray's declared size. Elements are inserted starting at index 1, overwriting any existing elements.
See http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14261/tuning.htm#sthref2213
The workaround is to
- bulk collect into an associative array indexed by binary_integer/pls_integer
- after the bulk collect, copy the data into your varray
- do this in batches using BULK COLLECT ... LIMIT ...
Sounds painful, but the bulk collect can really save time on context switching between PL/SQL and SQL.
HTH
Regards Nigel