Issue in data extraction: source tables having columns with length greater than 60
Hi BI Experts ,
I have an issue while extracting data from Oracle tables. I encountered some columns for which the length of the character stream is more than 60, somewhere around 200 to 300, for example: reason for some action, comments, description.
I am not able to treat them as master data texts, since these fields come with the transaction data. In SAP BI the data type CHAR has a maximum length of 60. How can I deal with this situation in a better way?
Could you please come up with your ideas.
Expecting interesting solutions,
Anurag
Hello Charan,
First, check this blog:
http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417800)ID0294722750DB10878770002327649734End?blog=/pub/wlg/3705
It may already help.
Another method is to report from the PSA tables, but no how-to guide is available for that.
Br.
Joerg
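One workaround often used for this limit is to split the long source field into several 60-character pieces and map them to multiple CHAR60 InfoObjects in the transformation. The chunking step itself is trivial; a sketch (the function name and sample value are made up, not from the thread):

```python
def split_into_chunks(text, size=60):
    """Split a long source field into fixed-size pieces, e.g. for
    mapping a 200-300 character comment to several CHAR60 targets."""
    return [text[i:i + size] for i in range(0, len(text), size)]

comment = "x" * 150  # stands in for a long Comments/Description value
parts = split_into_chunks(comment)
print([len(p) for p in parts])  # [60, 60, 30]
```

At query or report time the pieces can be concatenated back together in the same order.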
Similar Messages
-
Parse SQL query and extract source tables and columns
Hello,
I have a set of SQL queries and I have to extract the source tables and columns from them.
For example:
Let's imagine that we have two tables
CREATE TABLE T1 (col1 number, col2 number, col3 number)
CREATE TABLE T2 (col1 number, col2 number, col3 number)
We have the following query:
SELECT
T1.col1,
T1.col2 + T1.col3 as field2
FROM T1 INNER JOIN T2 ON T1.col2=T2.col2
WHERE T2.col1 = 1
So, as a result I would like to have:
Order Table Column
1 T1 col1
2 T1 col2
2 T1 col3
Optionally, I would like to have a list of all dependency columns (columns used in "ON", "WHERE" and "GROUP BY" clauses:
Table Column
T1 col2
T2 col1
T2 col2
I have tried different approaches but without any success. Any help is appreciated. Thank you in advance.
Best regards,
Beroetz
I have a set of SQL queries and I have to extract the source tables and columns from them.
In a recent DB version you can use the approach from the thread "Re: sql injection question" for this.
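As a rough illustration of the extraction task (not a substitute for a real SQL parser, which is what the linked thread points to), a regular expression can pull the qualified `table.column` references out of simple queries like the example above:

```python
import re

def qualified_columns(sql):
    """Collect (table_or_alias, column) pairs from explicit
    "name.name" references. Only a sketch: it ignores unqualified
    columns, quoted identifiers, subqueries, etc."""
    return sorted(set(re.findall(r'\b(\w+)\.(\w+)\b', sql)))

sql = """SELECT T1.col1, T1.col2 + T1.col3 AS field2
FROM T1 INNER JOIN T2 ON T1.col2 = T2.col2
WHERE T2.col1 = 1"""
print(qualified_columns(sql))
# [('T1', 'col1'), ('T1', 'col2'), ('T1', 'col3'), ('T2', 'col1'), ('T2', 'col2')]
```

Separating the SELECT-list columns from the ON/WHERE/GROUP BY dependency columns, as requested, genuinely needs a parser that understands clause boundaries.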
-
Insertion in Table having Column of object array type
Hi!
I want to create an object type, then a VARRAY of that object type, and then a table having a column of that VARRAY type.
How can I issue an INSERT statement to insert values into columns of VARRAY type?
I will be thankful.
regards
Imran
See the following discussion: http://asktom.oracle.com/pls/ask/f?p=4950:8:11071256505039606339::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1583117527730
and let's thank Tom again! -
Identify source tables and columns
I would appreciate it if someone could point me toward an OWB table(s) or view(s) that will identify which source tables and columns are being used as input to an OWB map.
Thanks,
Tom Rehage
Hi Tom,
Have you checked Appendix D in the User Guide?
Look at "Warehouse Builder Design Repository Public Views" under Warehouse Builder Public Views.
Not sure whether you can find the distinction between source and target, though.
Good luck, Patrick -
Extract data from source table in ODI
Hi,
I am using ODI to transfer data from an Oracle table to a flat file. The source database is very large and has many partitions. In ODI, can we extract the data using the partitions? I want data from only some partitions of that table; I don't have any other WHERE clause to use.
Thanks,
Ramesh
Edited by: rameshchandra85 on Jun 22, 2009 4:34 AM
Hi,
I am transferring data from Oracle to a file, so I am using IKM SQL to File Append. There is no LKM used for that. Can you please suggest where I can add that?
Thanks,
Ramesh -
To select data from multiple tables having different column names
Dear All,
I have 100 tables under one user, Operation. Each table has a column containing the code of the data entry operator, but the name of that column is different in each table.
Can anyone give me a script to run from SQL*Plus so that we can find the total entries of each D.E. operator per month?
Thanks and Regards,
Manoj
Perhaps the UNION ALL operator is what you want:
SELECT data_entry_operator, COUNT(*) AS qty
FROM (
      SELECT de_oper AS data_entry_operator FROM a
      UNION ALL
      SELECT d_e_operator FROM b
      UNION ALL
      SELECT d_e_op FROM c
     )
GROUP BY data_entry_operator; -
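The UNION ALL pattern can be checked quickly in SQLite; the table, column, and operator names below are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Three tables that name the operator column differently.
cur.executescript("""
CREATE TABLE a (de_oper TEXT);
CREATE TABLE b (d_e_operator TEXT);
CREATE TABLE c (d_e_op TEXT);
INSERT INTO a VALUES ('OP1'), ('OP2');
INSERT INTO b VALUES ('OP1');
INSERT INTO c VALUES ('OP2'), ('OP2');
""")
# UNION ALL normalises the differing column names (the union takes its
# column name from the first branch); one GROUP BY then counts.
rows = cur.execute("""
    SELECT data_entry_operator, COUNT(*) AS qty
    FROM (SELECT de_oper AS data_entry_operator FROM a
          UNION ALL
          SELECT d_e_operator FROM b
          UNION ALL
          SELECT d_e_op FROM c)
    GROUP BY data_entry_operator
    ORDER BY data_entry_operator
""").fetchall()
print(rows)  # [('OP1', 2), ('OP2', 3)]
```

For 100 tables, the 100 UNION ALL branches could themselves be generated from the data dictionary rather than typed by hand.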
Data from 3 tables having latest dates
Hi,
Need some help with PL/SQL code. I need to write a query that gets data from 3 tables, all with the latest date.
For a particular ACT_CODE the output of the SQL query should show the data having the latest dates from the 3 tables; if there is no
date in a table, it should still show the remaining data (I think a left join will do the trick here).
Table Names:
Institution_UPDT aiu
ASQ_CONTACT ac
GR_AUTHORIZE gr
All 3 tables have ACT_Code as common
Column Names
INSTITUTION_UPDT aiu -- aiu.ACT_CODE,aiu.project_id as proj,aiu.UPDT_TYPE_ID, aiu.USER_ID, aiu.UPDT_DATE
ASQ_CONTACT ac -- ac.ACT_CODE as contact_code,ac.project_id,ac.first_name, ac.middle_initial,ac.last_
name,ac.title,ac.status,ac.status_date
GR_AUTHORIZE gr --gr.ACT_CODE as grad_code,gr.name, gr.title AS grad_title, gr.submit_date
The date column names are
ac.status_date,
aiu.UPDT_DATE and
gr.submit_date
Thank you everyone
appreciate your help
Jesh
Hi, Jesh,
user11095252 wrote:
That is correct, I want to include all the columns from ASQ_Contacts, Institution_UPDT and GR_AUTHORIZE
Oh! You want all columns from all three tables, not just ASQ_Contacts. That changes the problem considerably!
UNION requires that all prongs have the same number of columns, and that the datatypes of the columns match. That's no problem if we just need act_code and a date from each one. If we just need additional columns from one table, it's easy to add literal NULLs to the other prongs to serve as the additional columns. But if we need all (or even several) columns from all three tables, that's no good. So let's revert to your original idea: outer joins.
I want to display only one row which has the latest date with the most recently updated time (example: mm/dd/yyyy hr:min:sec am/pm)
Yes, but what if there is a tie for the most recently updated time?
In case of a tie, the query below will pick one of the contenders arbitrarily. That may be fine with you (e.g., you may have UNIQUE constraints, making ties impossible). If you need a tie-breaker, you can add more columns to the analytic ORDER BY clauses.
WITH aiu AS
(   SELECT institution_updt.* -- or list columns wanted
    , ROW_NUMBER () OVER ( PARTITION BY act_code
                           ORDER BY updt_date DESC
                         ) AS r_num
    FROM institution_updt
    WHERE act_code = :p1_act_code
    AND project_id = :p2_project_id
)
, ac AS
(   SELECT asq_contact.* -- or list columns wanted
    , ROW_NUMBER () OVER ( PARTITION BY act_code
                           ORDER BY status_date DESC
                         ) AS r_num
    FROM asq_contact
    WHERE act_code = :p1_act_code
    AND project_id = :p2_project_id
)
, gr AS
(   SELECT gr_authorize.* -- or list columns wanted
    , ROW_NUMBER () OVER ( PARTITION BY act_code
                           ORDER BY submit_date DESC
                         ) AS r_num
    FROM gr_authorize
    WHERE act_code = :p1_act_code
)
SELECT * -- or list columns wanted
FROM aiu
FULL OUTER JOIN ac ON ac.act_code = aiu.act_code
                  AND ac.r_num = 1
                  AND aiu.r_num = 1
FULL OUTER JOIN gr ON gr.act_code = NVL (ac.act_code, aiu.act_code)
                  AND gr.r_num = 1
;
That's a lot of code, so there may be typos. If you'd post CREATE TABLE and INSERT statements for a few rows of sample data, I could test it.
In all places where I said "SELECT *" above, you may want to list the individual columns you want.
If you do that in the sub-queries, then you don't have to qualify the names with the table name: that's only required when saying "SELECT *" with another column (r_num, in this case).
It's more likely that you won't want to say "SELECT *" in the main query. The three r_num columns, while essential to the query, are completely useless to your readers, and you might prefer to have just one act_code column, since it will be the same for all tables that have it. But since it may be NULL in any of the tables, you'll have to SELECT it like this:
SELECT COALESCE ( aiu.act_code
                , ac.act_code
                , gr.act_code
                ) AS act_code
The query above will actually work for multiple act_codes. You can change the condition to something like
WHERE act_code IN (&act_code_list)
If so, remember to change it in all three sub-queries. -
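The "latest row per act_code" step at the heart of that query can be sanity-checked in SQLite with a simplified one-table version. A correlated MAX subquery stands in for ROW_NUMBER here, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE asq_contact (act_code TEXT, first_name TEXT, status_date TEXT);
INSERT INTO asq_contact VALUES
    ('A1', 'Old Name', '2011-01-05'),
    ('A1', 'New Name', '2012-03-17'),
    ('A2', 'Other',    '2010-06-01');
""")
# Keep only the row with the latest status_date for each act_code.
rows = cur.execute("""
    SELECT act_code, first_name, status_date
    FROM asq_contact ac
    WHERE status_date = (SELECT MAX(status_date)
                         FROM asq_contact
                         WHERE act_code = ac.act_code)
    ORDER BY act_code
""").fetchall()
print(rows)  # [('A1', 'New Name', '2012-03-17'), ('A2', 'Other', '2010-06-01')]
```

Unlike ROW_NUMBER with a tie-breaker, this form returns every row that shares the maximum date, which is one reason the analytic version is preferred in the reply above.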
Saving of data in a table having a large number of records
Hi,
I'm working in Forms 6i and database 10g.
I have two tables, stock_head and stock_detail.
The stock_detail table has millions of records.
The stock_detail table has 3 database triggers.
Saving data into these tables is very slow, even after disabling the triggers.
Can anyone please help me with this matter?
How can I improve the performance?
Please help me...
As always, the same things apply to this type of query:
- No exact version numbers are provided
- The problem description is way too vague to resolve the issue
- The requestor doesn't read documentation
- The requestor didn't use online resources, and didn't search this forum
The central question always is:
What is it waiting for?
So you need to run ADDM and/or AWR reports provided you are properly licensed, or statspack when you don't have a license for AWR/ADDM.
Apart from that, no help is possible, as the post didn't contain a problem description other than "It doesn't work, help".
Sybrand Bakker
Senior Oracle DBA -
Hi All,
I am facing a performance impact from a trigger in MySQL; the scenario is this:
I have one table with duplicate records consisting of (eid, tin, status; some other columns are also there, but I need only these three). There is another table which has the same three columns (eid, tin, status).
eid and tin will be the same for a given combination; only status will be different, i.e.
1245 23 0
1245 23 1
1245 23 5
1233 33 3
1211 24 2
1211 24 5
So as per the above example I have to feed data into the other table as
1245 23 0
1233 33 3
1211 24 5
The priority of status is such that 0 will be inserted if it is present in the records; otherwise the highest value from 5 down to 1 is used.
So I designed a trigger for this which inserts data after reading each row, but it takes around 6.5 minutes to insert 300,000 records. Is there any other way to improve the performance of this MySQL program?
DELIMITER $$
CREATE
/*[DEFINER = { user | CURRENT_USER }]*/
TRIGGER `kyr_log`.`upd_status` AFTER INSERT
ON `kyr_log`.`kyrlog_bup`
FOR EACH ROW
BEGIN
DECLARE v_eid VARCHAR(28);
DECLARE v_status INT(11);
SELECT kyrl_eid,kyrl_status INTO v_eid,v_status FROM kyrlog_bup ORDER BY kyrl_id DESC LIMIT 1;
IF v_eid NOT IN (SELECT kyrl_eid FROM update_status.new_status) THEN
INSERT INTO update_status.new_status(kyrl_eid,kyrl_tin,kyrl_status)
SELECT kyrl_eid,kyrl_tin,kyrl_status FROM kyrlog_bup ORDER BY kyrl_id DESC LIMIT 1;
ELSE IF v_status=2 THEN
IF v_status > ANY (SELECT kyrl_status FROM kyrlog_bup WHERE kyrl_eid=v_eid AND kyrl_status<>0) THEN
UPDATE update_status.new_status SET kyrl_status=v_status WHERE kyrl_eid=v_eid;
END IF;
ELSE IF v_status=3 THEN
IF v_status > ANY (SELECT kyrl_status FROM kyrlog_bup WHERE kyrl_eid=v_eid AND kyrl_status<>0) THEN
UPDATE update_status.new_status SET kyrl_status=v_status WHERE kyrl_eid=v_eid;
END IF;
ELSE IF v_status=4 THEN
IF v_status > ANY (SELECT kyrl_status FROM kyrlog_bup WHERE kyrl_eid=v_eid AND kyrl_status<>0) THEN
UPDATE update_status.new_status SET kyrl_status=v_status WHERE kyrl_eid=v_eid;
END IF;
ELSE IF v_status=5 THEN
IF v_status > ANY (SELECT kyrl_status FROM kyrlog_bup WHERE kyrl_eid=v_eid AND kyrl_status<>0) THEN
UPDATE update_status.new_status SET kyrl_status=v_status WHERE kyrl_eid=v_eid;
END IF;
ELSE IF v_status=0 THEN
UPDATE update_status.new_status SET kyrl_status=v_status WHERE kyrl_eid=v_eid;
END IF;
END IF;
END IF;
END IF;
END IF;
END IF;
END;
$$
DELIMITER ;
Please suggest if there is any other possible solution.
Thanks
Actually, you haven't seen the discussions on this link; there are many discussions related to MySQL, and MySQL is owned by Oracle, so I posted it here.
Thanks for the suggestion -
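A set-based alternative to the per-row trigger: the whole priority rule ("0 wins if present, otherwise the highest status") collapses into one grouped CASE expression, which could feed a single INSERT ... SELECT instead of 300,000 trigger firings. A sketch in SQLite using the sample values from the post (column names simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE kyrlog_bup (eid INTEGER, tin INTEGER, status INTEGER);
INSERT INTO kyrlog_bup VALUES
    (1245, 23, 0), (1245, 23, 1), (1245, 23, 5),
    (1233, 33, 3),
    (1211, 24, 2), (1211, 24, 5);
""")
# Status 0 wins when present; otherwise the highest status is kept.
# (MIN(status) = 0 detects the presence of a 0, since status >= 0.)
rows = cur.execute("""
    SELECT eid, tin,
           CASE WHEN MIN(status) = 0 THEN 0 ELSE MAX(status) END AS status
    FROM kyrlog_bup
    GROUP BY eid, tin
    ORDER BY eid DESC
""").fetchall()
print(rows)  # [(1245, 23, 0), (1233, 33, 3), (1211, 24, 5)]
```

This matches the expected output in the post and runs once over the whole table, which is usually far faster than a FOR EACH ROW trigger doing its own subqueries.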
Hi,
I have a source table with millions of records. I need to insert some of the data (depending on a condition) into a repository table.
Once the rows are inserted, they can be deleted from the source table.
The deletion is taking a lot of time.
I need to reduce the time taken to delete the records,
e.g. 1 million records in 8 seconds.
I have already used bulk collect and cursors but could not succeed.
Please suggest how to increase the performance.
Thanks & Regards
APPROACH 1:-
CREATE OR REPLACE PROCEDURE SP_BC
AS
DETAILS_REC SOURCETBL%ROWTYPE;
COUNTER NUMBER:=1;
RCOUNT NUMBER:= 1;
START_TIME PLS_INTEGER;
END_TIME PLS_INTEGER;
CURSOR C1 IS
SELECT * FROM SOURCETBL WHERE DOJ<SYSDATE;
BEGIN
START_TIME := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE(START_TIME/100);
OPEN C1;
LOOP
FETCH C1 INTO DETAILS_REC;
EXIT WHEN C1%NOTFOUND;
BEGIN
EXIT WHEN COUNTER >10000;
INSERT INTO DESTINATIONTBL VALUES DETAILS_REC;
IF SQL%FOUND THEN
DELETE FROM SOURCETBL WHERE ID= DETAILS_REC.ID;
COUNTER:=COUNTER+1;
END IF;
COMMIT;
END;
COUNTER:=1;
END LOOP;
COMMIT;
END;
APPROACH 2:-
CREATE OR REPLACE PROCEDURE SP_BC1
IS
TYPE T_DET IS TABLE OF SOURCETBL%ROWTYPE;
T_REC T_DET;
BEGIN
SELECT * BULK COLLECT INTO T_REC FROM SOURCETBL
WHERE NAME=@NAME;
FOR I IN T_REC .FIRST ..T_REC .LAST
LOOP
INSERT INTO DESTINATIONTBL VALUES T_REC (I);
IF SQL%FOUND THEN
DELETE FROM SOURCETBL WHERE ID = T_REC(I).ID;
END IF;
EXIT WHEN T_REC.COUNT = 0;
END LOOP;
COMMIT;
END;
APPROACH 3:-
CREATE OR REPLACE PROCEDURE SP_BC2
AS
TYPE REC_TYPE IS TABLE OF SOURCETBL%ROWTYPE ;
DETAILS_ROW REC_TYPE;
CURSOR C1 IS
SELECT * FROM
SOURCETBL WHERE END<SYSDATE;
BEGIN
OPEN C1;
LOOP
FETCH C1 BULK COLLECT INTO DETAILS_ROW LIMIT 999;
FORALL I IN 1..DETAILS_ROW.COUNT
/* A BATCH OF 999 RECORDS WILL BE CONSIDERED FOR DATA MOVEMENT*/
INSERT INTO DESTINATIONTBL VALUES DETAILS_ROW(I);
-- IF SQL%FOUND THEN
-- DELETE from SOURCETBL WHERE ID IN DETAILS_ROW(I).ID;
-- END IF;
EXIT WHEN C1%NOTFOUND;
COMMIT;
END LOOP;
COMMIT;
The 3rd approach seems better, but I have an issue with referring to the fields of a record type.
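For the move itself, two set-based statements over the same predicate, inside one transaction, usually beat any per-row loop. A sketch in SQLite (the table names mirror SOURCETBL/DESTINATIONTBL from the post; the date cutoff stands in for SYSDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sourcetbl (id INTEGER PRIMARY KEY, doj TEXT);
CREATE TABLE destinationtbl (id INTEGER PRIMARY KEY, doj TEXT);
INSERT INTO sourcetbl VALUES
    (1, '2009-01-01'), (2, '2030-01-01'), (3, '2008-06-15');
""")
cutoff = '2010-01-01'  # stands in for SYSDATE
# Move the matching rows with two set-based statements and one commit.
cur.execute("INSERT INTO destinationtbl SELECT * FROM sourcetbl WHERE doj < ?",
            (cutoff,))
cur.execute("DELETE FROM sourcetbl WHERE doj < ?", (cutoff,))
conn.commit()
moved = cur.execute("SELECT COUNT(*) FROM destinationtbl").fetchone()[0]
left = cur.execute("SELECT COUNT(*) FROM sourcetbl").fetchone()[0]
print(moved, left)  # 2 1
```

In Oracle the equivalent would be a single INSERT ... SELECT followed by one DELETE with the same WHERE clause; whether even that can delete a million rows in 8 seconds depends on indexes, triggers, and redo, which is the real tuning question here.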
How to input data into a table with columns?
I am trying to input data into a table. My table have 5 columns and an unlimited amount of rows. I created a program in LabView that enters the data into the table but it enters all of the data in one row. I would like to enter the first set of information into the first column, the second set of info into the second column and so on. I am including a copy of the program that I am working with. I would like the number of runs to be put into the first column (it should count down like number 5 in first row, number 4 in second row, number 3 in third row, and so on). I would like the applied voltage to be placed in the second column, and so on. Any help or information will be greatly appreciated. I am working with LabView Version 6.1 and 8.0. I am submitting the vi from 6.1.
Attachments:
FJ-PROGRAM.vi 68 KB
Pondered,
I looked at your code and I think you might be making things too complicated. I've included a very simple example that demonstrates how to write a 2D array of integers to a table. Hope you find this helpful. It is in LV 7.1.
Chris C
Chris Cilino
National Instruments
LabVIEW Product Marketing Manager
Certified LabVIEW Architect
Attachments:
rows - columns.vi 17 KB -
Data in multiple tables and columns
Hi,
How to find if multiple tables are containing the same data?
'Apple' text is in three different tables and under three different column names.
Expected result
fruits_one
fruits_two
fruits_three
Select name, quantity from fruits_one;
Apple 1
Orange 1
Pear 1
select flavour, desc from fruits_two;
Red, Apple
Blue, Berry
select order,date,details from fruits_three;
101 11/11/2011 Grapes
102 12/01/2010 Apple
Thanks
Sandy
SQL> create table fruits_one (name varchar2(100), quantity number);
Table created.
SQL> insert into fruits_one
2 select 'Apple' name, 1 quantity from dual union all
3 select 'orange' name, 1 quantity from dual;
2 rows created.
SQL> commit;
Commit complete.
SQL> create table fruits_two (flavour varchar2(100), des varchar2(100));
Table created.
SQL> insert into fruits_two
2 select 'Red' flavour, 'Apple' des from dual union all
3 select 'blue' , 'berry' from dual ;
2 rows created.
SQL> commit;
Commit complete.
SQL> set serveroutput on
SQL> declare
2 l_search varchar2(10) := 'APPLE';
3 l_cnt number := 0;
4 begin
5
6 for x in (select column_name, data_type, table_name from user_tab_cols where data_type in ('VARCHAR2'))
7 loop
8
9 execute immediate 'select count(*) from "' || x.table_name ||'" where upper("' || x.column_name || '") like ''%' || l_search || '%''' into l_cnt;
10
11 if l_cnt > 0 then
12 dbms_output.put_line('table = "' || x.table_name ||'", column = "' || x.column_name ||'"');
13 end if;
14
15 end loop;
16
17 end;
18 /
table = "FRUITS_ONE", column = "NAME"
table = "FRUITS_TWO", column = "DES"
PL/SQL procedure successfully completed.
SQL> -
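The dictionary-driven idea in the PL/SQL block above carries over to any database with a queryable catalog. The same search in SQLite, using sqlite_master and PRAGMA table_info in place of user_tab_cols:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE fruits_one (name TEXT, quantity INTEGER);
CREATE TABLE fruits_two (flavour TEXT, des TEXT);
INSERT INTO fruits_one VALUES ('Apple', 1), ('orange', 1);
INSERT INTO fruits_two VALUES ('Red', 'Apple'), ('blue', 'berry');
""")
search = 'APPLE'
hits = []
# Walk the catalog: every TEXT column of every user table.
for (table,) in cur.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall():
    for _, col, ctype, *_rest in cur.execute(
            f'PRAGMA table_info("{table}")').fetchall():
        if ctype.upper() == 'TEXT':
            n = cur.execute(
                f'SELECT COUNT(*) FROM "{table}" WHERE UPPER("{col}") LIKE ?',
                ('%' + search + '%',)).fetchone()[0]
            if n > 0:
                hits.append((table, col))
print(hits)  # [('fruits_one', 'name'), ('fruits_two', 'des')]
```

As in the PL/SQL version, this is a full scan of every candidate column, so it is a diagnostic tool rather than something to run on a busy production schema.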
How to show data from a table having large number of columns
Hi ,
I have a report with a single row having a large number of columns, and I have to use a scroll bar to see all of them.
Is it possible to design the report in the below format (half of the columns on one side of the page, half on the other side):
Column1 Data Column11 Data
Column2 Data Column12 Data
Column3 Data Column13 Data
Column4 Data Column14 Data
Column5 Data Column15 Data
Column6 Data Column16 Data
Column7 Data Column17 Data
Column8 Data Column18 Data
Column9 Data Column19 Data
Column10 Data Column20 Data
I am using Apex 4.2.3 version on Oracle 11g XE.
user2602680 wrote:
Please update your forum profile with a real handle instead of "user2602680".
Yes, this can be achieved using a custom named column report template. -
Performance issues while querying data from a table having a large number of records
Hi all,
I have a performance issue with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is as below:
SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
SELECT SUM (B.BASE_TRANSACTION_VALUE)
FROM
MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A
WHERE A.ORGANIZATION_ID = B.ORGANIZATION_ID
AND A.ORGANIZATION_ID = :b1
AND B.REFERENCE_ACCOUNT = A.MATERIAL_ACCOUNT
AND B.TRANSACTION_DATE <= LAST_DAY (TO_DATE (:b2 , 'MON-YY' ) )
AND B.ACCOUNTING_LINE_TYPE != 15
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.02 0.05 0 0 0 0
Fetch 3 134.74 722.82 847951 1003824 0 2
total 7 134.76 722.87 847951 1003824 0 2
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 193 (APPS)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
1 1 1 SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
788242 788242 788242 NESTED LOOPS (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
1 1 1 TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
1 1 1 INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
788242 788242 788242 TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
8704356 8704356 8704356 INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
788242 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_PARAMETERS' (TABLE)
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
788242 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_TRANSACTION_ACCOUNTS' (TABLE)
8704356 INDEX MODE: ANALYZED (RANGE SCAN) OF
'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
row cache lock 29 0.00 0.02
SQL*Net message to client 2 0.00 0.00
db file sequential read 847951 0.40 581.90
latch: object queue header operation 3 0.00 0.00
latch: gc element 14 0.00 0.00
gc cr grant 2-way 3 0.00 0.00
latch: gcs resource hash 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
gc current block 3-way 1 0.00 0.00
********************************************************************************
On a 5 node RAC environment the program completes in 15 hours, whereas on a single node environment it completes in 2 hours.
Is there any way I can improve the performance of this query?
Regards
Edited by: mhosur on Dec 10, 2012 2:41 AM
Edited by: mhosur on Dec 10, 2012 2:59 AM
Edited by: mhosur on Dec 11, 2012 10:32 PM
CREATE INDEX mtl_transaction_accounts_n0
ON mtl_transaction_accounts (
transaction_date
, organization_id
, reference_account
, accounting_line_type
)
/ :p -
Insertion a record in a table having columns of different charsets using OLEDB
My development environment -
Database -> Microsoft SQL Server 2008 R2
OS -> Windows Server 2008 R2
Database Charset -> Chinese_PRC_CI_AS (Windows 936)
Operating System Charset -> Chinese
The below table has varchar fields with different charsets:
create table dbo.tcolcs1 (
c1 int not null primary key,
c2 varchar(30) collate SQL_Latin1_General_Cp1_CI_AS,
c3 varchar(30) collate Chinese_PRC_CI_AS
)
I want to insert the below record using the OLEDB APIs provided by Microsoft. Just for information, character 0x00C4 does not belong to the Windows 936 codepage.
insert into dbo.tcolcs1 values (10, NCHAR(0x00C4), NCHAR(0x4EBC))
Code snippet -
DBPARAMBINDINFO bind_info;
memset(&bind_info, 0, sizeof(DBPARAMBINDINFO));
bind_info.pwszDataSourceType = L"DBTYPE_VARCHAR";
bind_info.wType = DBTYPE_STR;
I have bound the varchar field with DBTYPE_STR. I can see that my code is not inserting the Latin1 character (0x00C4) correctly into the table. The code always inserts a blank character into the Latin1 column (c2) and 0x4EBC into the Chinese column (c3).
Later, I changed the binding from DBTYPE_STR to DBTYPE_BYTES as below -
bind_info.pwszDataSourceType = L"DBTYPE_BINARY";
bind_info.ulParamSize = 0;
bind_info.wType = DBTYPE_BYTES;
With the above change, I observed that OLEDB is converting the hex value to a string. It is inserting 0x00C4 as 'C4' and 0x4EBC as '4EBC'. I also tried adding 'AutoTranslate=no' to the driver connection string, but it did not help. How can I insert the above record into the above table with OLEDB?
Thanks in advance.
Did you try making the fields Unicode?
Visakh