Check whether String data is ASCII or Unicode
Hi,
How can we check whether String (array of characters) data is ASCII or Unicode using Java?
Please reply immediately.
Thanx in advance.
> How can we check String (array of characters) data is ASCII or Unicode using Java?
> Please reply immediately.
That's not the correct way to ask :P
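A short sketch of the check being asked about (my own illustration, not from the thread): a Java string is pure ASCII exactly when every char value is below 128, and java.nio's CharsetEncoder gives the same answer without a hand-written loop.

```java
import java.nio.charset.StandardCharsets;

public class AsciiCheck {
    // True if every character of s fits in 7-bit ASCII (0..127).
    public static boolean isAscii(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) > 127) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isAscii("Hello"));      // ASCII only
        System.out.println(isAscii("h\u00e9llo")); // contains é, so not ASCII
        // The same check via the charset API:
        System.out.println(StandardCharsets.US_ASCII.newEncoder().canEncode("Hello"));
    }
}
```

Note that a Java String is always Unicode internally (UTF-16 chars), so "is ASCII" can only mean "contains no characters outside the ASCII range".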
Similar Messages
-
Unicode and non-unicode string data types Issue with 2008 SSIS Package
Hi All,
I am converting a 2005 SSIS package to 2008. I have a task with SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine locally when I use a Data Conversion task to convert to DT_STR, but when I deploy the dtsx file on the server and run it from a SQL Server Agent job, it gives me the unicode and non-unicode string data types error for that field. I have checked the registry settings and they are the same on my local machine and the server. I tried both the Data Conversion task and the Derived Column task, with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
Thanks.

What are Unicode and non-Unicode data formats?
Unicode :
A Unicode character takes more bytes to store in the database. Many global businesses want to grow worldwide, which means serving customers in languages like Chinese, Japanese, Korean and Arabic; many websites these days support international languages for the same reason, which makes life easier for both parties.
To store such customer data, the database must provide a mechanism for storing international characters. This is not easy, and database vendors had to revise their strategies and come up with new mechanisms to support these characters. Big vendors like Oracle, Microsoft, IBM and others started providing international character support so that data can be stored and retrieved correctly, avoiding hiccups when doing business with international customers.
The difference in storing character data between Unicode and non-Unicode depends on whether the non-Unicode data is stored using double-byte character sets. All non-East Asian languages, plus Thai, store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space used by a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS), so for these languages there is almost no storage difference between non-Unicode and Unicode.
Encoding Formats:
Some common Unicode encoding formats made available by database vendors are UCS-2, UTF-8, UTF-16 and UTF-32. For SQL Server 7.0 and later, Microsoft uses the UCS-2 encoding to store Unicode data; under this mechanism, all Unicode characters are stored using 2 bytes.
Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
provider all internally represent Unicode data as UCS-2.
The options for using SQL Server 7.0 or SQL Server 2000 as a backend for an application that sends and receives Unicode data encoded as UTF-8 include the following. For example, if your business runs a website with ASP pages, this is what happens:
If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
setting is not available on IIS 4.0 and Windows NT 4.0.
Sorting and other operations :
The effect of Unicode data on performance is complicated by a variety of factors that include the following:
1. The difference between Unicode sorting rules and non-Unicode sorting rules
2. The difference between sorting double-byte and single-byte characters
3. Code page conversion between client and server
Operations like >, <, and ORDER BY are resource intensive, and it is difficult to get correct results if code-page conversion between client and server is not available.
Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
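The single-byte vs. double-byte storage arithmetic above can be checked directly from Java (an illustrative sketch; the charsets are standard Java names, with UTF-16BE standing in for the fixed-width 2-byte storage that UCS-2/nchar uses):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    // Number of bytes s occupies in the given encoding.
    public static int byteLength(String s, Charset cs) {
        return s.getBytes(cs).length;
    }

    public static void main(String[] args) {
        String latin = "A";    // single-byte in ASCII/ANSI code pages
        String cjk = "\u4e2d"; // 中, double-byte in DBCS code pages

        System.out.println(byteLength(latin, StandardCharsets.US_ASCII)); // 1 byte
        // Fixed-width 2-byte Unicode: every BMP character takes 2 bytes.
        System.out.println(byteLength(latin, StandardCharsets.UTF_16BE)); // 2 bytes
        System.out.println(byteLength(cjk, StandardCharsets.UTF_16BE));   // 2 bytes
    }
}
```

So a Latin-only column doubles in size when stored as Unicode, while a DBCS-encoded Asian column stays roughly the same size, which is exactly the trade-off described above.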
Non-Unicode :
Non-Unicode is the opposite of Unicode. With non-Unicode types it is easy to store a language like English, but not Asian languages that need more bits to store correctly; otherwise truncation occurs.
Now, let’s see some of the advantages of not storing the data in Unicode format:
1. It takes less space to store the data in the database, so we save a lot of disk space.
2. Moving database files from one server to another takes less time.
3. Backup and restore of the database take less time, which matters to DBAs.
Non-Unicode vs. Unicode Data Types: Comparison Chart
The primary difference between Unicode and non-Unicode data types is Unicode's ability to easily store foreign-language characters, which also requires more storage space.
Non-Unicode (char, varchar, text)                      Unicode (nchar, nvarchar, ntext)
Stores data in fixed or variable length                Same as non-Unicode
char: padded with blanks to the declared size          nchar: same as char
  (e.g. a char(10) holding 5 characters gets 5 blanks)
varchar: stores the actual value without padding       nvarchar: same as varchar
Requires 1 byte of storage per character               Requires 2 bytes of storage per character
Stores up to 8000 characters (char, varchar)           Stores up to 4000 characters (nchar, nvarchar)
Best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple
encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters."<sup>1</sup>
Best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world.
All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred
from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
Thanks Shiven:) If Answer is Helpful, Please Vote -
How to check for a particular string in XML data
Dear Oracle experts,
I'm using the following Oracle database:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have the following XML data:
p_msg_in CLOB :=
<DATA>
<FLD FNR="ZZ0584" DTE="26SEP12" SRC="DXB" DES="DAC" />
<AAA LBP="22334455" ETK="1234567/4" ACT="123" />
<AAA LBP="223344" ETK="2345678/1" />
<AAA LBP="223344" ETK="123456/1" ACT="345" />
</DATA>
Then I'm fetching the header details like this:
v_msg_xml := xmltype(p_msg_in);
FOR i IN multicur(v_msg_xml, '/DATA/FLD') LOOP
v_1:= i.xml.extract('//@FNR').getstringval();
v_4:= to_date(i.xml.extract('//@DTE').getstringval(),'DDMONYY');
v_5:= i.xml.extract('//@SRC').getstringval();
v_6:= i.xml.extract('//@DES').getstringval();
END LOOP;
After this, I need to loop over all the actual records one by one using this FOR loop. In each iteration, I need to check whether some string (for example, ACT) is present or not. In the example above, records 1 and 3 have an ACT value, so I need something like if instr('<AAA LBP="223344" ETK="123456/1" ACT="345" />', 'ACT') > 0 before I perform the step below.
How to achieve this.
I appreciate your help.
thank you.
FOR c IN multicur(v_msg_xml, '/DATA/AAA') LOOP
v_7 := c.xml.extract('//@LBP').getstringval();
v_8 := c.xml.extract('//@ETK').getstringval();
END LOOP;

SQL> DECLARE

  p_msg_in clob := '<DATA>
  <FLD FNR="ZZ0584" DTE="26SEP12" SRC="DXB" DES="DAC" />
  <AAA LBP="22334455" ETK="1234567/4" ACT="123" />
  <AAA LBP="223344" ETK="2345678/1" />
  <AAA LBP="223344" ETK="123456/1" ACT="345" />
  </DATA>';

  v_msg_xml xmltype;

  v_1 varchar2(30);
  v_4 date;
  v_5 varchar2(30);
  v_6 varchar2(30);
  v_7 varchar2(30);
  v_8 varchar2(30);

BEGIN

  v_msg_xml := xmltype(p_msg_in);

  select fnr, to_date(dte, 'DDMONRR'), src, des
    into v_1, v_4, v_5, v_6
    from xmltable('/DATA/FLD'
                  passing v_msg_xml
                  columns fnr varchar2(30) path '@FNR'
                        , dte varchar2(30) path '@DTE'
                        , src varchar2(30) path '@SRC'
                        , des varchar2(30) path '@DES'
         );

  dbms_output.put_line('V1 = '||v_1);
  dbms_output.put_line('V4 = '||v_4);
  dbms_output.put_line('V5 = '||v_5);
  dbms_output.put_line('V6 = '||v_6);

  for r in (
    select lbp, etk
      from xmltable('/DATA/AAA[@ACT]'
                    passing v_msg_xml
                    columns lbp varchar2(30) path '@LBP'
                          , etk varchar2(30) path '@ETK'
           )
  )
  loop
    dbms_output.put_line('LBP = '||r.lbp||' ETK = '||r.etk);
  end loop;

END;
/
V1 = ZZ0584
V4 = 26/09/12
V5 = DXB
V6 = DAC
LBP = 22334455 ETK = 1234567/4
LBP = 223344 ETK = 123456/1
PL/SQL procedure successfully completed.
-
Hello,
I am working on a project where I need to extract SharePoint list data and import it into a SQL Server table. I have a few lookup columns in the list.
Steps in my Data Flow :
Sharepoint List Source
Derived Column
Its formula: SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1))
Data Conversion
OLE DB Destination
But I am getting the error about not being able to convert between unicode and non-unicode string data types.
I am not sure what I am missing here.
In the Data Conversion, what should the data type be for the lookup column?
Please suggest here.
Thank you,
Mittal.

You have a Data Conversion transformation. In the destination, are you mapping the results of the Derived Column transformation or of the Data Conversion transformation? To avoid this error you need to use the Data Conversion output.
You can eliminate the need for the data conversion with the following in the derived column (creating a new column):
(DT_STR,100,1252)(SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1)))
The 100 is the length and 1252 is the code page used to interpret the string (I almost always use 1252).
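The code-page idea is the same outside SSIS. As a hedged Java illustration (assuming the JRE ships the "windows-1252" charset, which standard JDK builds do), code page 1252 maps each Western European character to a single byte:

```java
import java.nio.charset.Charset;

public class CodePageDemo {
    // Encode s using Windows code page 1252 (Western European).
    public static byte[] encode(String s) {
        return s.getBytes(Charset.forName("windows-1252"));
    }

    public static void main(String[] args) {
        byte[] b = encode("caf\u00e9"); // "café"
        System.out.println(b.length);   // 4: one byte per character, é -> 0xE9
    }
}
```

This is why a DT_STR cast needs a code page: it decides which single byte each non-ASCII character maps to.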
Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com -
Column "A" cannot convert between unicode and non-unicode string data types
I am following the SSIS overview video-
https://secure.cbtnuggets.com/it-training-videos/series/microsoft-sql-server-2008-business-development/6143?autostart=true
I have a flat file whose contents I want to import into a SQL database.
I created a Dataflow task, source file and oledb destination.
I am getting the following error:
"column "A" cannot convert between unicode and non-unicode string data types"
in the origin file the data type is coming as string[DT_STR] and in the destination object it is coming as "Unicode string [DT_WSTR]"
I used a Data Conversion object in between, but it doesn't work very well.
Please help; what should I do?

I see this often.
Right Click on FlatFileSource --> Show Advanced Editor --> 'Input and Output Properties' tab --> Expand 'Flat File Source Output' --> Expand 'Output Columns' --> Select your field and set the datatype to DT_WSTR.
Let me know if you still have issues.
Thank You,
Jay -
Cannot convert between unicode and non-unicode string data types.
I'm trying to copy the data from 21 tables in a SQL 2005 database to a MS Access database using SSIS. Before converting the SQL database from 2000 to 2005 we had this process set up as a DTS package that ran every month for years with no problem. The only way I can get it to work now is to delete all of the tables from the Access DB and have SSIS create new tables each time. But when I try to create an SSIS package using the SSIS Import and Export Wizard to copy the SQL 2005 data to the same tables that SSIS itself created in Access I get the "cannot convert between unicode and non-unicode string data types" error message. The first few columns I hit this problem on were created by SSIS as the Memo datatype in Access and when I changed them to Text in Access they started to work. The column I'm stuck on now is defined as Text in the SQL 2005 DB and in Access, but it still gives me the "cannot convert" error.
I was getting the same error while transferring data from SQL 2005 to Excel, but using the following method I was able to transfer the data. Hopefully it may also help you.
1) Using a Data Conversion transformation
The data type you need to select is DT_WSTR (Unicode in SQL Server 2005 terms).
2) Using a Derived Column transformation
The expression you need is:
(DT_WSTR, 20) (note: 20 can be replaced by your character size)
Note:
Both methods above create a replica of your existing column (the default name will be "copy of <column name>").
When mapping the data, do not map the actual column to the destination; instead select the column created by either of the above transformations (the replicated column).
How to fix "cannot convert between unicode and non-unicode string data types" :/
Environment: SQL Server 2008 R2
Introduction: Staging_table is a table where data from the source file is stored. Individual and ind_subject_scores are the destination tables.
Purpose: to load the data from a source .csv file (SSIS defines the table fields as varchar(50)) into the destination tables while keeping the table definitions.
I'm getting the validation error "Cannot convert between a unicode and a non-unicode string data types" for all the columns.
Please help.

Hi,
NVARCHAR = DT_WSTR
VARCHAR = DT_STR
Try below links:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/ed1caf36-7a62-44c8-9b67-127cb4a7b747/error-on-package-can-not-convert-from-unicode-to-non-unicode-string-type?forum=sqlintegrationservices
http://social.msdn.microsoft.com/Forums/en-US/eb0d1519-4be3-427d-bd30-ae4004ea9e8d/data-conversion-error-how-to-fix-this
http://technet.microsoft.com/en-us/library/aa337316(v=sql.105).aspx
http://social.technet.microsoft.com/wiki/contents/articles/19612.ssis-import-excel-to-table-cannot-convert-between-unicode-and-non-unicode-string-data-types.aspx
sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **. -
How to send data in ASCII (instead of Unicode) from XI using JMS adapter
Hi
The scenario is R/3 > XI >MQ > Third party SW. Inside XI, we have an ABAP mapping as well.
The receiving third party software needs data in ASCII. Currently we are achieving this in MQ, but would like to move this conversion to XI.
Any thoughts on that are welcomed.
Thanks in advance.
Cheers
danus
Message was edited by: Chidambaram Danus

Hello Stefan,
I have the same problem with data conversion to UTF-8.
The mainframe application sends us the data in the code page ISO-8859-1, but the JMS sender adapter transfers it as UTF-8.
Can I stop this?
You proposed OSS note 960663, but at which position in the module sequence must this be set up? First?
What are the other options of the parameter conversion.charset?
Is there a possibility without mapping?
Bye
Stefan -
I tried to use the config VIs to record some front-panel settings for later restoration, one of which could be a single space character (part of a string parsing system).
I soon discovered that whenever I tried to save that single-space value to an INI file, only a null string was saved.
After doing some digging I discovered that buried in the Write Key vi is a worker vi called Config Data Modify that uses Trim String on the string data before it is written to the file and that's what was eating my string character. I don't know whether this is a bug or a feature but there are at least three ways to fix it.
1) Assuming you want to leave the library VIs alone, you can pre-process any strings sent to "write key" to replace all spaces with "\20" and then post-process all strings read using "read key" to replace all instances of \20 with spaces.
Or, if you don't mind modifying the library VIs (either saving them under a different name or putting them back into the library in a modified state; caution: this can cause problems when you move code to another machine with an unmodified library), then:
2) You can yank the trim-string out of the Config Data Modify vi and hope that it does not have any undesirable side effects with regards to the other routines that use Config Data Modify (so far I have not found any in my limited testing)
or
3) You can modify the string pre-processing vi, Remove Unprintable Chars, to add the space character to the list of characters that get swapped out automatically.
Note that both option #1 (as suggested above) and option #3 will produce an INI file data entry that looks like key="\20Hello\20World\20" while option #2 produces an entry that looks like key=" Hello World "
The attached PDF contains screenshots of all this.
Attachments:
Binder1.pdf 2507 KB

Hi Warren,
there's a 4th option:
Simply set the "write raw string" input of the write key function to TRUE
This option only appears when a string is wired to that function!
Just re-checked:
I think it's a limitation of the config file format. It's text based, and (leading) spaces in the value are treated as trimmable whitespace. So your next option would be to use quotes around your string with spaces...
Message Edited by GerdW on 05-02-2009 08:32 PM
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Creating Mime-messages from String data
How do I save the string data of an email received from Outlook Express, read by calling BufferedReader's readLine() method over a socket connection, so that it can be converted into a MimeMessage?
Sorry, but I didn't read your code snippet very closely.
So you have a Vector v which contains the client part of the dialog.
This is a typical conversation (you don't use an EHLO or HELO handshake!? It's considered rude not to introduce yourself :) ):
EHLO CLIENTNAME
250
MAIL FROM:<[email protected]>
250 MAIL FROM:<[email protected]> OK
RCPT TO:<[email protected]>
250 RCPT TO:<[email protected]> OK
DATA
354 Start mail input; end with <CRLF>.<CRLF>
Message-ID: <24569170.1093420595394.JavaMail.cau@PTWPC019>
From: [email protected]
To: [email protected]
Subject: something
Mime-Version: 1.0
Content-Type: multipart/mixed;
boundary="----=_Part_0_17459938.1093420595224"
------=_Part_0_17459938.1093420595224
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
TEXT CONTENTS
------=_Part_0_17459938.1093420595224
Content-Type: text/html; charset=US-ASCII
Content-Transfer-Encoding: 7bit
<b>HTML CONTENTS<b>
------=_Part_0_17459938.1093420595224--
250 <412ADBC5000000B5> Mail accepted
QUIT
221 ontrob1.bmsg.nl QUIT
The results in the vector, from DATA to the terminating dot, should be the content of your constructor string.
Use this constructor:
MimeMessage mm = new MimeMessage(null, new ByteArrayInputStream(yourString.getBytes()));
At this point you can deconstruct the MIME message further.
maybe this code will help:
you should call the dumpPart method like this dumpPart( mm );
Store store;
Folder folder;
static boolean verbose = false;
static boolean debug = false;
static boolean showStructure = true;
private static void dumpPart(Part part) throws Exception {
if (part instanceof Message)
dumpEnvelope((Message) part);
/* Dump the raw input stream:
InputStream is = part.getInputStream();
// If "is" is not already buffered, wrap a BufferedInputStream around it.
if (!(is instanceof BufferedInputStream))
    is = new BufferedInputStream(is);
int c;
while ((c = is.read()) != -1)
    System.err.write(c);
*/
pr("CONTENT-TYPE: " + part.getContentType());
// Using isMimeType to determine the content type avoids
// fetching the actual content data until we need it.
if (part.isMimeType("text/plain")) {
pr("This is plain text");
pr("---------------------------");
if (!showStructure)
System.out.println((String) part.getContent());
} else if (part.isMimeType("multipart/*")) {
pr("This is a Multipart");
pr("---------------------------");
Multipart mp = (Multipart) part.getContent();
level++;
int count = mp.getCount();
for (int i = 0; i < count; i++)
dumpPart(mp.getBodyPart(i));
level--;
} else if (part.isMimeType("message/rfc822")) {
pr("This is a Nested Message");
pr("---------------------------");
level++;
dumpPart((Part) part.getContent());
level--;
} else if (!showStructure) {
// If we actually want to see the data, and it's not a
// MIME type we know, fetch it and check its Java type.
Object o = part.getContent();
if (o instanceof String) {
pr("This is a string");
pr("---------------------------");
System.out.println((String) o);
} else if (o instanceof InputStream) {
System.err.println("HELLO CAU 1111");
pr("This is just an input stream");
pr("---------------------------");
InputStream is2 = (InputStream) o;
int c2;
while ((c2= is2.read()) != -1)
System.out.write(c2);
System.err.println("\nHELLO CAU");
} else {
pr("This is an unknown type");
pr("---------------------------");
pr(o.toString());
} else {
pr("This is an unknown type");
pr("---------------------------");
private static void dumpEnvelope(Message msg) throws Exception {
pr("This is the message envelope");
pr("---------------------------");
Address[] a;
// FROM
if ((a = msg.getFrom()) != null) {
for (int j = 0; j < a.length; j++)
pr("FROM: " + a[j].toString());
//TO
if ((a = msg.getRecipients(Message.RecipientType.TO)) != null) {
for (int j = 0; j < a.length; j++)
pr("TO: " + a[j].toString());
// SUBJECT
pr("SUBJECT: " + msg.getSubject());
// DATE
Date d = msg.getSentDate();
pr("SendDate: " + (d != null ? d.toString() : "UNKNOWN"));
//FLAGS
Flags flags = msg.getFlags();
StringBuffer sb = new StringBuffer();
Flags.Flag[] sf = flags.getSystemFlags(); // get the system flags
boolean first = true;
for (int i = 0; i < sf.length; i++) {
String s;
Flags.Flag f = sf[i];
if (f == Flags.Flag.ANSWERED)
s = "\\Answered";
else if (f == Flags.Flag.DELETED)
s = "\\Deleted";
else if (f == Flags.Flag.DRAFT)
s = "\\Draft";
else if (f == Flags.Flag.FLAGGED)
s = "\\Flagged";
else if (f == Flags.Flag.RECENT)
s = "\\Recent";
else if (f == Flags.Flag.SEEN)
s = "\\Seen";
else
continue; // skip it
if (first)
first = false;
else
sb.append(' ');
sb.append(s);
String[] uf = flags.getUserFlags(); // get user-flag strings
for (int i = 0; i < uf.length; i++) {
if (first)
first = false;
else
sb.append(' ');
sb.append(uf[i]);
pr("FLAGS: " + sb.toString());
// X-MAILER
String[] hdrs = msg.getHeader("X-Mailer");
if (hdrs != null)
pr("X-Mailer: " + hdrs[0]);
else
pr("X-Mailer NOT available");
static String indentStr = " ";
static int level = 0;
/** Print a, possibly indented, string. */
public static void pr(String s) {
if (showStructure)
System.out.print(indentStr.substring(0, level * 2));
System.out.println(s);
Tricae -
What really happens: does it convert ASCII to Unicode?
Hi,
Java understands Unicode, which is a 2-byte encoding, and Office 97 doesn't understand Unicode. So how is data stored in Access 97 (ASCII, 1-byte encoding) correctly interpreted by Java? If I insert '\u0900', a '?' gets inserted into the Access table.
Can someone tell me?

I would expect that your String data would be converted to bytes using the default encoding on your system, exactly as if you had used "byte[] b = yourString.getBytes()". And since \u0900 is described as "Unassigned" in Unicode, it's most likely to be translated to '?'.
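That replacement behaviour is easy to demonstrate (a sketch of my own; note the method is getBytes(), and this version pins the charset to US-ASCII instead of relying on the platform default):

```java
import java.nio.charset.StandardCharsets;

public class Replacement {
    // Encode s as ASCII; unmappable characters are silently replaced.
    public static byte[] toAsciiBytes(String s) {
        // String.getBytes(Charset) substitutes the charset's replacement
        // byte for any unmappable character: '?' (0x3F) for US-ASCII.
        return s.getBytes(StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] b = toAsciiBytes("\u0900"); // no ASCII mapping exists
        System.out.println((char) b[0]);   // prints '?'
    }
}
```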
-
How to migrate from ascii to unicode (MaxDB 7.5)? loadercli: ERR -25347
Hi,
I use MaxDB 7.5.00.26. (OK, I know that I should switch to 7.6; however, that is not possible for now due to some customer restrictions, but should be possible quite soon.)
We'd like to migrate a DB from ASCII to Unicode. Based on the info in the thread "Error at copying database using dumps via loadercli: error -25364" I tried the following:
Export sourcedb
1. Export catalog and data
C:\> loadercli -d db_asc -u dba,dba
loadercli> export db catalog outstream file 'C:\tmp1\20080702a_dbAsc.catalog' ddl
OK
loadercli> export db data outstream file 'C:\tmp1\20080702b_dbAsc.data' pages
OK
loadercli> exit
Import targetdb
1. Create a new empty DB with '_UNICODE=yes'
2. Set 'columncompression' to 'no'
C:\> dbmcli -d db_uni -u dba,dba param_directput columncompression no
ERR
-24979,ERR_XPNOTFOUND: parameter not found
Couldn't find this parameter e.g. in dbmgui (parameters general, extended and support)
3. Import catalog and data
C:\> loadercli -d db_uni -u dba,dba
loadercli> import db catalog instream file 'C:\tmp1\20080702a_dbAsc.catalog' ddl
OK
loadercli> import db data instream file 'C:\tmp1\20080702b_dbAsc.data' pages
ERR -25347
Encoding type of source and target database do not match: source = ASCII, target
= UNICODE.
loadercli> exit
What is wrong? Does the migration from ASCII to Unicode have to be done some other way?
Can I migrate a DB from 7.5.00.26 to 7.6.03.15 in the same way, or should it be done differently?
It would be great if you could point me to a post etc. where these two migrations are explained in detail.
Thanks in advance - kind regards
Michael

Hi,
I can find neither "USEUNICODECOLUMNCOMPRESSION" nor "COLUMNCOMPRESSION". Could it be that these exist only from MaxDB version 7.6 on, and not in 7.5?
Kind regards,
Michael
The complete parameter list (created by "dbmcli -d db_uni -u dbm,dbm param_directgetall > maxdb_params.txt") is:
OK
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
_SERVERDB_FOR_SAP YES
_UNICODE YES
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 2
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 131072
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 262144
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 14
_MULT_IO_BLOCK_CNT 4
_DELAY_LOGWRITER 0
LOG_IO_QUEUE 50
_RESTART_TIME 600
MAXCPU 1
MAXUSERTASKS 50
_TRANS_RGNS 8
_TAB_RGNS 8
_OMS_REGIONS 0
_OMS_RGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
_ROW_RGNS 8
_MIN_SERVER_DESC 16
MAXSERVERTASKS 21
_MAXTRANS 292
MAXLOCKS 2920
_LOCK_SUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
_USE_ASYNC_IO YES
_IOPROCS_PER_DEV 1
_IOPROCS_FOR_PRIO 1
_USE_IOPROCS_ONLY NO
_IOPROCS_SWITCH 2
LRU_FOR_SCAN NO
_PAGE_SIZE 8192
_PACKET_SIZE 36864
_MINREPLY_SIZE 4096
_MBLOCK_DATA_SIZE 32768
_MBLOCK_QUAL_SIZE 16384
_MBLOCK_STACK_SIZE 16384
_MBLOCK_STRAT_SIZE 8192
_WORKSTACK_SIZE 8192
_WORKDATA_SIZE 8192
_CAT_CACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 3264
INIT_ALLOCATORSIZE 221184
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
_TASKCLUSTER_01 tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
_TASKCLUSTER_02 ti,100*dw;30000*us;
_TASKCLUSTER_03 compress
_MP_RGN_QUEUE YES
_MP_RGN_DIRTY_READ NO
_MP_RGN_BUSY_WAIT NO
_MP_DISP_LOOPS 1
_MP_DISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
_MP_RGN_PRIO NO
MAXRGN_REQUEST 300
_PRIO_BASE_U2U 100
_PRIO_BASE_IOC 80
_PRIO_BASE_RAV 80
_PRIO_BASE_REX 40
_PRIO_BASE_COM 10
_PRIO_FACTOR 80
_DELAY_COMMIT NO
_SVP_1_CONV_FLUSH NO
_MAXGARBAGE_COLL 0
_MAXTASK_STACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
_DW_IO_AREA_SIZE 50
_DW_IO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
_FBM_LOW_IO_RATE 10
CACHE_SIZE 10000
_DW_LRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
_DATA_CACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
_IDXFILE_LIST_SIZE 2048
_SERVER_DESC_CACHE 74
_SERVER_CMD_CACHE 22
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
_READAHEAD_BLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_T3
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 653
EXTERNAL_DUMP_REQUEST NO
_AK_DUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
_UTILITY_PROTFILE dbm.utl
UTILITY_PROTSIZE 100
_BACKUP_HISTFILE dbm.knl
_BACKUP_MED_DEF dbm.mdf
_MAX_MESSAGE_FILES 0
_EVENT_ALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3658
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-07-02 21:10:19
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_T3\DIAGHISTORY
_DIAG_SEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 43690
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION YES
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO -
Xy graph in GUI. Show string data related to a X,Y point
Is it possible to show string data attached/related to an X,Y point in an XY graph while hovering the mouse or clicking a cursor?
Kind regards

I assume you want to let the x-y value pair hover over the graph at the mouse position. Perhaps this will help you:
http://forums.ni.com/t5/LabVIEW/Text-overlay-annotation-onto-an-intensity-graph/m-p/883438/highlight...
Another option might be just to use the cursor functionalities of XY graphs, although they don't hover at mouse position.
Other than that, I had a similar request with Image Displays in IMAQ (Vision Development Module), where it turned out that using a simple string control from the classic palette, moved around according to the mouse position, is an efficient way to let information hover wherever you want. Check it out:
http://forums.ni.com/t5/LabVIEW/Why-do-overlays-take-so-much-longer-on-single-precision/m-p/2376338#... -
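The "move a control with the mouse" trick from the last reply is language-agnostic. As an aside (the original thread is LabVIEW; this is only an illustrative Java sketch, all names invented), the core of it is a small positioning function that offsets the label from the cursor and nudges it back inside the panel near an edge:

```java
import java.awt.Point;
import java.awt.Rectangle;

public class HoverLabel {
    // Place a hover label just right/below the cursor, flipped to the
    // other side if it would spill over a panel edge.
    static Point labelPosition(Point mouse, Rectangle panel, int labelW, int labelH) {
        int x = mouse.x + 10;
        int y = mouse.y + 10;
        if (x + labelW > panel.x + panel.width)  x = mouse.x - 10 - labelW;
        if (y + labelH > panel.y + panel.height) y = mouse.y - 10 - labelH;
        return new Point(x, y);
    }

    public static void main(String[] args) {
        Rectangle panel = new Rectangle(0, 0, 400, 300);
        // Cursor in the middle: label sits below/right of it.
        System.out.println(labelPosition(new Point(100, 100), panel, 80, 20)); // x=110, y=110
        // Cursor near the right edge: label flips to the left side.
        System.out.println(labelPosition(new Point(390, 100), panel, 80, 20)); // x=300, y=110
    }
}
```

In a real GUI you would call this from a mouse-motion handler and set the label's location from the returned point; the offsets and flip behaviour here are just one reasonable choice.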
I have some code that sends strings through a socket to a C++ program on another machine.
The test strings I have been using are three characters long (i.e. 6 bytes). The buffer size I have put on the socket is 72 bytes, so I should have no problem writing these strings.
The first three strings always arrive at the other program fine, but after that only a substring of each string seems to get through. I am absolutely certain that the strings are complete before I send them, because I print them to the screen at the same time.
I have tried changing the type of output stream I use (DataOutputStream, BufferedOutputStream, BufferedWriter, StringWriter, and PrintWriter). It still happens.
Can anybody tell me why this might happen?
- Adam

Without more info it is hard guessing. If you want reassurance that it should work, then yes, it should work.

Well, that's kinda what I'm looking for. I'm wondering if anybody knows of a reason why a C++ program (running on Windows; not by my choice, btw) would read the string differently than a Java program?
For all the XxxWriter types you used, I hope you declared the charset to be one that preserves the 2 bytes/char you expect (UTF-8 and ISO-8859-1 don't).

I haven't modified that from whatever default is set. This may be the problem; I will look into that, thanks.
You certainly did not use the BufferedOutputStream without somehow encoding the characters into bytes, so how did you do it?

I'm not sure (it was last week that I tried it), but I think the BufferedOutputStream has a method writeBytes(String) that I used.
For DataOutputStream, I hope you used writeChar, which guarantees that 2 bytes/char are sent.

Nope. I mostly tried to stick with methods that accept a String, so that I was sure it was sent out in the right format to be read back in as a String. I wasn't sure what the actual format of a String is when passed through a socket; specifically, whether there is a terminating character (I know C uses '\0' to end a string). Is there any additional info needed to write a string to a socket?
If you did all this, ... well, I would not fiddle with the socket's buffer size.

Sorry, but I may have to maintain a low buffer size, because these strings are not the only things being sent over this socket. Do you think the buffer size is affecting the problem? I wondered, but the buffer size seems more than large enough to send 3-character strings.
That's all that comes to mind. Did you try netcat (aka nc) as a server to make sure the problem is not at the receiving end?

I haven't tried this yet, but I will if I can't figure this out. Unfortunately, I'm NOT the author of the code that receives the data, and the guy who is has simply assumed that the problem is my fault (although he's never actually tested his code with the string data being sent before), and is not interested in checking to make sure he's done it right. I tried looking over his code, but he's got the input routine buried in an #include file that I don't have.
Thanks for the input, Harald. There are a few things there that I will look into.
- Adam -
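For what it's worth, the "partial string" symptom in threads like this usually goes away once each message is framed explicitly: write a length prefix, then the encoded bytes, so the receiver knows exactly how many bytes belong to each string and no terminator is needed. A sketch of the idea (shown over byte arrays instead of a live socket so it is self-contained; the stream classes are standard Java I/O, but the 4-byte-length-plus-UTF-8 convention is just one possible protocol, not what the original poster used):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class Framing {
    // Write one message: 4-byte big-endian length, then the UTF-8 payload.
    static void writeMessage(DataOutputStream out, String s) throws IOException {
        byte[] payload = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Read one message back: the length prefix says exactly how much to read.
    static String readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // blocks until all len bytes arrive
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeMessage(out, "abc");
        writeMessage(out, "def");

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readMessage(in)); // abc
        System.out.println(readMessage(in)); // def
    }
}
```

Because DataOutputStream writes the int in network byte order, a C++ receiver only needs to read 4 bytes, byte-swap if on a little-endian machine, then read exactly that many payload bytes; both sides must agree on the charset.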
Hi,
Is there a ColdFusion function that checks a complete date (mm/dd/yyyy) against another complete date (mm/dd/yyyy)?
I have used the DateCompare but it only checks the month, day or year depending on the precision being used.
Thanks,
Mike

I think you might need to read the docs a bit more closely. From the docs for dateCompare():
datePart
Optional. String. Precision of the comparison.
s Precise to the second (default)
n Precise to the minute
h Precise to the hour
d Precise to the day
m Precise to the month
yyyy Precise to the year
Indeed, even by default its behaviour is not what you suggest it is.
Adam
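The precision idea is not ColdFusion-specific. As an aside (not part of the original thread), the same "compare only down to a given unit" behaviour can be sketched in Java with java.time truncation:

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class DatePrecision {
    // Compare two date-times only down to the given unit, ignoring
    // anything finer (like dateCompare's datePart argument).
    static int compareTo(LocalDateTime a, LocalDateTime b, ChronoUnit unit) {
        return a.truncatedTo(unit).compareTo(b.truncatedTo(unit));
    }

    public static void main(String[] args) {
        LocalDateTime morning = LocalDateTime.of(2008, 7, 2, 9, 15);
        LocalDateTime evening = LocalDateTime.of(2008, 7, 2, 21, 10);
        // Same day, so day precision reports "equal"...
        System.out.println(compareTo(morning, evening, ChronoUnit.DAYS));  // 0
        // ...but hour precision does not (negative: morning is earlier).
        System.out.println(compareTo(morning, evening, ChronoUnit.HOURS) < 0); // true
    }
}
```

One caveat: truncatedTo only supports units up to DAYS, so month- or year-precision comparisons (dateCompare's m and yyyy) would need a different approach, e.g. comparing YearMonth or Year values.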