A problem about Chinese encoding

When I access the portal over SSL, I run into one or two encoding problems.
To reproduce them:
1. Set the locale to Chinese (zh_CN)
2. Access the portal over SSL
3. Go to Administration / create portlet
The popup window's title is then displayed with bad encoding.
Also, if I search for Chinese text, only garbled characters appear in the search box and I get no results.
My running environment:
Windows 2003
Plumtree Foundation v6.0
Plumtree Publisher v6.2
Tomcat 5.0.58 with JRE 1.4.2.08
MS SQL 2000 SP4
I have also tried newer BEA ALUI versions and get the same problems.
I would appreciate any help.

ANSI 0xFF is NBSP. NBSP in Unicode is 0x00A0... so neither 0x00FF (which is ÿ) nor 0xF5F8 (which is in the "private use" range) is correct. Something really weird is going on.

We are talking about CP1252, and in CP1252 0xFF is ÿ (see http://czyborra.com/charsets/iso8859.html for example), and thus \u00FF is correct.
By the way, I would never use Windows Notepad as a reference application.
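A quick way to check what a single byte decodes to under a given codepage is a small Java sketch like the following (purely illustrative, nothing portal-specific):

import java.nio.charset.Charset;

public class ByteCheck {
    public static void main(String[] args) {
        byte[] b = { (byte) 0xFF };
        // Decode the single byte 0xFF under CP1252 and ISO-8859-1
        String cp1252 = new String(b, Charset.forName("windows-1252"));
        String latin1 = new String(b, Charset.forName("ISO-8859-1"));
        // Both print U+00FF (y with diaeresis); NBSP would be U+00A0
        System.out.printf("CP1252:  U+%04X%n", (int) cp1252.charAt(0));
        System.out.printf("Latin-1: U+%04X%n", (int) latin1.charAt(0));
    }
}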

Similar Messages

  • Problem about ZHT32EUC encoding database and JDBC clients

    We have encountered a problem with the following configuration:
    Oracle Database Server
    AIX
    Oracle 8.1.7 - ZHT32EUC character encoding
    Clients are Java Applet/Servlet/JSP/EJB, as well as another
    ZHT32EUC database used for some data import/export (abbreviated as OLD).
    The problem we have is that no matter what we set the NLS_LANG
    environment variable to on the Java clients, we cannot correctly read
    (via ResultSet.getString()) the data imported from OLD over the DB link.
    However, the data we inserted with our own Java clients can be read
    flawlessly with JDBC calls.
    Any suggestions will be appreciated.
    Cheng Chun Kit
    Software Specialist
    Automated Systems (HK) Ltd

    Hard to tell, as you are not providing a lot of details. What is
    the OS that your other database is on? What are you actually
    setting NLS_LANG to? Are you using Oracle's JDBC driver or
    someone else's? Which driver, OCI or thin? I would advise you to
    use the DUMP command to look at the data. I suspect you have not
    loaded the data properly into the database.
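    If it helps, here is a rough JDBC sketch of inspecting the stored bytes with DUMP; the connection string, user, table and column names are made up for illustration and need to be adapted to your schema:

    import java.sql.*;

    public class DumpCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                 Statement st = con.createStatement();
                 // DUMP(..., 1016) shows the character set name plus the raw bytes in hex
                 ResultSet rs = st.executeQuery(
                     "SELECT DUMP(some_text_col, 1016) FROM some_table WHERE ROWNUM <= 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }

    Comparing the hex bytes of rows imported from OLD with rows inserted by your own clients should show whether the stored bytes themselves differ or only the client-side decoding does.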

  • How do I change the font Firefox uses to display web sites? (About a Chinese font problem)

    Hello guys,
    I have a problem: Chinese fonts do not display well, and some characters cannot be shown at all.
    How do I set the font size, or use a different Chinese font, for displaying web sites?
    Thanks!
    Last edited by wing (2014-11-16 15:27:05)

    Yes, I read the wiki pages https://wiki.archlinux.org/index.php/Font_configuration and https://wiki.archlinux.org/index.php/Firefox , but I am not sure which page addresses my problem.
    This is Linux Mint using Firefox to display a web site:
    http://i60.tinypic.com/2dbqmc2.png
    http://i58.tinypic.com/2mwy16f.png
    And this is Arch Linux displaying the same site in Firefox:
    http://i60.tinypic.com/2z3vznq.png
    http://i59.tinypic.com/21j7mgn.png
    I don't know why some characters are very big, some are small, and some cannot be displayed at all.
    -- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
    Last edited by wing (2014-11-17 07:59:44)

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the upper 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
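    For example, here is a quick Java check of how many bytes UTF-8 actually uses for different characters (purely illustrative):

    import java.nio.charset.StandardCharsets;

    public class Utf8Lengths {
        public static void main(String[] args) {
            System.out.println("A".getBytes(StandardCharsets.UTF_8).length);            // 1 byte  (ASCII)
            System.out.println("\u00DF".getBytes(StandardCharsets.UTF_8).length);       // 2 bytes (the German sharp s)
            System.out.println("\u4E2D".getBytes(StandardCharsets.UTF_8).length);       // 3 bytes (a common Chinese character)
            System.out.println("\uD834\uDD1E".getBytes(StandardCharsets.UTF_8).length); // 4 bytes (a character outside the BMP)
        }
    }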
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. Then, in their text editor, using the codepage for their region, they insert a character like ß and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
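    A minimal Java sketch of Point 3 (the file name is just an example):

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class ExplicitEncoding {
        public static void main(String[] args) throws IOException {
            // Write with an explicit encoding instead of relying on the platform default
            try (Writer out = new OutputStreamWriter(
                     new FileOutputStream("example.txt"), StandardCharsets.UTF_8)) {
                out.write("\u4E2D\u6587 test\n");
            }
            // Read it back with the same, explicitly stated encoding
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                     new FileInputStream("example.txt"), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }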
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
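    As one possible illustration of Point 4, writing XML through an XML API in Java so the declaration and the bytes always agree (file and element names are made up):

    import java.io.FileOutputStream;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class WriteXml {
        public static void main(String[] args) throws Exception {
            try (FileOutputStream out = new FileOutputStream("titles.xml")) {
                XMLStreamWriter xml = XMLOutputFactory.newInstance()
                        .createXMLStreamWriter(out, "UTF-8");
                xml.writeStartDocument("UTF-8", "1.0"); // the encoding ends up in the XML declaration
                xml.writeStartElement("title");
                xml.writeCharacters("\u5927\u7EB2");    // written as UTF-8 bytes for us
                xml.writeeEndElementPlaceholder(); // (see note below)
                xml.writeEndDocument();
                xml.close();
            }
        }
    }

    (Correction to the sketch above: the call should read xml.writeEndElement(); there is no other end-element method.)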
    Ok, you're reading & writing files correctly, but what about inside your code? What happens there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
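    One caveat worth keeping in mind with a 16-bit char type, shown with a small Java check (the string holds a single character outside the Basic Multilingual Plane):

    public class CodePoints {
        public static void main(String[] args) {
            String s = "\uD834\uDD1E";                            // one character, U+1D11E, stored as two chars
            System.out.println(s.length());                       // 2 -- counts 16-bit chars
            System.out.println(s.codePointCount(0, s.length()));  // 1 -- counts actual characters
        }
    }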
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding, it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    >
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the upper 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of unicode, all of unicode, are based on ASCII. The representational format of UTF-8 is required to implement unicode, thus it must represent those characters. It uses the idiom supported by variable width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. Then, in their text editor, using the codepage for their region, they insert a character like ß and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it then it is invalid. End of story. It has nothing to do with html/xml.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly, but what about inside your code? What happens there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business and create solutions that are appropriate to it. Thus there is absolutely no point in someone who is creating an inventory system for a standalone store crafting a solution that supports multiple languages.
    Another example: in high-volume systems, moving and storing bytes is relevant. As such one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases reduces the total load on the system; in such systems incremental savings affect operating costs and give a speed-based marketing advantage.

  • A problem about Ms Sql 7 - Oracle 8.1.5 ?

    hi
    I used OMWB to migrate MS SQL 7 to Oracle 8.1.5; both environments use a Chinese character set, and it worked well. Thanks to the OMWB team – the whole migration process didn't hit any problems. But I met a problem with the character set: I don't see the right content in CLOB columns; all the data in the CLOB columns looks something like the following:
    0F5C3A53A44E1A904575BE8FBF4F776302303052FD56388D01303100380006529F943052B065164E
    I mapped the SQL Server text datatype to the CLOB datatype. I can see the right data directly in SQL Server with a SELECT, but I can't see the right data in SQL*Plus.
    What can I do? Does Migration Workbench not support multi-byte character sets such as Chinese?

    I already knew the solution to that problem, but I met another one.
    I checked my SQL 7 database again and found that the large columns use the ntext datatype. I changed ntext to text and migrated the data again. In SQL*Plus the data in the CLOB columns now shows as '????.?-?'. I know this is caused by NLS_LANG; my NLS_LANG is simplified chinese_china.zhs16gbk. How do I set my NLS_LANG to get the right data?

  • Problem parsing chinese, arabic characters

    My xml file contains the following lines.
    <localedata locale="ja">
    <title>¥¢¥¦¥È¥é¥¤¥ó</title>
    <description></description>
    </localedata>
    <localedata locale="zh">
    <title>´ó¸Ù</title>
    <description></description>
    </localedata>
    <localedata locale="zh_TW">
    <title>݆Àª</title>
    <description></description>
    </localedata>
    I am parsing the XML file and rewriting it to another file. The following is the code that creates the Document object:
    public void intializeDocument(String file) throws Exception {
        InputSource is = new InputSource(new FileInputStream(file));
        is.setEncoding("UTF-8");
        doc = parserXML(is);
        mainNode = doc.getDocumentElement();
    }

    public Document parserXML(InputSource file)
            throws SAXException, IOException, ParserConfigurationException {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(file);
    }
    But the output after doing some updates is as given below
    <localedata locale="ja">
    <title>??????</title>
    <description></description>
    </localedata>
    <localedata locale="zh">
    <title>??</title>
    <description></description>
    </localedata>
    <localedata locale="zh_TW">
    <title>??</title>
    <description></description>
    </localedata>
    Edited by: moiz134 on Jan 29, 2009 3:09 AM
    Please advise what should be done to resolve the issue.

    moiz134 wrote:
    Thank you for your reply, JoachimSauer. I have removed the setEncoding method, but during output I set the encoding to UTF-8, which helps me to print the Arabic and Chinese characters.
    It's still not necessary, if you use the XML API for writing your files.
    >
    TransformerFactory transfac = TransformerFactory.newInstance();
    Transformer trans = transfac.newTransformer();
    trans.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
    trans.setOutputProperty(OutputKeys.INDENT, "yes");
    StringWriter sw = new StringWriter();
    StreamResult result = new StreamResult(sw);
    DOMSource source = new DOMSource(p1.mainNode);
    trans.transform(source, result);
    String xmlString = sw.toString();
    Why would you write your XML to a String when you later want it in a file?
    That only costs memory and introduces another chance to get things wrong. Just use a StreamResult constructed from a File object.
    Is there a better way to identify the encoding type of the input XML file and pass it as a parameter while outputting the file?
    There's no need to do that. Why would you want to?
    Edited by: moiz134 on Jan 29, 2009 6:11 AM
    My xml files first line is as given below
    <?xml version="1.0" encoding="UTF-8"?>
    I am using the method below to get the encoding type of the XML file, but it always returns null as output. Please advise.
    InputSource is = new InputSource(new FileInputStream(file));
    System.out.println(is.getEncoding());
    I don't think that works. The encoding will only be checked when the parser actually uses your InputSource, and then it's only used internally.
    You don't need to know about the encoding, because the Parser will always give you correctly-decoded Java String objects (which have a defined encoding, which is unrelated to the encoding of the data in the original XML).
    My guess is still that you somehow use System.out.println() or a broken XML editor/viewer to verify the content of your XML.
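    For what it's worth, here is a sketch along the lines suggested above – parse without setEncoding(), then write the DOM straight to a File with the encoding stated on the Transformer (the file names are just examples):

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;

    public class WriteBack {
        public static void main(String[] args) throws Exception {
            // The parser reads the encoding from the XML declaration itself
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("locales.xml"));
            Transformer trans = TransformerFactory.newInstance().newTransformer();
            trans.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
            trans.setOutputProperty(OutputKeys.INDENT, "yes");
            // StreamResult built from a File -- no intermediate String
            trans.transform(new DOMSource(doc), new StreamResult(new File("locales-out.xml")));
        }
    }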

  • The Problem about Monitoring Motion using PCI-7358 on LabVIEW Real Time Module

    Hello everyone,
    I have a problem about monitoring the position of an axis. First let me give some details about the motion controller system I’m using.
    I’m using PCI-7358 as controller and MID-7654 as servo driver, and I’m controlling a Maxon DC Brushed motor. I want to check the dynamic performance of the actuator system in a real time environment so I created a LabVIEW function and implemented it on LabVIEW Real Time module.
    My function loads a target position (Load Target Position.vi) and starts the motion (Start.vi); then, in a timed loop, I read the instantaneous position using Read Position.vi. When I graphed the data taken from Read Position.vi, I saw that the same value is returned for 5 sequential loop iterations. I checked the total time required by Read Position.vi to complete its task and it's 0.1 ms. I configured the data-acquisition loop to complete one iteration in 1 ms, but the data shows that 5 sequential iterations read the same value. Why?

    Read Position.flx can execute much faster than 5 ms but as it reads a register that is updated every 5 ms on the board, it reads the same value multiple times.
    To get around this problem there are two methods:
    Buffered High-Speed-Capturing (HSC)
    With buffered HSC the board stores a position value in its onboard buffer each time a trigger occurs on the axis' trigger input. HSC allows a trigger rate of about 2 kHz. That means you can store a position value every 500 µs. Please refer to the HSC examples. You may have to look into the buffered breakpoint examples to learn how to use a buffer, as there doesn't seem to be a buffered HSC example available. Please note that you need an external trigger signal (e.g. from a counter of a DAQ board). Please also note that the amount of position data that you can acquire in a single shot is limited to about 16,000 values.
    Buffered position measurement with additional plugin-board
    If you don't have a device that allows you to generate a repetitive trigger signal as required in method 1.), you will have to use an additional board, e.g. a PCI-6601. This board provides four counter/timers. You could either use this board to generate the trigger signal or you could use it to do the position capturing itself. A PCI-6601 (or an M-Series board) can run a buffered position acquisition at a rate of several hundred kHz and with virtually no limitation on the amount of data to be stored. You could even route the encoder signals from your 7350 to the PCI-6601 by using an internal RTSI cable (no external wiring required).
    I hope this helps,
    Jochen Klier
    National Instruments

  • An old and difficult problem about "UnsatisfiedLinkError"

    Hi dear all,
    I have been stuck on a problem with "UnsatisfiedLinkError". I have a C++ class HelloWorld with a method hello(), and I want to call it from within a Java class. In fact, I have succeeded in calling it on the Windows platform. But when I moved it to Linux, the "UnsatisfiedLinkError" comes out. I have tried the measures the forum suggested, but they failed.
    The source code is very simple to demonstrate JNI.
    "HelloWorld.h"
    #ifndef INCLUDEDHELLOWORLD_H
    #define INCLUDEDHELLOWORLD_H
    class HelloWorld
    {
    public:
        void hello();
    };
    #endif
    "HelloWorld.cpp"
    #include <iostream>
    #include "HelloWorld.h"
    using namespace std;
    void HelloWorld::hello()
    {
        cout << "Hello, World!" << endl;
    }
    "JHelloWorld.java"
    public class JHelloWorld
    {
        public native void hello();
        static
        {
            System.loadLibrary("hellolib");
        }
        public static void main(String[] argv)
        {
            JHelloWorld hw = new JHelloWorld();
            hw.hello();
        }
    }
    "JHelloWorld.cpp"
    #include <iostream>
    #include <jni.h>
    #include "HelloWorld.h"
    #include "JHelloWorld.h"
    JNIEXPORT void JNICALL Java_JHelloWorld_hello (JNIEnv * env, jobject obj)
    {
        HelloWorld hw;
        hw.hello();
    }
    All the files are in the same directory and all the commands are run in that directory:
    1. javac JHelloWorld.java
    2. javah -classpath . JHelloWorld
    3. g++ -c -I/usr/java/jdk1.3/include -I/usr/java/jdk1.3/include/linux JHelloWorld.cpp HelloWorld.cpp
    4. ld -shared -o hellolib.so *.o
    5. java -cp . -Djava.library.path=. JHelloWorld
    Exception in thread "main" java.lang.UnsatisfiedLinkError: no hellolib in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1349)
    at java.lang.Runtime.loadLibrary0(Runtime.java:749)
    at java.lang.System.loadLibrary(System.java:820)
    at JHelloWorld.<clinit>(JHelloWorld.java:7)
    Tried another measure:
    i) export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
    ii)java -cp . JHelloWorld
    The same error came out as above.
    I really don't know what is wrong with it.
    Could you please help me as soon as possible?
    Thanks.
    Regards,
    Johnson

    Hi Fabio,
    Thanks a lot for your help.
    It is very kind of you.
    Regards,
    Johnson
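    In case it is still useful: on Linux, System.loadLibrary("hellolib") looks for a file named libhellolib.so, and linking with g++ -shared (instead of ld) pulls in the C++ runtime, so a build along these lines (include paths taken from the original post) usually avoids this particular error:

    g++ -shared -fPIC -o libhellolib.so -I/usr/java/jdk1.3/include -I/usr/java/jdk1.3/include/linux JHelloWorld.cpp HelloWorld.cpp
    java -cp . -Djava.library.path=. JHelloWorld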

  • A problem about calling Labview vi in VB

    Hi all:
    I am running into a problem with data transfer and parallel operation between VB and LabVIEW.
    I want to develop a VB program from which a LabVIEW VI can be called and the corresponding parameters transferred to LabVIEW, and then I can also operate my system from the VB program at the same time – something like parallel operation of the VB and LabVIEW programs.
     But the questions are:
    1.   If I use the "Call" method of ActiveX in VB, and the LabVIEW subVI has not stopped (for example, a loop structure), I cannot do parallel operation in the VB program. The error message is "other application is busy", which is attached below. The sample code is also attached.
    2.   I tried to use other methods like "OpenFrontPanel" and "Run", but I am not sure how to transfer the parameters.
    3.  Then I tried to use "SetControlValue" to set the parameters, but there is an error ":= expected", which is very strange, because the statement I wrote follows the help documents [e.g.: VI.SetControlValue ("string", value)]. Why does it still need a ":="?
    Does anybody know something about it? Thanks a lot
    Message Edited by hanwei on 11-07-2008 03:18 PM
    Attachments:
    vb_labview_error_message_1.JPG ‏14 KB
    VB_to_LV.zip ‏10 KB

    I sure hope OP has solved it by now.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • HT201210 I have a problem with my iPhone 4S: Wi-Fi and Bluetooth don't work since upgrading to iOS 7.0.3. Can anyone help me solve this problem? Thanks, regards, Paulus

    Hi everyone, I have a problem with my iPhone 4S: Wi-Fi and Bluetooth don't work since upgrading to iOS 7.0.3. Can anyone help me solve this problem? Thanks, regards, Paulus

    Try the suggestions here to see if they resolve your problem:
    http://support.apple.com/kb/ts1559
    If these don't work you may have a hardware problem. Visit an Apple store for an evaluation or contact Apple Support.

  • Problem about Handling of Empty Files in File Adapter

    Hello everyone,
    NetWeaver 2004s --- XI
    On the sender side I have a File Adapter.
    Now I have a problem with the handling of empty files: when I send an empty file, an empty message should not be created.
    I have seen the following text in the help documentation, but I cannot find the corresponding parameter in the adapter configuration.
    can you give me some tips?
    Thx in advance
    best regards
    Yaning
    SAP Help Document about the File Adapter
    Handling of Empty Files
    Specify how empty files (length 0 bytes) are to be handled.
    ○ Do Not Create Message
    No XI messages are created from empty files.
    The files are processed according to the selected Processing Mode.
    For example, if the processing mode is Delete, empty files are deleted in the source directory.
    ○ Process Empty Files
    XI messages are created with an empty main payload.
    The files are processed according to the selected Processing Mode.
    ○ Skip Empty Files
    No XI messages are created from empty files.
    Empty files are skipped and remain in the source directory.
    Help Docu

    hi,
    it's available since SP19 for XI 3.0
    and the corresponding SPS for XI 7.0
    http://help.sap.com/saphelp_nw04/helpdata/en/44/f565854b7341e6e10000000a1553f6/frameset.htm
    so probably you need to install the new SP
    Regards,
    michal
    XI / PI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

  • Problem about sales order stock stock transfer and batch determination

    Hi experts, I have a problem with sales order stock transfer and batch determination. The following is the current situation of my system:
      In OMCG I assigned search procedure ME0001 to both movement types 311 and 311 E and ticked "check batch". After that, I found that if I need to transfer unrestricted-use material from storage location 1000 to 2000 with movement type 311, I just need to enter * in the batch field and the system will display all of the available batches. But for transferring sales order stock with movement type 311 E, after I enter * in the batch field, no batch is displayed and there is also no message from the system.
      Can anybody help me? Is there anything else I need to do? Thanks very much.

    I think my question was not clear. Actually, I already tried 562 E, 411 E and 413. All of these transactions look for the sales order, but unfortunately the sales order has been deleted from SAP.

  • Hello Apple, I have a problem with my iPhone (and my friends have this problem too) with making and answering calls: when I call, I can't hear anything, and once the other person answers neither of us can hear anything

    Hello Apple,
    I have a problem with my iPhone, and my friends have this problem too.
    My iPhone has a problem with making and answering calls. When I use my iPhone to call, I can't hear anything from it; the person I call can answer, but once they answer neither of us can hear anything. When I put the iPhone to my face the screen stays on, and when I quit the Phone application and open it again it automatically redials my recent call. When my friends call me, my iPhone doesn't show anything, not even the missed call; I only know I missed a call from the carrier's messages. Please check these problems – I have restored my iPhone 4 times this week. I live in Hatyai, Songkhla, Thailand, and many people in my city have this problem.
    Who else has this problem?

    Apple isn't here; this is a user-based forum for technical questions. The solution is to restart, reset, and restore as new (which is in the manual); after that, get it replaced for hardware failure. If you're within your one-year warranty it's replaced; if it is out of warranty, then it is $199.

  • Problem about updating to Mac OS X 10.5.3

    Hi there,
    I just bought my MacBook two months ago. I encountered a problem updating it to the latest version of Mac OS X. The system asked me to restart, and I proceeded. Everything looked fine until a pop-up window appeared saying "Mac OS X 10.5.3 package can not be validated". Does anyone know what that means? How can I update my little Mac?

    Hi Hummer,
    First, welcome to Apple discussions.
    To answer your question, your computer checks all downloaded updates before installing them. It does this to make sure it's got the entire file and that the update will install correctly. The fact that validation fails on your machine probably means the file did not download correctly.
    To install 10.5.3, simply go to this website, download the "10.5.3 combo update" and then run it on your Mac: http://www.apple.com/support/downloads/

  • Hardware diagnostic shows me a problem with charging the battery. What does that mean?

    Hi.
    I bought a MacBook Pro Retina 13", mid-2014, last December. After almost a month I ran the Apple hardware test as technical support suggested, and it shows a problem with charging the battery. I want to know what this is about: is it normal, is it a battery problem, or is it a problem with the power adapter? Thank you for your answers.

    Could be battery, could be adaptor, could be the computer itself.  You will have to take it in for testing.  Fortunately, you are still under warranty.
