A question about character addition.

Hi,
I have a question about character arithmetic. See the following code.
DATA: lv_entryid TYPE char3.
lv_entryid = '001'.
WRITE : / lv_entryid.
lv_entryid = lv_entryid + 1.
WRITE : / lv_entryid.
The output is:
001
2
but the result I expect is:
001
002
How can I get 002? Could you please help me?
Thanks in advance.

DATA: lv_entryid TYPE char3.
lv_entryid = '001'.
WRITE : / lv_entryid.
lv_entryid = lv_entryid + 1.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
     EXPORTING
          input  = lv_entryid
     IMPORTING
          output = lv_entryid.
WRITE : / lv_entryid.
Try this code and see. The addition converts the CHAR value to a number, so the leading zeros are lost when the result is assigned back; CONVERSION_EXIT_ALPHA_INPUT pads the value with leading zeros again to the full field length. (Declaring lv_entryid as a numeric text field, TYPE n LENGTH 3, should also keep the leading zeros automatically.)
Regards
Gopi
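For readers who don't know ABAP, the same parse, increment, and re-pad idea can be sketched in Java (an illustrative sketch only; String.format here stands in for the ALPHA conversion exit):
// Parse the text as a number, increment, then re-pad with leading zeros.
public class EntryIdDemo {
    public static void main(String[] args) {
        String entryId = "001";
        int next = Integer.parseInt(entryId) + 1; // "001" -> 1 -> 2
        entryId = String.format("%03d", next);    // re-pad: 2 -> "002"
        System.out.println(entryId);              // prints 002
    }
}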

Similar Messages

  • Question about character encoding

    Hey everyone,
    I've been trying to read about character encodings and how Java IO uses them; however, there is something I still don't understand:
    I am trying to read an HTML file encoded in UTF-8 which contains some Arabic characters.
    I used this code to save it to a file:
    URL u = new URL("http://www.fatafeat.com");    // some Arabic website
    URLConnection connection = u.openConnection();
    BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
    OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("datat.txt"));
    int c;
    while ((c = reader.read())!=-1)
    writer.write(c);
    However, the output in the file is always "??????"
    I used another version of the code
    URL u = new URL("http://www.fatafeat.com");     //some Arabic website
    URLConnection connection = u.openConnection();
    InputStream reader = connection.getInputStream();
    FileOutputStream writer = new FileOutputStream("datat.txt");
    int c;
    while ((c = reader.read()) != -1)
    writer.write(c);
    And I get in the file something like this: " ÇáæÍíÏÉ áÝä "
    I tried to open the file from a browser using a UTF-8 encoding, but still the same display. What am I missing here? Thanks
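    One thing worth checking alongside the reply below: in the first version, the OutputStreamWriter is built without a charset, so it encodes with the platform default, and any character that charset cannot represent comes out as "?". A minimal corrected sketch (assuming the page really is valid UTF-8) names the charset on both ends:
    import java.io.*;
    import java.net.URL;
    import java.net.URLConnection;
    public class Utf8Copy {
        public static void main(String[] args) throws IOException {
            URL u = new URL("http://www.fatafeat.com");
            URLConnection connection = u.openConnection();
            // Decode the page as UTF-8 and encode the file as UTF-8,
            // so the platform default charset is never involved.
            try (BufferedReader reader = new BufferedReader(
                     new InputStreamReader(connection.getInputStream(), "UTF-8"));
                 Writer writer = new OutputStreamWriter(
                     new FileOutputStream("datat.txt"), "UTF-8")) {
                int c;
                while ((c = reader.read()) != -1)
                    writer.write(c);
            }
        }
    }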

    Upon further investigation, my initial hunch appears to be correct. The page in question is not correctly encoded UTF-8, so the Java decoder fails. By default, the Java decoder fails silently and replaces malformed input with the character "\uFFFD". If you want more control over the decoding process, you need to read the data as bytes and use the CharsetDecoder class to convert them to characters. Here is a small example to illustrate:
    import java.io.ByteArrayOutputStream;
    import java.io.CharArrayWriter;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CoderResult;
    import java.nio.charset.CodingErrorAction;
    public class FatafeatToy3 {
        private static final Charset UTF8 = Charset.forName("UTF-8");
        private static final Charset DEFAULT_DECODER_CHARSET = UTF8;
        private final CharsetDecoder decoder;
        public FatafeatToy3(final Charset cs) {
            this.decoder = cs.newDecoder();
            this.decoder.onMalformedInput(CodingErrorAction.REPORT);
            this.decoder.onUnmappableCharacter(CodingErrorAction.REPORT);
        }
        public FatafeatToy3() {
            this(DEFAULT_DECODER_CHARSET);
        }
        public ByteBuffer pageSlurp(URL url) throws IOException {
            ByteArrayOutputStream pageBytes = new ByteArrayOutputStream();
            InputStream is = url.openStream();
            int ch;
            while ((ch = is.read()) != -1)
                pageBytes.write(ch);
            is.close();
            return ByteBuffer.wrap(pageBytes.toByteArray());
        }
        public CoderResult decodeSome(ByteBuffer in, CharBuffer out) {
            decoder.reset();
            CoderResult result = decoder.decode(in, out, true);
            if (result.isMalformed())
                System.err.printf("Malformed input detected at pos 0x%x%n", in.position());
            else if (result.isUnmappable())
                System.err.printf("Unmappable input detected at pos 0x%x%n", in.position());
            else if (result.isUnderflow())
                result = decoder.flush(out);
            return result;
        }
        public static void main(String[] args) throws Exception {
            FatafeatToy3 ft = new FatafeatToy3();
            ByteBuffer in = ft.pageSlurp(new URL("http://www.fatafeat.com"));
            System.out.printf("Page slurped contains %d bytes%n", in.capacity());
            CharBuffer out = CharBuffer.allocate(1); // one character at a time
            CharArrayWriter pageChars = new CharArrayWriter();
            CoderResult result = CoderResult.UNDERFLOW;
            while ((!result.isError()) && in.remaining() > 0) {
                result = ft.decodeSome(in, out);
                if (result.equals(CoderResult.OVERFLOW)) {
                    out.flip();
                    pageChars.append(out);
                    out.clear();
                }
            }
        }
    }
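    (A small note on the example, not from the original reply: as written, main() collects the decoded characters in pageChars but never prints them; adding System.out.println(pageChars.toString()) at the end of main() makes whatever was decoded before the first error visible.)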

  • Question about OTL additional hour pay if employee didn't submit meal break

    I have a question
    Is OTL capable of handling the standard rule where the employer is required to pay an employee 1 hour of additional pay if they did not get their 30-minute meal period after working 6 hours? In those rare cases where employees are not doing what they are supposed to do, it should be an automatic process.

    Dharma,
    I guess you are talking about California Meal penalty rule. Well, it is not supported in OTL. On the other hand, Oracle has the hour deduction policy setup which actually deducts hours for a given duration. I haven't tested this, but just try using that feature with a negative deduction.
    --Shiv

  • Question about 25 additional points

    Hi all,
      I hope this is the right place for this question...
    Today I've found a nice gift from SDN: 25 additional points with the comment: "Joined the SDN World"
    I'm wondering what I've done that is special these days (since I joined SDN some years ago)...
    Maybe it's a mistake...
    Did it happen to anyone else?
    Have a nice day,
    Manuel

    hi Manuel,
    It's not for joining SDN but for joining SDN World.
    Check this out.
    http://sdn.idizaai.be/sdn_world/sdn_world.html
    /people/eddy.declercq/blog/2006/05/24/we-are-the-world
    Good day
    ~Ak

  • W530, questions about adding additional RAM

    My current setup came with the factory-optional 2x 8GB Samsung DDR3 1600MHz RAM. I recently purchased a G.Skill Ripjaws 8GB stick (1.35V model). My computer recognizes that the RAM is installed; however, it reports the new RAM as unusable. I have the i7-3840QM 3rd Gen processor.
    Is the lower voltage of the new RAM the culprit? Where else should I begin looking so that my machine can use the full 24GB now installed?
    -Thanks
    -Concerned
    W530 i7-3840Qm. Quad K1000M. 16GB RAM. 16GB mSSD(expresscache). 500gb HDD

    I never buy memory that isn't guaranteed compatible with the machine by the vendor, which is why I always buy memory from Crucial.
    I also don't buy single sticks to upgrade the total memory, but rather buy them in pairs (i.e. "kits") to guarantee dual-channel performance with identical matching-size sticks in each pair of DIMM sockets.
    The Crucial site shows all 1.35v memory for the W530, so that's fine.  Memory latencies available from CL=9 to CL=11.
    Of course, mixing memory latencies will just force the machine to run at the SLOWEST speed, which seems undesirable; that again is another reason you should always match new memory latency with your older memory latency, or instead opt to upgrade all of your memory to newer, faster "kits" of guaranteed matching identical-performance pairs.
    The Crucial site shows both PC3-12800 and PC3-14900 memory as compatible with the quad-core W530, ranging from CL=9 to CL=10 to CL=11.  So you have your choice, and price range.
    If your new single 8GB stick of memory has different latency characteristics from your existing 2x8GB factory memory, I guess the results should really just be that things run at the slower speed.  But to actually not even see the new single 8GB stick, well I don't know.
    Can you return the new memory and buy a replacement "kit" of 2 sticks (either 2x4GB or go with 2x8GB) which match the performance specs of your existing memory?

  • Question About Adding Ram from other laptop to T-series Thinkpad

    Hi,
    I have a question about adding additional RAM. Right now I have a 4GB RAM stick in the T510. I have an Acer laptop which I do not need, and there is 4GB of RAM onboard (maybe two 2GB sticks) in that older laptop. Would it be possible to add one 2GB stick (or 4GB; I need to check whether it is two sticks or one) from the Acer to the Lenovo ThinkPad, bringing the total RAM in the ThinkPad above 4GB?
    Thanks in advance

    Hi Richk,
    Yes, I am using a 64-bit operating system. I am running Windows 7. And as for reported incompatibilities... technically, taking a RAM stick from another laptop and placing it in this laptop should be the same as purchasing a RAM stick from eBay or somewhere and putting it into the laptop, right?

  • Have questions about your Creative Cloud or Subscription Membership?

    You can find answers to several questions regarding membership to our subscription services. Please see Membership troubleshooting | Creative Cloud - http://helpx.adobe.com/x-productkb/policy-pricing/membership-subscription-troubleshooting-creative-cloud.html for additional information. You can find information on such topics as:
    I need help completing my new purchase or upgrade.
    I want to change the credit card on my account.
    I have a question about my membership price or statement charges.
    I want to change my membership: upgrade, renew, or restart.
    I want to cancel my membership.
    How do I access my account information or change update notifications?

    Branching to new discussion.
    Christym16625842 you are welcome to utilize the process listed in Creative Cloud Help | Install, update, or uninstall apps to install and evaluate the applications included with a Creative Cloud Membership.  The software is fully supported on recent Mac computers.  You can find the system requirements for the Creative Cloud at System requirements | Creative Cloud.

  • A lot of questions about my MacBook Air

    I am really new to using Apple computers again. The last time I used an Apple computer was back in 1987, when the school and my family had Apple IIGS computers. I have been using PCs, which require Microsoft. I have a lot of questions (10 questions) about my MacBook Air and I hope you good people can and will help me.
    Product: MacBook Air
    Operating System: Mac OS X Version 10.7.4
    1) I Downloaded MacKeeper because I was fooled. I had a bad feeling just before I Downloaded it and I should have listened to my heart. However, I didn't buy it or fully Install it. It was like a test run and then they wanted me to pay almost $100 for it. Thankfully, I didn't because I read it is Malware. I spoke with an Apple Tech at Apple Care and he helped me get rid of it (or so we think). I don't see it anymore on my computer. I read it can slow down your computer. How can you tell if it's really off of the computer?
    2) When I open "Finder" and I see that there are people Sharing my computer with me. I went into AirDrop and it reads, "Other people can see your Mac as (my name) MacBook Air when their computer is nearby." I bought a HotSpot and while it's turned on and I selected it as my WI-FI connection I thought it would  get rid of these people, protect what I type, me, my items, computer, etc. But it didn't.  
    I didn't know that I have to buy an external CD and/or DVD player in order to set up the brand new modem-and-router in one by NetGear. I am so used to PCs and the CD/DVD players being built inside.
    The people at Apple Store told me that there is an internal modem inside, but I don't know how to find it and what to do then.  Should I use a Firewall?, An AntiVirus, AntiMalware, AntiSpyware, etc. Apple Care tech told me I don't need to get an AntiVirus.
    3) Is there a new kind of Wireless Modem and Router that doesn't require a CD-ROM?
    4) When I travel or fly and I am not close to home I was told by Best Buy and Sprint that I had to buy a mobile HotSpot to use the computer (WI-FI) safely. As I typed, I have one. But it's pretty expensive and only gives me 1 hour and 15 minutes per day to Stream. What can I do to use this computer safely Online when I am out of range from a Modem and Router? What do people do when they travel on airplanes?  
    5) This computer won't let me use "RAID." I think you have to have a newer version. I heard about RAID on the radio from Leo (can't recall his last name), who's a tech expert.
    6) Should I buy a ZipDrive? The Apple Store tech told me that I didn't need a ZipDrive. I just remember the episode of HBO's "Sex and The City" when Carrie loses everything because her computer crashed. Now, of course, I know that's a fictional show, but with PCs and Microsoft I have lost everything when it crashed, froze up, etc. I know there's iCloud. I heard about Carbonite, but I have read the pros and cons about it. Mostly they are cons. I just don't want to do anything wrong and mess up this computer.
    7) Should I buy a new Printer/Copier/Scanner because mine is an HP. It's not new, but it works. I even have a CD-ROM for Macs. What about the new product called, "Neat"?
    8) Is there a special product that I should buy to do Online Banking and/or other important stuff?
    9) I saw and read about iWork in the App Store and it sounds cool. I still have a lot of friends and colleagues who still use Microsoft. Is iWork good to use? Should I download it from the Apple App Store or can I buy it at Apple? Is there another word-processing program that is great and user friendly and will work with Macs and PCs?
    10) Should I update the OS to OS X Mountain Lion from the Apple App Store or buy it at the Apple Store?
    In advance, I wish to thank you all in these Apple Support Communities for your help. Have a safe and happy holiday weekend!

    1) Here are instructions for removing MacKeeper. Since they mostly consist of manually looking for folders and specific files, if you follow the instructions you will either fail to find what you are told to find (because your AppleCare guide gave you complete instructions which you followed) or you'll find some additional files that need to be removed.
    2 & 3) I assume you are looking at the sidebar of a Finder window and seeing Shared and computers under it. Those are computers that you could potentially share files with. To do so you'd need an account on their computer and a password. They are not sharing your computer.
    AirDrop allows you to create an ad hoc network for file sharing, and it only functions when you have selected the AirDrop item in the sidebar. Actually doing that merely announces to computers in the same network node that your computer is available for a file to be sent to. Even then you have to explicitly allow the file to be downloaded to your computer. Similarly, you'd be able to see other computers with AirDrop selected and be able to send them a file - which they'd have to accept.
    The only reason your NetGear Router comes with a CD is to install and run their 'easy' step by step configuration program. It can also be done manually with a browser. Read the manual to find the IP address you must enter to access the router's configuration menu. Apple's WiFi routers don't require a CD to install the software because the configuration software is already on your computer.
    I do have my firewall turned on. AntiVirus software isn't a bad idea - I use Sophos having tested it for a review for our local User Group and I found I liked it better than ClamAVx which is what I'd been using before. Both are free.
    4) I think you were scammed by Sprint and BestBuy. I use hotel, coffee shop, and restaurant WiFi spots and have for years. However, because they can be unsecured, I do not shop online or bank when I'm using them. I also use 1Password and don't reuse passwords so even if a sniffer should grab an account and password that's all it would get - one account.
    5) RAID doesn't really make sense with a MacBook Air - a RAID involves 2 or more disks being used as if they were one.
    6) Zip drive? No. External hard drive - yes. It isn't a question of if a computer's hard drive will malfunction, it is when. OWC has a nice selection of external drives, and the Mac has a built-in backup system called Time Machine. Due to the way Time Machine works, I've found that your Time Machine drive should be at least twice as large - and preferably 3-4 times as large - as the data you are backing up.
    7) if your printer works and it has Mountain Lion drivers, why replace it?
    8) Online banking is done with a browser - use Safari or Firefox.
    9) If trading files with Windows users is important, Office for Mac is your best bet. If not, iWork, Office for Mac, or LibreOffice are all good possibilities.
    10) You can only buy Mountain Lion via the App Store.

  • Questions about Indexing and Using an Indexing POA

    Although I have only about 50 users, at least 15 of them have in excess of 100,000 messages in their accounts and the POA (version 7.0.2) is regularly slowing to a crawl. (I just know that plans for revolution are fomenting!) I have embarked on a campaign to reduce these accounts by archiving everything off to get mail accounts down to 3000 or fewer pieces. I have achieved user buy-in, but have worked on only a few users so far.
    In another closely related thread, it was suggested to me that the PO speed issues relate to broken indexes. And I suspect that given so many messages, the indexes were never getting fully rebuilt with the default QF POA settings. I am trying to fix that situation in addition to reducing mail account sizes. So, I have set up a second POA on another server and dedicated it to the indexing task. The /qfinterval is set for 1 hour, other /qf switches at default. The POA-QF does no mail delivery, but it does do nightly user upkeep.
    The POA-QF seems to be steadily working away and making progress at reducing the number of unindexed messages. However, I have questions about what I am seeing and what more I can do:
    1. Is the progress I am seeing real progress? For example I have a user with over 100,000 messages to be indexed and every time I check the logs, the count drops by about 500 messages per hourly QF cycle. I assume that if I just let it keep running, it will eventually get caught up and fixed. Not only with this user, but with all the others as well. Will my patience (and theirs) be rewarded? Are there any gotchas I need to prepare for?
    2. One user has recently had virtually all of her messages successfully moved to archive. I can see them in the Archive, and do not see them in the online account. However, now over a week later, QF still shows >130,000 items still left to index for that user. The POA-QF is making slow, steady progress reducing that number, but why is this user's QF count still so high? Does it just need more time, or is there something amiss for this user?
    3. I may want to rebuild indexes for single users from scratch. I have seen the TID 3105742 which tells how to do this: Essentially you turn off mail delivery functions, and make some other switch changes to dedicate the POA to indexing for just a single user, and then you let the POA rebuild the indexes. The implication of that scenario is that the POA is now enjoying exclusive access to the user's databases.
    If I want to use my secondary POA-QF to rebuild a user's index from scratch, does the main POA have to be offline and the user out of GroupWise? That is, does the QF process require exclusive access in order to rebuild indexes from scratch?
    Thanks for any thoughts or suggestions.
    Peter Smick

    pgsmick wrote:
    > 1. Is the progress I am seeing real progress? For example I have a user with
    > over 100,000 messages to be indexed and every time I check the logs, the count
    > drops by about 500 messages per hourly QF cycle. I assume that if I just let
    > it keep running, it will eventually get caught up and fixed. Not only with
    > this user, but with all the others as well. Will my patience (and theirs) be
    > rewarded? Are there any gotchas I need to prepare for?
    Set this switch for this indexing POA - /qflevel=999 - this will index
    everything in one run. It will take a long time, but with no qflevel switch you
    are indeed only indexing 500 messages at a time, and if the user has that much
    mail, it might never really catch up.
    >
    > 2. One user has recently had virtually all of her messages successfully moved
    > to archive. I can see them in the Archive, and do not see them in the online
    > account. However, now over a week later, QF still shows >130,000 items still
    > left to index for that user. The POA-QF is making slow, steady progress
    > reducing that number, but why is this user's QF count still so high? Does it
    > just need more time, or is there something amiss for this user?
    >
    This is odd, because really the index count should drop to nothing, but with the
    above switch this might get resolved as well.
    > 3. I may want to rebuild indexes for single users from scratch. I have seen
    > the TID 3105742 which tells how to do this: Essentially you turn off mail
    > delivery functions, and make some other switch changes to dedicate the POA to
    > indexing for just a single user, and then you let the POA rebuild the indexes.
    > The implication of that scenario is that the POA is now enjoying exclusive
    > access to the user's databases.
    Not really - the POA is not enjoying exclusive access to the user's database,
    the indexer is just avoiding an attempt to index anything else.
    > If I want to use my secondary POA-QF to rebuild a user's index from scratch,
    > does the main POA have to be offline and the user out of GWise? That is, Does
    > the QF process require exclusive access in order to rebuild indexes from
    > scratch?
    No - QF never requires exclusive access. That said, you may find that an
    extremely vigorous QF can cause slowdowns for the user.
    Danita
    Novell Knowledge Partner
    Moving GroupWise to Linux?
    http://www.caledonia.net/gwmove.html

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8 because, as the standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character – in their text editor, using the codepage for their region, they insert a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
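    To make the ß example concrete, here is a small sketch (mine, not the author's) of how the same character is one byte in a legacy codepage but a two-byte sequence in UTF-8, and how decoding the legacy byte as UTF-8 fails:
    import java.nio.charset.StandardCharsets;
    public class EszettDemo {
        public static void main(String[] args) {
            String s = "ß";
            // One byte (0xDF) in the Latin-1 codepage...
            System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 1
            // ...but a two-byte sequence (0xC3 0x9F) in UTF-8.
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 2
            // Decoding the Latin-1 byte as UTF-8 yields the replacement character \uFFFD.
            String wrong = new String(s.getBytes(StandardCharsets.ISO_8859_1),
                                      StandardCharsets.UTF_8);
            System.out.println(wrong); // not "ß"
        }
    }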
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
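    A minimal sketch of Point 3 in Java (my example, not the author's; the file names are made up), naming the charset explicitly on both the read and the write:
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    public class ExplicitEncodingDemo {
        public static void main(String[] args) throws Exception {
            // Read and write with a named charset - never the platform default.
            List<String> lines = Files.readAllLines(Paths.get("in.txt"), StandardCharsets.UTF_8);
            Files.write(Paths.get("out.txt"), lines, StandardCharsets.UTF_8);
        }
    }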
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It can also add the byte-order preamble to the file.)
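    As a sketch of Point 4 (again my example, not the author's): letting an XML writer own the encoding means the declaration and the actual bytes cannot drift apart:
    import java.io.FileOutputStream;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;
    public class XmlEncoderDemo {
        public static void main(String[] args) throws Exception {
            try (FileOutputStream out = new FileOutputStream("demo.xml")) {
                XMLStreamWriter xml = XMLOutputFactory.newInstance()
                        .createXMLStreamWriter(out, "UTF-8");
                // The writer both encodes the bytes as UTF-8 and records
                // encoding="UTF-8" in the declaration, so the two always agree.
                xml.writeStartDocument("UTF-8", "1.0");
                xml.writeStartElement("greeting");
                xml.writeCharacters("ß");
                xml.writeEndElement();
                xml.writeEndDocument();
                xml.close();
            }
        }
    }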
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big-iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the first 128 characters of UTF-8 anyway.
    >
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8 because, as the standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as a double-byte sequence, giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) you can write the equivalent of every unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of unicode, all unicode, is based on ASCII. The representational format of UTF8 is required to implement unicode, thus it must represent those characters. It uses the idiom supported by variable width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character – in their text editor, using the codepage for their region, they insert a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte – an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files, where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
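    A classic illustration of that point (my sketch, not from the post): because \uXXXX escapes are translated before the compiler tokenizes the source, an escape that expands to a quote breaks the surrounding syntax:
    public class EscapeDemo {
        public static void main(String[] args) {
            // Fine: \u0041 is translated to 'A' before parsing.
            String ok = "\u0041"; // same as "A"
            System.out.println(ok);
            // Would NOT compile if uncommented: \u0022 becomes a quote character
            // during pre-processing, ending the string literal early:
            // String bad = "\u0022";
        }
    }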
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point for someone who is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: with high-volume systems, moving/storing bytes is relevant. As such, one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems, incremental savings impact operating costs, and speed brings a marketing advantage.

  • Questions about Contracts and New Phones

    I've had some questions about how upgrading and adding new lines to a contract works and I haven't been able to find the answers through Google. First some background information: We started this most recent contract in September of 2011 and as such the current contract will end September of 2013. According to the phone information portal, all three of the devices on my account will be eligible for an upgrade this Saturday (May 4th, 2013). My daughters birthday just so happens to coincide with that date and as such we were planning on surprising her by taking her to buy a new phone and we are going to allow our son to upgrade his phone as he has been asking to for a while.
    Is there any way of locking/limiting the amount of data allowed for each phone on a shared plan? I'm worried that my daughter will go way over our limit and not pay attention, to the point where she will end up costing us a very large amount of money.
    Does adding a new line require the start of a new contract?
    My son was interested in purchasing the "Samsung Galaxy S4" and has noticed that pre-orders for that phone are currently available. Is it possible to pre-order the phone on that date using the upgrade so that the price will be reduced?
    If the answer to number 2 is a no, does adding the new line take away the upgrades from other phones or would he be able to go to the store past the release date and upgrade then?
    Thanks for taking the time to read through all of this. Any help that I receive will be extremely appreciated.

    1. Yes. There is an additional fee per line you wish to do this for. When you add the line you can choose this option and set the data amount it is allowed to use.
    2. Yes. Each line is a separate contract with its own expiration date.
    3. The contract starts when you sign up for it. The plan amount will be pro-rated if it does not start until part-way through your billing cycle, if that is what you are asking. Doing the pre-order just guarantees you the phone in case they sell out.
    4. No it will not take any upgrades away as each line has its own termination date. You can move upgrades around between lines though.

  • Questions about buying a new Mac Pro for 4k video editing.

    Hi everyone,
    I'm currently looking into buying a new mac pro and I have a few questions. I'm a filmmaker/freelance editor looking to get a system that can handle any/at least most 4k formats that I might throw at it, and will hopefully last me around 7 years or so, like my last mac pro has. I've saved up about $5,300 and am becoming more obsessed with getting it asap, but am willing to wait a bit and save up more if necessary. I also play the occasional elder scrolls or civilization game, and might run windows on the new system as well. So here are my questions:
    1. I've read rumors that a newer build could be released this year, with newer processors and graphics cards. Is there anything pointing to when? I tend to buy things a month before a newer version is released, and I'd like to prevent myself from doing that this time around.
    Here's the Build I'm looking at:
    6 core
    2 D700s
    base RAM, to be upgraded myself to 32GB (2x16GB sticks, leaving 2 slots empty to expand to 64GB later)
    512GB-1TB internal HD
    2. Should I be considering the 8 core? I'm not too excited about the additional $1500, but I want a system that will last.
    3. Is getting the two 16GB sticks of RAM and leaving two slots empty a bad idea?
    4. I currently work with FCP studio 2 and love it. Not sure whether to go with FCP X, or adobe. Any thoughts?
    5. I'm not finding many deals for cheaper RAM and hard drives. OWC's prices seem to be comparable to Apple's. I want to do the RAM so I have room to upgrade to 64GB later, but are there any hard drives out there that would make it worth upgrading myself?
    I appreciate any insights you might have. I plan on getting a decent raid and 4k monitor in the next year or so, but for now just want a base system that will keep me editing and will be ready for 4k when I take that next step.

    The late 2013 Mac Pro uses Intel Xeon ECC processors (error correction), and as far as I know Intel has not announced any newer Intel Xeon processors than those in the late 2013 Mac Pro. I would not expect to see an update to the 2013 Mac Pro until the end of 2015 at the earliest, and probably later than that.
    If time is not an issue, then you should be quite happy with the 6 core 2013 Mac Pro.  It will do an excellent job with 4K video footage. And, yes, I would suggest getting the best raid system you can afford.  That is actually more important than processor speed since I/O is frequently the bottleneck when doing multi camera video or 4K video.
    I have the latest version of Adobe's Premiere Pro CC 2014 installed on my late 2013 Mac Pro and I have used it a bit without problems. However, I find it much, much slower to edit with than FCP X. Also be advised that if you Google, you will find several individuals on the Adobe forums who purchased the late 2013 Mac Pro and have not been able to use it with Premiere Pro CC because of either a hardware incompatibility or software issues between Premiere Pro CC and BMD's Resolve. It is quite possible that I have not experienced these problems because I have not made very demanding projects with Premiere Pro CC on my 2013 Mac Pro.
    I strongly recommend FCP X.  Apple released FCP X before it was ready, and many early users were unwilling to take the time to learn how to use this very different NLE which is not track based.  Apple has over the last 3 years since FCP X was released, issued more than 10 updates (all free), and the program is stable and blazingly fast.  I urge you to check out the FCP X training offered by Ripple Training and/or Larry Jordan. Both are inexpensive, and worth every cent.  Watch their training videos and you will be up to speed in FCP X in no time at all, and you will wish you had switched a long time ago.
    If you can afford the 1 TB of PCie internal flash storage on your Mac Pro, then by all means get it.  For me 1TB is well worth the cost.
    As far as editing 4K video, the format of the video will be important to the ease of editing. For example, I am able to edit several streams of 4K video from my Sony FDR-AX100 in its native format (XAVC S) with no problems. If I were editing Sony's XAVC format used in their professional 4K cameras, that might pose a problem that would require transcoding. Similarly for other 4K formats. XAVC S is an easy format to edit natively because it is essentially a high-bit-rate h.264 format.
    Best of luck on whatever you decide to do, and happy editing.
    Tom

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the roles of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which has those as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx) he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is this: they have a 2-3k base of XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server/client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives the request from the client, it then needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses for the same request. If it does, it can then send that response to the client. If there is no cached response, the OCSP Responder then checks to see if it has the CRL issued by the CA cached locally on the OCSP. If it does, it can check the revocation status locally, and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP does not have the CRL cached locally, the OCSP Responder can retrieve the CRL from the CDP locations listed in the certificate. The OCSP Responder then can parse the CRL to determine the revocation status, and send the appropriate response to the client.

  • Question about the Documentat​ion Tags for Source Code

    Hello,
    I have a question about CVI's automatic source code documentation. My problem is that it seems like you need to write all documentation for a specific tag on one line. If you don't, a line break will be inserted when the documentation is displayed. Suppose I want to write a large amount of documentation for the function itself, using the HIFN tag. If I don't want line breaks to be forced into the documentation, I need to write all this documentation on one single line, which kind of messes up my code. If I split the documentation over several HIFN tags, the documentation displayed to the user might look messed up because of all the line breaks. Is there any escape character I can put at the end of a line, allowing me to split the documentation over several HIFN lines without forcing line breaks in the documentation?
    Thanks!
    GEMIDIS - Innovating Display Technology
    HQ Ghent, Belgium

    This information is certainly useful. Note, however, that it can also be found in the documentation:
    Tag: /// HIFN help text
    Description: Specifies the help text for the function. Use multiple /// HIFN tags to display help text for the function on separate lines. To separate help text with an empty line, use /// HIFN on a line by itself. You also can use HTML tags, but you must enclose the tags in <HTML><BODY></BODY></HTML> tags.
    Example
    /// HIFN SampleFunction returns the value of a control.
    int SampleFunction (int controlID, ctrlType controlType, char label[], double *value)
    {
         SomeAction;
    }

  • Few questions about mac pro before buying

    Hello all!
    I may be buying a mac pro in a month or so and have a few questions about it.
    1. ) Anyone know when Apple may release the next Mac Pro? I don't want to buy something that'll be significantly obsolete in a few days.
    2. ) Is it possible to use SATA optical drives internally? I have a SATA blu ray drive I'd like to take from my PC, if that's possible.
    3. ) Is it possible to get a graphics cards better than the 8800 GT for video games? (For use in OS X)
    4. ) Finally, is there a significant performance difference between Apple's seemingly super-expensive RAM and 3rd-party RAM? (Other than the heat sinks)
    Thanks for any replies.

    1. We have no idea when Apple will release any product until it's released. This is a user-to-user site so we have no more information about future products than you do.
    2. Yes, it's possible. There are two additional SATA ports on the motherboard.
    3. The only graphics cards available are those provided by Apple when you order the computer, as well as the non-Apple-supplied ATI HD 3870 Mac/PC card. The 8800 and 3870 have close performance specs. You can find some benchmarks at Bare Feats.
    4. There's no difference between the RAM Apple installs and what you can purchase on the open market. As long as the RAM meets the required specifications and has the requisite heat sink it will work just fine.
