What an EJB knows about the client

Hi,
What information about the client application can I get inside an EJB? For example, the thread name of the client that called the EJB.
A server-specific API would be fine; I need it for WebSphere and WebLogic.
What do I need it for? We have several applications calling each other via EJB, and I'd like to trace transactions across the applications. For example, a user calls web server W, W calls an EJB on server A, and server A calls an EJB on server B. I'd like to identify and trace this call on W, A, and B. Of course we could add some new ID, etc., but I'd like to do it without adding new fields if possible.
The client's principal will not help, since server A makes its own login to B rather than logging in per user.
Thanks in advance.
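No reply in this thread shows code, but as a minimal sketch of what the standard EJB API exposes about the caller (the bean and method names here are made up), something like this is the starting point. Note that the bean runs on a container-managed thread, so Thread.currentThread() names the server's dispatch thread, not the client's thread:

    import java.security.Principal;
    import javax.annotation.Resource;
    import javax.ejb.SessionContext;
    import javax.ejb.Stateless;

    // Minimal sketch: what a standard session bean can learn about its caller.
    @Stateless
    public class TraceProbeBean { // hypothetical name

        @Resource
        private SessionContext ctx;

        public String describeCaller() {
            Principal caller = ctx.getCallerPrincipal();            // authenticated identity
            String serverThread = Thread.currentThread().getName(); // server thread, not the client's
            return "caller=" + caller.getName() + ", serverThread=" + serverThread;
        }
    }

Anything beyond this, such as correlating one logical call across W, A, and B without adding fields, needs vendor facilities; WebSphere, for instance, ships a Work Area service intended for propagating that kind of context, so the vendor documentation is the place to check.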

Double post: http://forums.sun.com/thread.jspa?threadID=5433084

Similar Messages

  • How does the EJB know about an authenticated user?

    Hi
    perhaps it is a dumb question, but I didn't find any explicit answer.
    I'd like to know how an EJB can tell whether a user belongs to a role.
    For example, suppose I access my EJB from a servlet, and this servlet is protected
    (access allowed only for the group "customer"),
    and in my bean's deployment descriptor I have protected one method with the security
    role "customer".
    What I am wondering is: when the user is authenticated in the servlet as a customer
    and then tries to access the method, how does the EJB know that this user is in the
    group "customer"?
    Is this information included in the HTTP session, in the InitialContext created
    in the servlet, or somewhere else?
    thanks for your help
    romain

    Romain - I think the answer to your question is that the information
    identifying the user is passed in via the initial context parameters. WebLogic
    uses this to propagate the security context from the servlet container to
    the EJB container.
    cheers,
    Markus
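    Neither post shows code, so here is a rough sketch of the client half Markus describes (the factory class and URL are the usual WebLogic values; the user, password, and JNDI name are made up):

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        // Credentials handed to the InitialContext; WebLogic associates the
        // authenticated subject with the calling thread and propagates it on
        // subsequent EJB invocations.
        public class SecureLookupSketch {
            public static void main(String[] args) throws NamingException {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");
                env.put(Context.SECURITY_PRINCIPAL, "someCustomer");   // made-up user
                env.put(Context.SECURITY_CREDENTIALS, "somePassword"); // made-up password
                Context ctx = new InitialContext(env);
                Object home = ctx.lookup("MyProtectedBeanHome"); // hypothetical JNDI name
                // ... narrow the home, create the bean, call the protected method ...
            }
        }

    On the bean side, the container answers role questions from that propagated identity: sessionContext.isCallerInRole("customer") returns true for this caller.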
    "romain" <[email protected]> wrote in message
    news:3b0e8b23$[email protected]..
    >
    Hi
    perhaps it is a dummy question but I didn't find any explicit answer.
    I'd like to know how the ejb can know if a user belongs to a role.
    For example if I want to access my ejb with a servlet, and this servlet isprotected
    (access allowed only for group customer)
    and in my bean's deployment descriptor I have protected one method withthe security
    role customer)
    What I am wondering is when the user is authenticated in the servlet as acustomer
    and then try to access the method how the ejb knows that this user is inthe group
    customer??
    Is this information included in the http session or in the initialcontextcreated
    in the servlet or somewhere else??
    thanks for your help
    romain

  • Making iTunes forget what it knows about a CD

    How do I make iTunes (for Mac) forget that it has ever seen a CD and everything it knows about it?
    I put a CD into my iMac and iTunes popped up a window showing me the name of the CD three times, with slight variations, and I just took the default. I believe that was a mistake and I would like to make that choice over again. But I can't because iTunes remembers it has previously seen that CD. For example, it remembers the names of all the tracks. I would like it to forget all the track names and everything else it knows about that CD. How do I do that?

    Thanks. I did not want to have iTunes forget all my CDs, just one of them, but your answer solved my problem. I temporarily renamed ~/Library/Preferences/CD Info.cidb and that was sufficient to make iTunes requery the database for the track names. That is what I wanted it to do, because that gave me the chance to select which online entry it read. It turned out that my original choice was okay after all; so I just put the CD Info.cidb file back.
    Thanks again.

  • What to know about Sterling Gentran Integration Suite (GIS)?

    Hi All,
    My upcoming client is using SAP XI 3.0 together with the GIS tool, and now they want to upgrade to SAP PI 7.1.
    I need your help getting to know the GIS tool. Has anyone here worked with GIS? Please share your experience and knowledge, and any links or documents on it.
    Thanks in advance for your help.
    Regards,
    Chinna

    I do not think you need to know more than the basics of Gentran, as you will have to upgrade PI, not Gentran.
    Just check the Gentran homepage for an overview:
    http://www.sterlingcommerce.com/products/b2b-integration/sterling-integrator/
    or search Google for more information.

  • What we know about Blu-Ray

    I have done quite a bit of research regarding Blu-Ray on Apple (or lack thereof) and here is what I have uncovered so far.
    What I know:
    The new BD+ encryption (used with commercial Blu-ray movies) uses a new form of DRM which requires both a hardware and a software decryption key to play movies (so only authorized Blu-ray drives and playback software can be used to play back a Blu-ray movie). LaCie's d2 Blu-ray drive is authorized to play Blu-ray movies but, so far, due to lack of support from Apple, the only way we can watch them is via Boot Camp. Furthermore, Slysoft's AnyDVD HD will decrypt a Blu-ray movie and even allow it to play on ACD graphics cards and monitors, but this is also via Boot Camp since the software is Windows-only. Therefore, it is currently possible to rip Blu-ray movies in Windows and transfer them to the Mac OS. It is also possible to burn Blu-ray (data) discs on the Mac since the LaCie drive is cross-platform compatible.
    What I would like to know:
    Theoretically, when Apple finally does start supporting BD+ movies, the LaCie drive will have a leg up since it already has the hardware keys installed, but this is pure speculation. There are two other drives I know of that are offered for the Mac Pro: one is sold by Fastmac and the other by MCE Technologies. These are both sold as 'exclusively' Mac drives.
    First question: has anyone been able to find out whether either of these drives is recognized in Windows (via Boot Camp)?
    Second, has anyone determined whether they are capable of playing Blu-ray movies (i.e., do they contain the BD+ hardware keys)?
    I contacted both Fastmac and MCE to ask these questions, but Fastmac did not reply and MCE sent a confused reply, not knowing what I meant by 'BD+ encryption key', which leads me to believe they don't have them. But maybe their marketing department just doesn't like to talk to the engineers.
    Has anyone had any experience with these devices or uncovered information I have missed?

    Well, yes. All three of the aforementioned drives will work with a Mac Pro. The LaCie drive is external, so it can be used with numerous other Macs as well, but the MCE and Fastmac drives plug into the Mac Pro's optical bay. As far as your HD camera is concerned, you simply import your video from the camera and Final Cut it (or whatever else you do to it), then use the newest version of Toast, which supports Blu-ray, to burn your finished content. That will work with no issues. My concerns are focused more on which drives will work cross-platform (LaCie being the only confirmed drive so far) and which will be able to play encrypted Blu-ray movies, with the LaCie, again, being the only confirmed Mac-compatible BD+ player, even though it cannot yet play encrypted Blu-ray movies on the Mac.
    However, your movies, if you decided to go Blu-ray, would not be encrypted and therefore would have no issues playing on a Mac. So long as you are not concerned about your Blu-ray drive working with Windows, you can go with any of them and be just fine.

  • What do you know about this?

    Well, first of all, hi. OK... I'm interested in buying the iPod nano, but I have recently read that it scratches easily and that the battery life is too short (I mean the period of time that you can use it between charges).
    I would appreciate all your answers and opinions about this. Thx.

    They do scratch easily -- all iPods do. Get a case of some type. (I've ordered an Agent 18 shield for mine. Go to the Apple store for that one and others.)
    Battery life is supposed to be about 14 hours between charges, right? I never listen to mine more than a few hours at a time, so no problem for me. Your mileage may vary.

  • Eresa Napper Here's what we know about your issue so far. It relates to:

    I can't find my Adobe Photoshop and Elements since they changed my operating system from Windows 7 to 8. Can someone help?

    Photoshop downloads are in this area.
    http://helpx.adobe.com/x-productkb/policy-pricing/cs6-product-downloads.html
    You would have to ask in the Photoshop Elements forum for download locations.
    Photoshop Elements
    Gene

  • Chat Your case number: 0215462990 david stock Here's what we know about your issue so far. It relates to: Lightroom change Adobe ID and signing in change Thank you for your patience.  While you wait, you can try our community forums where experts are avai

    Can't download LR5 updates... This is the 'help' I've received so far...

    To update to 5.7.1, go to the LR5 menu and click Help >> Updates. To buy LR6, use this link, click the Buy button, and choose the upgrade price. You can download from here.

  • Know about EJB.

    Hi,
    I don't have any idea about EJB, nor how an EJB differs from an ordinary JavaBean.
    Thanks in advance.


  • What do you know about Kerio Mail?

    We were considering converting to the Kerio mail server program. Has anyone used it, or is anyone using it now? Anything we should know about?

    We are just now in the process of switching to Kerio from Tiger Server's postfix bundle. We have a large Open Directory network and we have about 20 different virtual mail domains. Apple's solution has been, well, not a joy to use and admin and we've had continued problems with SMTP queue lockups.
    What finally tipped the balance towards Kerio was our purchase of another company with 160 Exchange users. I need a server with some degree of Exchange-like functionality as well as Open Directory integration and good virtual domain support. Kerio seemed like a good solution.
    The first thing you need to know about Kerio is that Open Directory integration works very, very well - unless you need to break your directory users into separate domains in Kerio. Then it falls apart.
    What happens is that Kerio installs some schema extensions to Open Directory and stores user account data in those extensions. One of these bits of data is a flag that indicates to Kerio whether or not a user account has been activated in Kerio. What it fails to do is indicate which of Kerio's domains the user belongs to.
    The result is that all Open Directory-based users wind up being members of all of your mail domains since Kerio simply looks for activated members in the directory service.
    We worked on ways around this, including hacking OU support into Open Directory (at Kerio's suggestion - http://www.afp548.com/filemgmt_data/files/Customizing%20Open%20Directory.pdf), but nothing worked. In the end, Kerio admitted that this is a problem and escalated the ticket so that a fix should arrive in a future revision. A good result, I think, and nice to see a responsive software company.
    The result is that we now have to create local Kerio mail accounts, which is not really much of an issue since Kerio offers user templates. But what we've learned is that even without the direct Open Directory integration, we can still use Kerberos for authentication to the Open Directory accounts. This makes a big difference for us in security and user account management.
    One thing that is going to be tough as we roll Kerio out to more users this week is migrating messages and mailboxes from the Tiger Server mail system to Kerio. My contact at Kerio provided me with an unsupported perl script to move mailboxes and messages but it's not perfect. I moved a small domain last night (<10 users) and only one worked properly. The rest I had to move manually.
    My understanding is that the migration tool for Exchange users is much better and fully supported. We'll be looking to move our new batch of Exchange users in the next month or so and we'll see how well (or not) it works then.
    I've also not tested the Kerio Outlook Connector software but the ability for Kerio to masquerade as Exchange seems to work fine when syncing from Address Book.
    If you have mobile clients, look into the new 6.3 beta, as they've included over-the-air syncing with ActiveSync clients and direct push and remote wipe for Windows Mobile 2005 clients.
    MacBook Pro 2.0GHz   Mac OS X (10.4.7)  

  • A few days ago I bought a MacBook Pro in Providence. In late summer I will go back to my country, Ukraine. I would like to know about the tax on my laptop. Can I get the tax refunded, through TAX FREE or at the airport? What should I do?

    A few days ago I bought a MacBook Pro in Providence. In late summer I will go back to my country, Ukraine. I would like to know about the tax on my
    laptop. Can I get the tax refunded, through TAX FREE or at the airport? What should I do?

    You need to talk with the tax authorities in the countries you traveled to and in your home country. We are all end users like you, not Apple agents.

  • My iPhone was stolen and we actually recovered it! Is there a way I can tell what the thief might have looked at in the interim? I know about hitting the home button twice to see recently used apps, but what if they then swiped to close them? TIA

    My iPhone was stolen and we actually recovered it using the Find My iPhone app! Is there a way I can tell what the thief might have looked at in the interim? I know about hitting the home button twice to see recently used apps, but what if they then swiped to close them? I just want to know if this <bleep> was looking through my private info. TIA

    Sorry marcia,
    There is no way to tell what activity took place on your device while it was out of your hands.
    Sorry,
    GB

  • Does anyone know what to do about iCal colors changing?

    Does anyone know what to do about iCal colors changing? Ever since iOS 7 came out I have had many problems. I kind of feel like Apple has become Microsoft at twice the price. Anyway, today all my calendars (on my iPhone, iPad, iCloud, and my Mac) changed from the custom colors I had set. I have about 5 calendars.
    I was able to change them back on my Mac, my iPhone, and my iPad, but not in iCloud. Once each of the ones I changed updates/syncs, it reverts to the incorrect colors again. I have tested back and forth, and it is definitely coming from iCloud.com. The website will not let me change colors, so once it updates/syncs, all my other devices and calendars sync back to the purple in iCloud.
    I could go on and on about my frustrations with the apparent lack of testing, and the irony around Microsoft and their "bugs", but it is more important that I find a cure, as this is really annoying.
    Has anyone else seen this today?

    Don't spend a lot of time trying to change it. It's an iCloud problem that has happened before, and when it happened in August it went back to the correct colors in a few days. Just be careful when you add new items to the calendar to use the right calendar (even if the color is wrong); otherwise you'll have things out of sync when the correct colors come back.

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. That's because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A–Z without any other characters, accents, etc. – we're good to go. But the second you carry those same assumptions into an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or code pages) developed early on. But we ended up with most everyone using a standard collection of code pages where the first 127 values were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 was used for what were called DBCS (double-byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS code page.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
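    As an illustration of Point 1 (this example is mine, not the original author's; the file name is made up), here is a minimal Java sketch that both declares the encoding in the file and makes the writer actually use that same encoding:

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.nio.charset.StandardCharsets;

        // Declare the encoding *in* the file, and write with that exact charset.
        public class DeclaredEncodingWriter {
            public static void main(String[] args) throws IOException {
                try (Writer out = new OutputStreamWriter(
                        new FileOutputStream("greeting.xml"), StandardCharsets.UTF_8)) {
                    out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
                    out.write("<greeting>Grüße</greeting>\n"); // non-ASCII survives intact
                }
            }
        }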
    Now let's look at UTF-8, because, as the standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard code pages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian code pages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values to begin a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character – and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
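    A small demonstration of those variable lengths, using Java's built-in UTF-8 encoder (again my example, not the original author's):

        import java.nio.charset.StandardCharsets;

        // Prints the UTF-8 byte length of characters of increasing rarity:
        // "A" -> 1, "ß" -> 2, "€" -> 3, "𝄞" (U+1D11E) -> 4.
        public class Utf8Lengths {
            public static void main(String[] args) {
                for (String s : new String[] {"A", "ß", "€", "𝄞"}) {
                    int bytes = s.getBytes(StandardCharsets.UTF_8).length;
                    System.out.println(s + " -> " + bytes + " byte(s) in UTF-8");
                }
            }
        }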
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character like ß, which their text editor encodes using the code page for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but about files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
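    A minimal sketch of Point 3 in Java (my example; the file name and contents are made up): state the charset explicitly on both the write and the read instead of relying on the platform default:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Arrays;

        public class ExplicitCharsetIO {
            public static void main(String[] args) throws IOException {
                Path path = Paths.get("notes.txt"); // made-up file name
                // Write with an explicit encoding instead of the platform default...
                Files.write(path, Arrays.asList("première ligne", "zweite Zeile"),
                        StandardCharsets.UTF_8);
                // ...and state the same encoding again when reading the bytes back.
                for (String line : Files.readAllLines(path, StandardCharsets.UTF_8)) {
                    System.out.println(line);
                }
            }
        }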
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the metadata and you can't get it wrong. (It also adds the endianness preamble to the file.)
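    For Point 4, a sketch using the standard StAX writer (my example; element and file names made up). The writer emits the XML declaration itself, so the declared encoding and the actual byte encoding cannot drift apart:

        import java.io.FileOutputStream;
        import java.io.OutputStream;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class XmlEncoderExample {
            public static void main(String[] args) throws Exception {
                try (OutputStream out = new FileOutputStream("data.xml")) {
                    XMLStreamWriter xml = XMLOutputFactory.newInstance()
                            .createXMLStreamWriter(out, "UTF-8");
                    xml.writeStartDocument("UTF-8", "1.0"); // declaration written for us
                    xml.writeStartElement("message");
                    xml.writeCharacters("Grüße");           // encoded consistently
                    xml.writeEndElement();
                    xml.writeEndDocument();
                    xml.close();
                }
            }
        }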
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what the encoders in the Java and .NET runtimes are designed to do: you read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
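    A small aside on that 16-bit char type (my example, not the author's): in Java it holds UTF-16 code units, so a character outside the Basic Multilingual Plane occupies two chars:

        public class CharIsUtf16 {
            public static void main(String[] args) {
                String clef = "𝄞"; // U+1D11E, outside the Basic Multilingual Plane
                System.out.println(clef.length());                         // 2 code units
                System.out.println(clef.codePointCount(0, clef.length())); // 1 character
            }
        }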
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account with text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. That's because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A–Z without any other characters, accents, etc. – we're good to go. But the second you carry those same assumptions into an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big-iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same as the UTF-8 character set anyway.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or code pages) developed early on. But we ended up with most everyone using a standard collection of code pages where the first 127 values were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 was used for what were called DBCS (double-byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS code page.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small-volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because, as the standard, the way it works gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard code pages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian code pages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values to begin a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character – and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode – all of Unicode – are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character like ß, which their text editor encodes using the code page for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but about files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding – the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    OK, you're reading and writing files correctly, but what about inside your code? This is where it's easy – Unicode. That's what the encoders in the Java and .NET runtimes are designed to do: you read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters that will fail to compile.
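    For instance (my example, not the poster's), the escape below is translated before parsing, so the marked line fails to compile if uncommented, even though the escape sits inside a string literal:

        public class EscapeTrap {
            public static void main(String[] args) {
                // The next line does NOT compile if uncommented: the escape is
                // turned into a double quote *before* parsing, which ends the
                // string literal early.
                // String s = "\u0022";

                // Write the quote with a string escape instead:
                String ok = "\""; // a literal double quote
                System.out.println(ok);
            }
        }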
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point in someone who is creating an inventory system for a standalone store crafting a solution that supports multiple languages.
    And another example: with high-volume systems, moving and storing bytes is relevant. As such, one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases reduces the total load on the system; in such systems, incremental savings affect operating costs and, through speed, marketing advantage.
