More questions re: UTF-8 and Latin encoding

Hi
I've really tried to do my homework before posting here, so I hope I'm not double-posting...
I've read all the discussions I can find, followed instructions on http://homepage.mac.com/thgewecke/iwebchars.html and spoken twice to my ISP who provides the hosting service.
My ISP's server is, of course, forcing the browser to interpret the HTML as Latin-1, and if I change the encoding manually to UTF-8, my page looks fine.
My ISP is, of course, blaming Apple, saying that Macs use a slightly different UTF-8 character set. I suspect that's bollocks.
My ISP is also blaming iWeb, saying that it is designed to work only with .Mac. I've said that's not the case, and that it can publish separately to a folder for FTP upload.
My ISP insists that correctly coded UTF-8 pages will work fine, and that their server is not forcing the browser to interpret it any way or the other.
Is there any way I can prove that the HTML is correct, and that the server IS forcing browsers to interpret the character set as Latin?
If I can prove that, then I believe I have a valid case for them to change it, and accept the problem is theirs.
I would greatly appreciate suggestions from those who understand these matters better than me. Including, if you think it appropriate, what my next approach to the ISP should include...?
Many thanks to anyone who takes the time to help.
Kind regards
Steve
PS I should also mention that my ISP does not allow .htaccess files to be placed, administered, or used by users.
Can't get another host. Can't afford .mac.

All the statements made by your ISP are totally wrong, of course. If you want to prove that their server forces Latin-1, you can put your url into a site like this one:
http://web-sniffer.net/
It shows that the HTTP response header sent from their server tells all browsers "Content-Type: text/html; charset=iso-8859-1".
From your description it doesn't sound like they are capable of understanding this problem or fixing it for you. If that is true, and if you really have no other choice, I would suggest that before uploading you open your pages with TextEdit (set to Plain text, ignore rich text commands in html, and UTF-8 encoding) and just do Save As after setting the encoding to Western ISO Latin-1.
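If you would rather verify the header yourself rather than rely on a third-party site, a few lines of Java will show exactly what the server sends. This is only a minimal sketch, assuming a standard Java install and the stock HttpURLConnection API; the URL is a placeholder for your own published page.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CheckCharset {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; substitute the address of your own published page.
            URL url = new URL("http://www.example.com/index.html");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("HEAD");   // headers only, no body needed
            conn.connect();
            // Whatever charset appears here is what browsers will obey by default,
            // regardless of any meta tag inside the HTML.
            System.out.println("Content-Type: " + conn.getHeaderField("Content-Type"));
            conn.disconnect();
        }
    }

If that prints charset=iso-8859-1 while your uploaded file is UTF-8, it is the server header, not your HTML, that the browser is obeying.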

Similar Messages

  • [svn:fx-trunk] 7661: Change from charset=iso-8859-1" to charset=utf-8" and save file with utf-8 encoding.

    Revision: 7661
    Author:   [email protected]
    Date:     2009-06-08 17:50:12 -0700 (Mon, 08 Jun 2009)
    Log Message:
    Change from charset=iso-8859-1" to charset=utf-8" and save file with utf-8 encoding.
    QA Notes:
    Doc Notes:
    Bugs: SDK-21636
    Reviewers: Corey
    Ticket Links:
        http://bugs.adobe.com/jira/browse/iso-8859
        http://bugs.adobe.com/jira/browse/utf-8
        http://bugs.adobe.com/jira/browse/utf-8
        http://bugs.adobe.com/jira/browse/SDK-21636
    Modified Paths:
        flex/sdk/trunk/templates/swfobject/index.template.html

    same problem here with wl8.1
    have you solved it and if yes, how?
    thanks

  • In order to install iOS 7 on my 4S, I need 3.1 GB of free storage and presently have only 1.3 GB. My biggest usage is the camera roll at 9.8 GB. Photo Stream is at 584 MB. My question is: should I delete the camera roll pics? They are saved on my computer.

    In order to install iOS 7 on my 4S, I need 3.1 GB of free storage and presently have only 1.3 GB. My biggest usage is the camera roll at 9.8 GB. Photo Stream is at 584 MB. My question is: should I delete the camera roll pics? They are saved on my computer.

    At 1.3GB available, your storage space is getting very thin no matter what you do further.  As long as you have the photos in the camera roll saved on your computer you can delete them.  Just make sure first that they are in your photo software.  If updating in iTunes, just backup/sync first, then update, and that will save the camera roll in the backup as well.
    At only 1.3GB you eventually are going to find yourself unable to install or update some app, or unable to take any new pictures, or some other limitation that will force you to free up space anyway.  So now, before updating, sounds like a good time to make sure everything is backed up or stored somewhere else, and clean house a bit, then update.

  • Firefox 4b7 does not complete «More Answers from-» action on Formspring.me userpages; previous Firefox (3.6) was able to load more questions and answers

    Even in safe mode, Firefox 4b7 is not able to complete «More Answers from…» action on Formspring.me userpages, it just displays «loading more questions…» for a seemingly endless amount of time. (Firefox 3.6 and any other browser, such as Safari, displays «loading more questions…» for a short period of time when AJAX works, then the questions and answers are actually loaded and displayed.) In order to reproduce, load Firefox 4b7 in Safe Mode, visit www.formspring.me/krylov and click on «More Answers from Konstantin Krylov» (the bottom link). You may try any other user, www.formspring.me/teotmin or www.formspring.me/anandaya for example.

    what a waste of money sending an engineer to "fix a fault" which does not exist.  Precisely.
    In my original BE post to which Tom so helpfully responded, I began: it seems to me that DLM is an excellent concept with a highly flawed implementation, both technically and administratively. I think that sending out an engineer to fix an obviously flawed profile is the main example of an administrative flaw. I understand (I can't remember the source, maybe Tom again) that they are sometimes relaxing the requirement for a visit before a reset.
    Maybe the DLM system is too keen on stability vs speed.  This will keep complaints down from many people: most users won't notice speed too much as long as it is reasonable, but will be upset if their Skype calls and browsing are being interrupted too often.  
    However, it does lead to complaints from people who notice the drops after an incident (as in your thread that has drawn lots of interest), or who only get 50 instead of 60.  The main technical flaw is that DLM can so easily be confused by drops from loss of power, too much modem recycling, etc., and then takes so long to recover.

  • Thanks. Can you please help me out with one more question: how can I transfer files like PDF, .docx and PPT from my laptop to an iPhone 5? Please, it's urgent.

    Thanks. Can you please help me out with one more question: how can I transfer files like PDF, .docx and PPT from my laptop to an iPhone 5? Please, it's urgent.

    See your other post
    First, i want to know how can i pair my iPhone 5 with my lenovo laptop?

  • Japanese, Question Marks, Locales, Eclipse, and Windows XP ????

    Hello. I am having some issues localizing JSP to Japanese. I have read a lot of stuff on the topic. I have my .properties file in unicode with native2ascii, etc.
    When I debug under Eclipse 3.0, I see the Japanese characters correctly displayed in my properties file and inside of strings internal to the program. However, when I try to print them with a System.out.println, I get question marks (??????).
    My reading tells me that the ???? indicate that the characters cannot be displayed. I am somewhat confused because in the same Eclipse context I can clearly see the Japanese characters in the debugging window.
    Thus I am missing the part where I set my standard output to correctly display the characters like Eclipse is displaying them in windows other than the "Console" window.
    My default encoding is CP1252. If I do something like:
    out = new java.io.PrintStream(System.out, true, "UTF-8");
    and print my unicoded resource from the bundle I get the UTF-8 character representation (��������������������������������������������). With System.out.println I get ?????
    My first reaction would be that the Japanese fonts aren't on my system, but clearly they are as I can see them in other windows.
    When I try to show a Japanese resource on the web page that is the result of the jsp file I get ????. I can display the same characters UTF-8 encoded in a php page.
    Here is another example:
    java.util.Locale[] locales = { new java.util.Locale("en", "US"), new java.util.Locale("ja", "JP"),
                                   new java.util.Locale("es", "ES"), new java.util.Locale("it", "IT") };
    for (int x = 0; x < locales.length; ++x) {
        String displayLanguage = locales[x].getDisplayLanguage(locales[x]);
        System.out.println(locales[x].toString() + ": " + displayLanguage);
    }
    displays:
    en_US: English
    ja_JP: ???
    es_ES: español
    it_IT: italiano
    instead of the correct Japanese characters.
    What's the secret?
    Thanks.
    -- Gary

    What do you want to do exactly? 1. Make a windowed application? 2. Make a console application? 3. Make a JSP webpage?
    1. If it's a windowed application, there's nothing to worry about if you use Swing. But if you use AWT, it's time to switch to Swing.
    2. If you're making a console application, a solution does exist, as others have pointed out, but you'd better forget it because no console on any platform supports Unicode (Linux xterm may be an exception? But it probably has font problems). So, even if you could display the characters on your computer, the solution isn't universal. You can't ask every user of yours to switch the system locale and install whatever fonts just to display a few characters!!
    3. If you're making JSP, I'd advise you to use UTF-8 in your webpages. Most browsers nowadays (probably more than 90%) support UTF-8. All you need is to add the following JSP headers to every page:
    <%@ page contentType="text/html;charset=utf-8" %>
    <%@ page pageEncoding="iso-8859-1" %>
    Now, every out.println(s); of yours will send the correct data to the browser without the least effort from you. All conversions are automatic!
    However, just to be extra sure, you could add this HTML meta header:
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    You use Tomcat, right? I do, and I don't have any problem.
    Last words:
    But, if all you want to do with System.out.println is for debugging, you could use
    JOptionPane.showMessageDialog(null, "your string here");
    But you'd better have Java 5, or at least 1.4.2, if you want to have everything displayed correctly.
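    For what it's worth, here is a minimal hedged sketch of the two debugging options mentioned above, in plain Java with made-up strings: wrapping System.out in a UTF-8 PrintStream (which only helps if the terminal's encoding and font can actually render Japanese), and showing the string in a Swing dialog, which side-steps the console entirely.

        import java.io.PrintStream;
        import javax.swing.JOptionPane;

        public class EncodingDebug {
            public static void main(String[] args) throws Exception {
                String japanese = "\u65e5\u672c\u8a9e";   // "Japanese" written in Japanese

                // Option 1: a UTF-8 console stream; still at the mercy of the terminal itself.
                PrintStream utf8Out = new PrintStream(System.out, true, "UTF-8");
                utf8Out.println(japanese);

                // Option 2: a Swing dialog renders Unicode regardless of the console encoding.
                JOptionPane.showMessageDialog(null, japanese);
            }
        }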

  • Unicode, UTF-8 and java servlet woes

    Hi,
    I'm writing a content management system for a website about Russian music.
    One problem I'm having is trying to get a Java servlet to talk Unicode to the content management client.
    The client makes a request for a band, the server then sends the XML to the client.
    The XML reading works fine and the client displays the unicode fine from an XML file read locally (so the XMLReader class works fine).
    The servlet unmarshals the request perfectly (its just a filename).
    I then find the correct class and pass it through the XML writer. That returns the XML as a string, which I simply put into the output stream.
    out.write(XMLWrite(selectedBand));
    I have set the correct header property:
    response.setContentType("text/xml; charset=UTF-8");
    And to read it I
             //Make our URL
             URL url = new URL(pageURL);
             HttpURLConnection conn = (HttpURLConnection)url.openConnection();
             conn.setRequestMethod("POST");
             conn.setDoOutput(true); // want to send
             conn.setRequestProperty( "Content-type", "application/x-www-form-urlencoded" );
             conn.setRequestProperty( "Content-length", Integer.toString(request.length()));
             conn.setRequestProperty("Content-Language", "en-US"); 
             //Add our paramaters
             OutputStream ost = conn.getOutputStream();
             PrintWriter pw = new PrintWriter(ost);
             pw.print("myRequest=" + URLEncoder.encode(request, "UTF-8")); // here we "send" our body!
             pw.flush();
             pw.close();
             //Get the input stream
             InputStream ois = conn.getInputStream();
                InputStreamReader read = new InputStreamReader(ois);
             //Read
             int i;
             String s="";
             Log.Debug("XMLServerConnection", "Responce follows:");
         while((i = read.read()) != -1 ){
          System.out.print((char)i);
          s += (char)i;
         }
         return s;
    Now when I print
    read.getEncoding()
    it claims:
    ISO8859_1
    Something's wrong there, so if I force it to accept UTF-8:
    InputStreamReader read = new InputStreamReader(ois, "UTF-8");
    it now claims it is
    UTF8
    However, all of the data has lost its Unicode; any Unicode character is replaced with a question mark character! This happens even when I don't force the input stream to be UTF-8.
    More so if I view the page in my browser, it does the same thing.
    I've had a look around and I can't see a solution to this. Have I set something up wrong?
    I've set, "-encoding utf8" as a compiler flag, but I don't think this would affect it.

    I don't know what your problem is but I do have a couple of comments -
    1) In conn.setRequestProperty( "Content-length", Integer.toString(request.length())); the length of your content is not request.length(). It is the length of the URL-encoded data.
    2) Why do you need to send URL encoded data? Why not just send the bytes.
    3) If you send bytes then you can write straight to the OutputStream and you won't need to convert to characters to write to PrintWriter.
    4) Since you are reading from the connection you need to setDoInput() to true.
    5) You need to get the character encoding from the response so that you can specify the encoding in
    InputStreamReader read = new InputStreamReader(ois, characterEncoding);
    6) Reading a single char at a time from an InputStream is very inefficient.
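    As a hedged illustration of points 5 and 6 (class, method and variable names are invented, not taken from the original code), the sketch below takes the charset from the response's Content-Type header, falls back to UTF-8 if none is declared, and reads in bulk instead of one character at a time:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;

        public class ResponseReader {
            // Assumes 'conn' is the HttpURLConnection from the code above, after the POST body was sent.
            public static String readResponse(HttpURLConnection conn) throws Exception {
                String charset = "UTF-8";                    // fallback if the server declares nothing
                String contentType = conn.getContentType();  // e.g. "text/xml; charset=UTF-8"
                if (contentType != null) {
                    for (String part : contentType.split(";")) {
                        part = part.trim();
                        if (part.toLowerCase().startsWith("charset=")) {
                            charset = part.substring("charset=".length());
                        }
                    }
                }
                StringBuilder response = new StringBuilder();
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), charset));
                char[] buf = new char[4096];
                int n;
                while ((n = reader.read(buf)) != -1) {       // bulk reads instead of char-by-char
                    response.append(buf, 0, n);
                }
                reader.close();
                return response.toString();
            }
        }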

  • ColdFusion 11: custom serialisers. More questions than answers

    G'day:
    I am reposting this from my blog ("ColdFusion 11: custom serialisers. More questions than answers") at the suggestion of Adobe support:
    @dacCfml @ColdFusion Can you post your queries at http://t.co/8UF4uCajTC for all cfclient and mobile queries.— Anit Kumar Panda (@anitkumar85) April 29, 2014
    This particular question is not regarding <cfclient>, hence posting it on the regular forum, not on the mobile-specific one as Anit suggested. I have edited this in places to remove language that will be deemed inappropriate by the censors here. Changes I have made are in [square brackets]. The forums software here has broken some of the styling, but so be it.
    G'day:
    I've been wanting to write an article about the new custom serialiser one can have in ColdFusion 11, but having looked at it I have more questions than I have answers, so I have put it off. But, equally, I have no place to ask the questions, so I'm stymied. So I figured I'd write an article covering my initial questions. Maybe someone can answer them.
    ColdFusion 11 has added the notion of a custom serialiser that a website can have (docs: "Support for pluggable serializer and deserializer"). The idea is that whilst Adobe can dictate the serialisation rules for its own data types, it cannot sensibly infer how a CFC instance might get serialised: as each CFC represents a different data "schema", there is no "one size fits all" approach to handling it. So this is where the custom serialiser comes in. Kind of. If it wasn't a bit rubbish. Here's my exploration thus far.
    One can specify a custom serialiser by adding a setting to Application.cfc:
    component {
        this.name = "serialiser01";
        this.customSerializer = "Serialiser";
    }
    In this case the value - Serialiser - is the name of a CFC, eg:
    // Serialiser.cfc
    component {

        public function canSerialize(){
            logArgs(args=arguments, from=getFunctionCalledName());
            return true;
        }

        public function canDeserialize(){
            logArgs(args=arguments, from=getFunctionCalledName());
            return true;
        }

        public function serialize(){
            logArgs(args=arguments, from=getFunctionCalledName());
            return "SERIALISED";
        }

        public function deserialize(){
            logArgs(args=arguments, from=getFunctionCalledName());
            return "DESERIALISED";
        }

        private function logArgs(required struct args, required string from){
            var dumpFile = getDirectoryFromPath(getCurrentTemplatePath()) & "dump_#from#.html";
            if (fileExists(dumpFile)){
                fileDelete(dumpFile);
            }
            writeDump(var=args, label=from, output=dumpFile, format="html");
        }

    }
    This CFC needs to implement four methods:
    canSerialize() - indicates whether something can be serialised by the serialiser;
    canDeserialize() - indicates whether something can be deserialised by the serialiser;
    serialize() - the function used to serialise something
    deserialize() - the function used to deserialise something
    I'm being purposely vague on those functions for a reason. I'll get to that.
    The first [issue] in the implementation here is that for the custom serialisation to work, all four of those methods must be implemented in the serialisation CFC. So common sense would dictate that a way to enforce that would be to require the CFC to implement an interface. That's what interfaces are for. Now I know people will argue the merit of having interfaces in CFML, but I don't really give a [monkey's] about that: CFML has interfaces, and this is what they're for. So when one specifies the serialiser in Application.cfc and it doesn't fulfil the interface requirement, it should error. Right then. When one specifies the inappropriate tool for the job. What instead happens is that if the functions are omitted, one will get erratic behaviour in the application, through to outright errors when ColdFusion goes to call the functions and cannot find them. EG: if I have canSerialize() but no serialize() method, CF will error when it comes to serialise something:
    JSON serialization failure: Unable to serialize to JSON.
    Reason : The method serialize was not found in component C:/wwwroot/scribble/shared/git/blogExamples/coldfusion/CF11/customerserialiser/Serialiser.cfc.
    The error occurred in C:/wwwroot/scribble/shared/git/blogExamples/coldfusion/CF11/customerserialiser/testBasic.cfm: line 4
    2 : o = new Basic();
    3 :
    4 : serialised = serializeJson(o);
    5 : writeDump([serialised]);
    6 :
    Note that the error comes when I go to serialise something, not when ColdFusion is told about the serialiser in the first place. This is just lazy/thoughtless implementation on the part of Adobe. It invites bugs, and is just sloppy.
    The second [issue] follows immediately on from this.
    Given my sample serialiser above, I then run this test code to examine some stuff:
    o = new Basic();
    serialised = serializeJson(o);
    writeDump([serialised]);
    deserialised = deserializeJson(serialised);
    writeDump([deserialised]);
    So all I'm doing is using (de)serializeJson() as a baseline to see how the functions work. Here's Basic.cfc, btw:
    component { }
    And the test output:
    array
    1
    SERIALISED
    array
    1
    DESERIALISED
    This is as one would expect. OK, so that "works". But now... you'll've noted I am logging the arguments each of the serialisation methods receives as I go.
    Here's the arguments passed to canSerialize():
    canSerialize - struct
    1
    XML
    My reaction to that is: "[WTH]?" Why is canSerialize() being passed the string "XML" when I'm trying to serialise an object of type Basic.cfc?
    Here's the docs for canSerialize() (from the page I linked to earlier):
    CanSerialize - Returns a boolean value and takes the "Accept Type" of the request as the argument. You can return true if you want the customserialzer to serialize the data to the passed argument type.
    Again, back to "[WTH]?" What's the "Accept type" of the request? And what the hell has the request got to do with a call to serializeJson()? You might think that "Accept type" references some HTTP header or something, but there is no "Accept type" header in the HTTP spec (that I can find: "Hypertext Transfer Protocol -- HTTP/1.1: 14 Header Field Definitions"). There's an "Accept" header (in this case: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"), and other ones like "Accept-Encoding", "Accept-Language"... but none of which contain a value of "XML". Even if there was... how would it be relevant to the question as to whether a Basic.cfc instance can be serialised? Raised as bug: 3750730.
    serialize() gets more sensible arguments:
    serialize - struct
    1
    component scribble.shared.git.blogExamples.coldfusion.CF11.customerserialiser.Basic
    2
    JSON
    So the first argument is the object to serialise (which surely should be part of the question canSerialize() is supposed to ask), and the second is the format to serialise to. Cool.
    canDeserialize() is passed this:
    canDeserialize - struct
    1
    JSON
    I guess it's because it's being called from deserializeJson(), so it's legit to expect the input value is indeed JSON. Fair enough. (Note: I'm not actually passing it JSON, but that's beside the point here).
    And deserialize() is passed this:
    deserialize - struct
    1
    SERIALISED
    2
    JSON
    3
    [empty string]
    The first argument is the value to work on, and the second is the type of deserialisation to do. I have no idea what the third argument is for, and it's not mentioned directly or indirectly on that docs page. So dunno what the story is there.
    The next issue isn't a code-oriented one, but an implementation one: how the hell are we expected to work with this?
    The only way to work here is for each function to have a long array of IF/ELSEIF statements which somehow identify each object type that is serialisable, and then return true from canSerialise(), or in the case of serialize(), go ahead and do the serialisation. So this means this one CFC needs to know about everything which can be serialised in the entire application. Talk about a failure in "separation of concerns".
    You know the best way of determining if an object can be serialised? Ask it! Don't rely on something else needing to know. This can be achieved very easily in one of two ways:
    Check to see if the object implements a "Serializable" interface, which requires a serialize() method to exist.
    Or simply take the duck-typing approach: if a CFC implements a serialize() method: it can be serialised. By calling that method. Job done.
    Either approach would work fine, keeps things nicely encapsulated, and I see merits in both. And either make far more sense than Adobe's approach. Which is like something from the "OO Failures Special Needs" class.
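    Purely as a hedged illustration of that interface idea, here is what the pattern looks like in Java rather than CFML (class and method names are invented for the example): the framework asks the object itself whether it can serialise, instead of keeping one central if/elseif chain that knows about every type in the application.

        // Hypothetical interface: anything that knows how to serialise itself implements it.
        interface JsonSerializable {
            String serializeToJson();
        }

        // Each class owns its own serialisation rules; no central registry of types needed.
        class Band implements JsonSerializable {
            private final String name;
            Band(String name) { this.name = name; }

            @Override
            public String serializeToJson() {
                return "{\"name\":\"" + name + "\"}";
            }
        }

        class Serialiser {
            static String toJson(Object o) {
                if (o instanceof JsonSerializable) {          // "can this object serialise itself?"
                    return ((JsonSerializable) o).serializeToJson();
                }
                throw new IllegalArgumentException("No serialisation rule for " + o.getClass());
            }
        }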
    Deserialisation is trickier. Because it relies on somehow working out how to deserialise() an object. I'm not sure of the best approach here, but - again - how to deserialise something should be as close to the thing needing deserialisation as possible. IE: something in the serialised data itself which can be used to bootstrap the process.
    This could simply be a matter of specifying a CFC type at a known place in the serialised data. EG: Adobe stipulates that if the serialised data is JSON, and at the top level of the JSON is a key eg: type, and the value is an extant CFC... use that CFC's deserialize() method. Or it could look for an object which contains a type and a method, or whatever. But Adobe can specify a contract there.
    The only place I see a centralised CFC being relevant here is for a mechanism for handling serialised data that is neither a ColdFusion internal type, nor identifiable as above. In this case, perhaps they could provide a mechanism for a serialisation router, which basically has a bunch of routes (if/elseifs if need be) which contains logic as to how to work out how to deserialise the data. But it should not be the actual deserialiser, it should simply have the mechanism to find out how to do it. This is actually pretty much the same in operation as the deserialize() approach in the current implementation, but it doesn't need the canDeserialize() method (it can return false at the end of the routing), and it doesn't need to know about serialising. And also it's not the main mechanism to do the deserialisation, it's just the fall back if the prescribed approach hasn't been used.
    TBH, this still sounds a bit jerry-built, and I'm open for better suggestions. This is probably a well-trod subject in other languages, so it might be worth looking at how the likes of Groovy, Ruby or even PHP (eek!) achieve this.
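    Again only as a hedged sketch of that "router, not deserialiser" idea, in Java with invented names: the router maps whatever type identifier the serialised data declares to a registered deserialisation routine, and simply signals failure when nothing is registered, rather than doing the deserialisation itself.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Function;

        public class DeserialisationRouter {
            // Maps the declared "type" value (read from the serialised payload) to a routine.
            private final Map<String, Function<String, Object>> routes = new HashMap<>();

            public void register(String type, Function<String, Object> route) {
                routes.put(type, route);
            }

            public Object deserialise(String type, String payload) {
                Function<String, Object> route = routes.get(type);
                if (route == null) {
                    // Equivalent of canDeserialize() returning false at the end of the routing.
                    throw new IllegalArgumentException("No deserialiser registered for " + type);
                }
                return route.apply(payload);
            }
        }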
    There's still another issue with the current approach. And this demonstrates that the Adobe guys don't actually work with either CFML applications or even modern websites. This approach only works for a single, stand-alone website (like how we might have done in 2001). What if I'm not in the business of building websites, but I build applications such as FW/1 or ColdBox or the like? Or any sort of "helper" application. They cannot use the current Adobe implementation of the customserializer. Why? Because the serialisation code needs to be in a website-specific CFC. There's no way for Luis to implement a custom serialiser in ColdBox (for example), and then have it work for someone using ColdBox. Because it relies on either editing Application.cfc to specify a different CFC, or editing the existing customSerializer CFC. Neither of which are very good solutions. This should have been immediately apparent to the Adobe engineer(s) implementing this stuff had they actually had any experience with modern web applications (which generally aren't just a single monolithic site, but an aggregation of various other sub applications). Equally, I know it's not a case of having thought about this and [I'm just missing something], because when I asked them the other day, at first they didn't even get what I was asking, but when I clarified were just like "oh yeah... um... err... yeah, you can't do that. We'll... have to... ah yeah". This has been raised as bug 3750731.
    So I declare the intent here valid, but the implementation to be more alpha- / pre-release- quality, not release-ready.
    Still: it could be deprecated and reworked fairly easily. I've raised this as bug 3750732.
    Or am I missing something?
    Adam

    Yes, you can easily add additional questions to the Lookup.WebClient.Questions Lookup to allow some additional choices. We have added quite a few additional choices, though we have noticed that removing them once people have selected them causes some errors.
    You can also customize the required number of questions to select when each user sets them up as well as the number required to be correct to reset the password, these options are in the System Configuration settings.
    If you need multi-language versions of the questions, you will also need to modify the appropriate language resource file in the xlWebApp.war file to provide the necessary translations for the values entered into the Lookup.

  • Can I force JDBC Driver use UTF-8 Charset to encode?

    A similar way exists in MySQL, like
    jdbc:mysql://localhost:3306/test?useUnicode=true&amp;characterEncoding=UTF-8
    Thanks,

    You must describe your requirements in more detail. There is generally nothing special about reading/writing to a database that has UTF-8 (AL32UTF8) as its database character set. Data is read into/written from String variables, which are encoded in UTF-16 by Java design. JDBC transparently converts between UTF-16 and UTF-8.
    If you want to output a string into a file in UTF-8 encoding, it is no longer an Oracle problem but a normal Java programming issue. You need to create an appropriate OutputStreamWriter for your FileOutputStream.
    new OutputStreamWriter( new FileOutputStream(file), "UTF-8" );
    -- Sergiusz
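    As a hedged, self-contained sketch of that last point (file name and sample text are invented; in a real program the String would come from a JDBC ResultSet), choosing UTF-8 for the output is purely a matter of the writer you construct:

        import java.io.FileOutputStream;
        import java.io.OutputStreamWriter;
        import java.io.Writer;

        public class Utf8FileWrite {
            public static void main(String[] args) throws Exception {
                // JDBC already hands strings over as UTF-16 internally; only the
                // output encoding has to be chosen explicitly.
                String value = "Grüße, Ελλάδα, 日本語";
                Writer out = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");
                out.write(value);
                out.close();
            }
        }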

  • BUG?? UTF-8 non-Latin database chars in IR csv export file not export right

    Hello,
    I have this issue: my database character set is UTF-8 (AL32UTF8) and it contains data, in a table used in an IR, that are Greek (non-Latin). While I can see them displayed correctly in the IR, and also via select / in Object Browser in SQL Workshop, when I try to Download as CSV the produced CSV does not have the Greek characters exported correctly, while the Latin ones are OK.
    The problem is the same whether I try IE or Firefox. The export to HTML, however, works successfully and I see the Greek characters there correctly!
    Is there any issue with UTF-8 and non-Latin characters in export to CSV from IRs? Can someone confirm this, or does anyone have a similar export problem with a UTF-8 DB and non-Latin characters?
    How could I solve this issue?
    TIA

    Hello Joel,
    thanks for taking the time to answer my issue. Well, this does not work for my case, as the source of data (the database character set) is UTF-8. The data inside the database that are shown in the IR on the screen are UTF-8, and this is done correctly. You can see this in my example. The actual data in the database are from multiple languages: English, Greek, German, Bulgarian etc. That's why I selected the UTF-8 character set when implementing the database, and this requirement was for all character data. Also, the suggested character set from Oracle is Unicode when you create a database and have to support data from multiple languages.
    The requirement is that what I see in the IR (I mean in Display) I need to export to the CSV file correctly, and this is what I expect the Download as CSV feature to achieve. I understand that you had Excel in mind when implementing this feature, but a CSV is just an easy way to export the data - a Comma Separated Values file - not necessarily something to open directly in Excel. I also want to add here that in Excel you can import the data in UTF-8 encoding when importing from CSV, which is fine for my customer. Also, Excel 2008 and later understands a UTF-8 CSV file if you have placed the UTF-8 BOM character at the start of the file (well, it drops you into the wizard, but it's almost the same as importing).
    Since the feature you describe, if I understood correctly, always creates an ANSI-encoded file in every case, even when the database character set is UTF-8, it is impossible to export correctly if I have data that are neither Latin nor in the other 128 country-specific characters I choose in Globalization attributes, and these data are what I see in Display and need to export to CSV. I believe that this feature, in the case where the database character set is UTF-8, should create a CSV file that is UTF-8 encoded and export correctly what I see on the screen, and I suspect that others would also expect this behaviour. Or at least you could allow/implement(?) this behaviour when Automatic CSV encoding is set to No. But I strongly believe - and especially from the eyes of a user - that to have different things on screen and in the resulting CSV file is a bug, not a feature.
    I would like to have comments on this from other people here too.
    Dionyssis
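    The BOM trick Dionyssis mentions is easy to reproduce outside APEX if you ever need to post-process an export yourself. A hedged Java sketch (file name and rows are invented) that Excel should then open with the Greek intact:

        import java.io.FileOutputStream;
        import java.io.OutputStreamWriter;
        import java.io.Writer;

        public class BomCsv {
            public static void main(String[] args) throws Exception {
                Writer out = new OutputStreamWriter(new FileOutputStream("export.csv"), "UTF-8");
                out.write('\uFEFF');                 // UTF-8 BOM so Excel detects the encoding
                out.write("id,name\n");
                out.write("1,Αθήνα\n");              // Greek survives the round trip
                out.write("2,München\n");
                out.close();
            }
        }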

  • Synching and re-encoding Apple Lossless tracks

    Hi,
    I'm a brand new Apple customer and first time poster - bought my first iPod recently - the 80GB version. Anyhow, I'm brand new to iPod and iTunes. To kick off my relationship I ripped about 200 of my favourite CDs using Apple Lossless. Now, it's all wonderful and, being a bit of an audiophile (I'm not really that smug!), it sounds pretty good. So, I'm surprised to find that it really chews battery life. Three hours or so is all I can get. This brings me onto my question:
    Is it possible to resize a track when synching? I ask because in the past I've used Windows Media Player with a Sony Ericsson Walkman phone and I could resize and re-encode tracks as I transferred. I've searched all that I could and there's nothing glaring at me to suggest that I can or cannot, so I'm a little unsure. Perhaps I'm missing something.
    Anyhow, hope someone can advise....
    Thanks
    VR

    Is it possible to resize a track when synching.
    Only with the iPod Shuffle.
    And you are correct: resizing the songs will most likely cure the problem.
    The iPod has a buffer of ~32MB, which is about 8 songs in AAC at 128 kbps. With Apple Lossless, it's about 1 song.
    If a song is larger than the buffer, the HD will spin continuously until it can all fit into the buffer, or spin up again when it needs to load more data. In your case, it will likely spin up for every song.

  • Thx for the help today - one more question

    Thx for all the help today. My Flash works perfectly now. One more question... I get a frame around my Flash when using Internet Explorer and have the mouse over it... why?
    check it out:
    http://www.ardent.se

    Search Google and this forum for "active content" - it's been front page news for weeks - hundreds of discussions, blogs and articles all over the web.
    --> Adobe Certified Expert (ACE)
    --> www.mudbubble.com :: www.keyframer.com
    -->
    http://flashmx2004.com/forums/index.php?
    -->
    http://www.macromedia.com/devnet/flash/articles/animation_guide.html
    -->
    http://groups.google.com/advanced_group_search?q=group:*flash*&hl=en&lr=&ie=UTF-8&oe=UTF-8
    cjh81 wrote:
    > Thx for all the help today. My flash works perfect now.
    One more question... I
    > get a frame around my flash when using internet explorer
    and have the mouse
    > over it... why?
    >
    > check it out:
    http://www.ardent.se
    >

  • Locale and character encoding. What to do about these dreadful ÅÄÖ??

    It's time for me to get it into my head how this works. Please, help me understand before I go nuts.
    I'm from Sweden and we use a few of these weird characters like ÅÄÖ.
    If I create a file called "övrigt.txt" in Windows, then the file will turn up as "?vrigt.txt" on my Linux PC (at least in the console; sometimes it looks OK in other apps in X). The same is true if I create the file in Linux and copy it to Windows: it will look just as weird on the other side.
    As I (probably) can't change the way Windows works, my question is what I have to do to have these two systems play nicely with each other?
    This is the output from locale:
    LANG=en_US.utf8
    LC_CTYPE="en_US.utf8"
    LC_NUMERIC="en_US.utf8"
    LC_TIME="en_US.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_US.utf8"
    LC_MESSAGES="en_US.utf8"
    LC_PAPER="en_US.utf8"
    LC_NAME="en_US.utf8"
    LC_ADDRESS="en_US.utf8"
    LC_TELEPHONE="en_US.utf8"
    LC_MEASUREMENT="en_US.utf8"
    LC_IDENTIFICATION="en_US.utf8"
    LC_ALL=
    Is there anything here I should change? I have tried using ISO-8859-1 with no luck. Mind you that I want to have the system-wide language set to English. The only thing I want to achieve is that "Ö" on Windows should turn up as "Ö" in Linux as well, and vice versa.
    Please save my hair from being torn off, I'm going bald here...

    Hey, thanks for all the answers!
    I share my files in a number of ways, but mainly through a web application called Ajaxplorer (very nice btw...). The thing is that as soon as a Windows user uploads anything with special characters in the file name, my programs - xbmc, the console etc. - refuse to read them correctly. Other ways of sharing are through file copying with USB sticks, ssh etc. It's really not the way of sharing that is the problem, I think, but rather the special characters being used sometimes.
    I could probably convert the filenames with the suggested applications, but then I'll put the Windows users in trouble when they want to download them again, won't I?
    I realize that it's cp1252 that is the bad guy in this drama. Is there no way to set/use cp1252 as a character encoding in Linux? It's probably a bad idea as UTF-8 seems like the way of the future, but the fact that these two OSes can't communicate too well in this area is pretty useless if you ask me.
    To wrap this up I'll answer some questions...
    @EVRAMP: I'm actually using pcmanfm, but that is only for me and I'm not dealing very often with vfat partitions to be honest.
    @pkervien: Well, I think I mentioned my forms of sharing above. (Fun to see some Arch Swedes here!)
    @quarkup: locale.gen is edited and both sv.SE and en_US have utf-8 and ISO-8859 enabled and generated.
    ...and to clarify things even further: it doesn't matter if I get or provide a file via a USB stick, samba, ftp or by paper. All I want is for "Ö" to always be "Ö", everywhere.
    I can't believe how hard this is to get around. Linus is Finnish, for crying out loud. I thought he'd have sorted this out as the first thing he did. Maybe he doesn't deal with Windows or their users at all.
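    To make the mechanics concrete, here is a small hedged Java sketch (nothing distribution-specific, just the two encodings involved) of why "övrigt.txt" turns into garbage: the same bytes mean different things depending on whether they are decoded as cp1252 or as UTF-8.

        import java.nio.charset.Charset;

        public class FilenameEncoding {
            public static void main(String[] args) {
                String name = "övrigt.txt";

                // A Windows client typically encodes the name with cp1252 (one byte for 'ö')...
                byte[] cp1252Bytes = name.getBytes(Charset.forName("windows-1252"));
                // ...and a UTF-8 Linux system decoding those bytes sees an invalid sequence.
                System.out.println(new String(cp1252Bytes, Charset.forName("UTF-8")));

                // The other direction: UTF-8 bytes read as cp1252 turn one character into two.
                byte[] utf8Bytes = name.getBytes(Charset.forName("UTF-8"));
                System.out.println(new String(utf8Bytes, Charset.forName("windows-1252")));
            }
        }

    The first println shows why the file turns up as "?vrigt.txt" on the UTF-8 side; the second shows the two-character mojibake ("Ã¶") you sometimes see going the other way.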

  • Questions about Lumia Line and Windows Phone 8

    hey, I am using a Nokia 701, a Symbian smartphone I love very much, but I would like to know more about the WP OS and how it fares against it. If you have answers for the questions below or, even better, if you own a Lumia 720, which I find interesting, please give your opinions.
    1. There are many GPS apps on Android which still haven't been ported to WP8. The Nokia 701 has, in its native browser, GPS functionality, so when I enter GPS-enabled websites I can share my location with them on my Nokia 701, and also using the Opera Mobile browser.
    Does the Lumia 720 have GPS-enabled BROWSERS?
    2. Second question - the thing I LOVE about Symbian is that WhatsApp is always connected and so messages are immediately delivered, faster than Android even, I believe.
    If you keep the internet connection ON on the WP device and a friend sitting next to you messages you over WhatsApp, how fast will it reach you,
    when the app is closed and when the app is in the background?
    And is there any difference between having the screen locked or unlocked?
    I hate it when there's a delay in receiving messages. It happens with Android and some iPhones.
    3. The Lumia 720 has 512 MB of RAM. Does it limit downloading of apps, meaning you simply can't download them because they can't work on the device?
    And do you feel LAGS with apps that should work with your 720?
    I'd love hearing all of your thoughts. This will help not only me but other users who read this in time to come and maybe even Nokia when they perform R & D .  

    rgkline4 wrote
    Phone also monitors your movements, usage, and sends to the owner -Microsoft. You don't own the phone or controls to privacy, Microsoft claims to do that for you. It also uploads your device contacts somewhere once you open an account in order to use the phone and the WP8.1 OS. THINK TWICE
    lol.. Try again, every app which wants to access location or other data needs to get permission from you to do so. Also you have the choice to allow the phone to send information to Microsoft during the first set up of the phone..
    Click on the blue Star Icon below if my advice has helped you or press the 'Accept As Solution' link if I solved your problem..

  • Which Mac Pro? More cores=slower speeds? And most of us know the speed matters or FPU for music and I don't understand the faster is for the least amount of procs. And while I get the whole rendering thing and why it makes sense.

    Which Mac Pro? More cores=slower speeds? And most of us know the speed matters or FPU for music and I don't understand the faster is for the least amount of procs. And while I get the whole rendering thing and why it makes sense.
    The above is what the bar says. It's been a while and wondered, maybe Apple changed the format for forums. Then got this nice big blank canvas to air my concerns. Went to school for Computer Science, BSEE, even worked at Analog Devices in Newton Massachusetts, where they make something for apple. 
    The bottom line is fast CPU = more FPU = more headroom and still can't figure out why the more cores= the slower it gets unless it's to get us in to a 6 core then come out with faster cores down the road or a newer Mac that uses the GPU. Also. Few. I'm the guy who said a few years ago Mac has an FCP that looks like iMovie on Steroids. Having said that I called the campus one day to ask them something and while I used to work for Apple, I think she thought I still did as she asked me, "HOW ARE THE 32 CORES/1DYE COMING ALONG? Not wanting to embarrass her I said fine, fine and then hung up.  Makes the most sense as I never quite got the 2,6,12 cores when for years everything from memory to CPU's have been, in sets of 2 to the 2nd power.  2,4,8,16,32,64,120,256,512, 1024, 2048,4196,8192, 72,768.  Wow. W-O-W and will be using whatever I get with Apollo Quad. 
    Peace to all and hope someone can point us in THE RIGHT DIRECTION.  THANK YOU

    Thanks for your reply via email/msg. He wrote:
    If you are interested in the actual design data for the Xeon processor, go to the Intel site and the actual CPU part numbers are:
    Xeon 4 core - E5-1620 v2
    Xeon 6 core - E5-1650 v2
    Xeon 8 core - E5-1680 v2
    Xeon 12 core - E5-2697 v2
    I read that the CPU is easy to swap out but am sure something goes wrong at a certain point - even if soldered on, they make material to absorb the solder, making your work area VERY clean.
    My question now is this: get an 8 core, then replace with 2 x 3.7 QUAD CHIPS - what would happen?
    I also noticed that the 8 core Mac Pro is 3.0 when in fact they do have a 3.4 8 core chip, so 2 = 16? Or, if correct, wouldn't you be able to replace a QUAD CHIP WITH THAT? I'M SURE THEY ARE UP TO SOMETHING AS 1) WE HAVE SEEN NO AUDIO FPU, OR PERHAPS I SHOULD CHECK OUT PC MAKERS' WINDOWS machines for SiSoft Sandra "B-E-N-C-H-M-A-R-K-S" -
    SOMETHING'S UP AND I AM SURE WE'LL ALL BE PLEASED, AS the Mac Pro was announced last year, barely made the December mark, then was pushed to January, then February and now April.
    Would rather wait and have it done correctly than released too early, only to have it benchmarked in audio and found to be slower in a few areas - - - the logical part of my brain is wondering what else I would have to swap out, as I am sure it would run, and fine for a while, then, poof....
    PEACE===AM SURE APPLE WILL BLOW US AWAY - they have to figure out how to increase the power for 150 watts or make the GPU work which in regard to FPU, I thought was NVIDIA?
