I18n and UTF8

Hello.
I have a problem with a web application that handles three languages (English, Spanish and Portuguese). The web pages are displayed correctly using the fmt:message tag library. However, one of the application's features is sending email notifications, and in the email text the characters that do not belong to the English alphabet are displayed incorrectly.
I have the text in a .properties file; this is the value in the file:
--- Por favor no responda a esta direcci\u00F3n de correo electr\u00F3nico, este mensaje es enviado autom\u00E1ticamente por el sistema ---
And this is how it appears in the email text:
--- Por favor no responda a esta direcci?n de correo electr?nico, este mensaje es enviado autom?ticamente por el sistema ---
The correct message should be:
--- Por favor no responda a esta dirección de correo electrónico, este mensaje es enviado automáticamente por el sistema ---
We are using the getMessage method of the org.springframework.context.MessageSource interface to resolve ("translate") the key to its value.
We have one development environment on a Windows server and one production environment on a Solaris server. The error occurs in the production environment but not in the development environment. Both environments run Tomcat 5.0.28. The development server has Java 1.4.2_14 and the production server 1.4.2_17; the Solaris version is 2.9, running on a Sun SPARC.
I think the problem could be in the Java configuration instead of the source code; maybe an environment variable or some Java system property should have a different value.
Thanks for the help you can give me... if you need more information, just let me know.
Regards
P.S.: I am not a good English writer; I did my best, hoping you can understand the problem.

carlos.bracho wrote:
If that was the problem, why is the message sent correctly in the Windows environment but not in the Solaris environment?

Perhaps in your Windows environment the something.properties file is encoded in ISO-8859-1 (as it is required to be), but in the Solaris environment it is encoded some other way? Although I would expect it to be correct on Solaris and wrong on Windows if that sort of problem existed. Have you done anything to test what data is being sent? Right now you have two steps:
1. Get data from properties
2. Send data via e-mail
and you are trying to test those two steps in a single test. Break them apart and test them separately.
Edit: and are both of those servers connecting to the same SMTP server, or is that different as well?
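To split those two steps, a minimal sketch along these lines might help (the class name, message key and wiring are made up; it assumes JavaMail and the existing Spring MessageSource): step 1 logs what the properties lookup actually returns, rendered as \uXXXX escapes so the log file's own encoding cannot disguise it, and step 2 sends the same string with the charset stated explicitly instead of relying on the platform default.

import java.util.Locale;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import org.springframework.context.MessageSource;

public class MailEncodingCheck
{
    // Step 1: log the resolved message as \uXXXX escapes.
    // If the ó/á come back as \u00F3/\u00E1 here, the lookup is fine and the mail step is the culprit.
    public static void logResolvedMessage(MessageSource messages, String key, Locale locale)
    {
        String text = messages.getMessage(key, null, locale);
        System.out.println(key + " -> " + toUnicodeEscapes(text));
    }

    public static String toUnicodeEscapes(String s)
    {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < s.length(); i++)
        {
            char c = s.charAt(i);
            if (c < 128)
            {
                sb.append(c);
            }
            else
            {
                String hex = Integer.toHexString(c).toUpperCase();
                while (hex.length() < 4)
                {
                    hex = "0" + hex;
                }
                sb.append("\\u").append(hex);
            }
        }
        return sb.toString();
    }

    // Step 2: send the same string with the charset stated explicitly,
    // so the platform default encoding of the JVM plays no part.
    public static void send(Session session, String to, String subject, String body) throws Exception
    {
        MimeMessage msg = new MimeMessage(session);
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        msg.setSubject(subject, "UTF-8");
        msg.setText(body, "UTF-8");
        Transport.send(msg);
    }
}

If the escapes come out as \u00F3 and \u00E1 on both servers, the properties lookup is fine and the difference lies in the mail step: the platform default encoding (file.encoding) differs between the Windows and Solaris JVMs, and JavaMail falls back to it unless a charset is given or the mail.mime.charset property is set. Also keep in mind that java.util.Properties reads .properties files as ISO-8859-1, so non-Latin characters must stay as \uXXXX escapes (native2ascii).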

Similar Messages

  • How to detect file encoding: ANSI, UTF8 and UTF8 without BOM

    Hi all,
    I am having a problem with detecting a .txt/.csv file's encoding. I need to detect whether a file is ANSI, UTF-8, or UTF-8 without a BOM, but ANSI and UTF-8 without a BOM come out the same. I checked the function below and saw that ANSI and UTF-8
    without a BOM are detected identically. So how can I detect a UTF-8 file that has no BOM? I need to handle this case in my code.
    Thanks.
    public Encoding GetFileEncoding(string srcFile)
    {
        // *** Use Default of Encoding.Default (Ansi CodePage)
        Encoding enc = Encoding.Default;
        // *** Detect byte order mark if any - otherwise assume default
        byte[] buffer = new byte[10];
        FileStream file = new FileStream(srcFile, FileMode.Open);
        file.Read(buffer, 0, 10);
        file.Close();
        if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
            enc = Encoding.UTF8;
        else if (buffer[0] == 0xfe && buffer[1] == 0xff)
            enc = Encoding.Unicode;
        else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
            enc = Encoding.UTF32;
        else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
            enc = Encoding.UTF7;
        else if (buffer[0] == 0xFE && buffer[1] == 0xFF)
            // 1201 unicodeFFFE Unicode (Big-Endian)
            enc = Encoding.GetEncoding(1201);
        else if (buffer[0] == 0xFF && buffer[1] == 0xFE)
            // 1200 utf-16 Unicode
            enc = Encoding.GetEncoding(1200);
        return enc;
    }

    What you want is to detect UTF-8 without a BOM, which can only be recognized when the file contains non-ASCII characters, so do the following:
    public Encoding GetFileEncoding(string srcFile)
    {
        // *** Use Default of Encoding.Default (Ansi CodePage)
        Encoding enc = Encoding.Default;
        // *** Detect byte order mark if any - otherwise assume default
        byte[] buffer = new byte[10];
        FileStream file = new FileStream(srcFile, FileMode.Open);
        file.Read(buffer, 0, 10);
        file.Close();
        if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
            enc = Encoding.UTF8;
        else if (buffer[0] == 0xfe && buffer[1] == 0xff)
            enc = Encoding.Unicode;
        else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
            enc = Encoding.UTF32;
        else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
            enc = Encoding.UTF7;
        else if (buffer[0] == 0xFE && buffer[1] == 0xFF)
            // 1201 unicodeFFFE Unicode (Big-Endian)
            enc = Encoding.GetEncoding(1201);
        else if (buffer[0] == 0xFF && buffer[1] == 0xFE)
            // 1200 utf-16 Unicode
            enc = Encoding.GetEncoding(1200);
        else if (ValidateUtf8WithoutBom(srcFile))
            enc = new UTF8Encoding(false);
        return enc;
    }
    private bool ValidateUtf8WithoutBom(string fileSource)
    {
        bool bReturn = false;
        string textUtf8 = "", textAnsi = "";
        // read the file as UTF-8
        StreamReader srFile = new StreamReader(fileSource);
        textUtf8 = srFile.ReadToEnd();
        srFile.Close();
        // read the file as ANSI
        srFile = new StreamReader(fileSource, Encoding.Default, false);
        textAnsi = srFile.ReadToEnd();
        srFile.Close();
        // if the file is really UTF-8 text with special characters, the ANSI read shows mojibake
        if (textAnsi.Contains("Ã") || textAnsi.Contains("±"))
            bReturn = true;
        return bReturn;
    }
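    For comparison, a rough Java equivalent of the BOM check above (just a sketch, in the language used by the other threads here; like the C# version, it cannot distinguish a BOM-less UTF-8 file from ANSI by the first bytes alone):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class BomSniffer
    {
        // Returns a charset name guessed from the byte order mark, or null when no BOM is present.
        public static String detectBom(String path) throws IOException
        {
            byte[] b = new byte[4];
            int n;
            FileInputStream in = new FileInputStream(path);
            try
            {
                n = in.read(b);
            }
            finally
            {
                in.close();
            }
            if (n >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF)
                return "UTF-8";
            if (n >= 4 && b[0] == 0 && b[1] == 0 && (b[2] & 0xFF) == 0xFE && (b[3] & 0xFF) == 0xFF)
                return "UTF-32BE";
            if (n >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF)
                return "UTF-16BE";
            if (n >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE)
                return "UTF-16LE";
            // No BOM: could be the ANSI code page or BOM-less UTF-8.
            return null;
        }
    }

    For the BOM-less case, the Java counterpart of the trick above is to try decoding the whole file with a strict UTF-8 CharsetDecoder (CodingErrorAction.REPORT): if it decodes without error and multi-byte sequences were seen, treat it as UTF-8, otherwise fall back to the platform code page.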

  • Report Script and UTF8 in 9.3.1

    Hi guys, I have an issue with report scripts and UTF8 in 9.3.1.
    Basically my problem is that if I run the RS from EAS, the output is a txt file in UTF8.
    If I run it through Esscmd, the output is an ANSI txt.
    I need this info to load a non-UTF cube.
    I use the English/Latin1 locale, the same settings I used in 9.2. In fact it is the same RS, but now it acts weird.
    I have this problem at 2 different clients using 9.3.1.
    Any suggestions?
    The Essbase servers are non-Unicode.

    Thanks. I know that they should export to the local format of the client. That is the weird part, because as I understand it, the output should be the same whether I run it through EAS or use Esscmd from the same machine.
    When I used MaxL it also exported in UTF8. I have only tried MaxL through EAS, so I don't know whether it would export in UTF8 if I ran it from the command line.
    We use non-Unicode basically because we had some problems with Planning and certain character sets in previous releases, and to keep it simple.

  • Discoverer Viewer and UTF8

    Dear friends
    I have a problem with Discoverer Viewer (version 9.0.2.39.02) and UTF8. My infrastructure is UTF8 and so is my App Server, but when I run a query in Discoverer Viewer (web) I see the table header in WE8MSWIN1252, so it is not readable! How can I change this NLS_LANG to UTF8, or how can I otherwise solve this problem?
    Thanks a lot.

    Hi,
    Were you able to find the solution to this problem? I have installed 9iAS successfully. I could log in to OEM and create a public connection, but I cannot go to
    http://hostname.domain:7779/discoverer/viewer
    I get an error "Internal Server Error".
    Any help is welcome
    Vani

  • Discussion Forum Portlet - Problems with JAVA and UTF8?

    Hi
    I installed the Discussion Forum Portlet successfully. It also seems that almost everything works fine. There's only a problem if I have new posts that include special German characters (Umlaute) like ä, ö, ü or special French characters like é, è or ç. They are saved correctly in the table but if you view the post the characters are not displayed correctly.
    Example:
    input: ça va?
    result: Ã§a va? (the UTF-8 bytes displayed as Latin-1)
    I know that there are problems with Java and a UTF8 database. Is there a possibility to fix this (bug)?
    Regards
    Mark

    Here's what I got. I don't see anything that helps but I'm kinda new to using SQL and java together.
    D:\javatemp\viddb>java videodb
    Problem with SQL java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Syntax error in CREATE TABLE statement.
    Driver Error Number -3551
    java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Syntax error in CREATE TABLE statement.
    at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
    at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
    at sun.jdbc.odbc.JdbcOdbc.SQLExecDirect(Unknown Source)
    at sun.jdbc.odbc.JdbcOdbcStatement.execute(Unknown Source)
    at sun.jdbc.odbc.JdbcOdbcStatement.executeUpdate(Unknown Source)
    at videodb.main(videodb.java:31)
    D:\javatemp\viddb>

  • AL32UTF8 and UTF8

    On Oracle 8i I was using the UTF8 character set. Now that I'm installing Oracle 9.2 I can find both AL32UTF8 and UTF8 (described as having no 4-byte characters).
    Which one should I use, and what is the equivalent of the 8i UTF8?
    Tal Olier (otal_mercury.co.il)

    The UTF8 character set is available in both 8i and 9i; it is the same character set in both.
    Oracle's UTF8 does not use 4-byte encodings: characters take between 1 and 3 bytes of storage, and supplementary characters are stored as two 3-byte surrogate values. AL32UTF8, new in 9i, follows the current UTF-8 standard, so supplementary characters take 4 bytes there; 9i also introduces AL16UTF16, where characters are stored in either 2 or 4 bytes, as a national character set.
    Justin

  • I18N and Preferences

    G'day folks,
    We are developing a client/server application and we are not having any problems (yet) with the ResourceBundle approach to locale handling.
    However, the Java Tutorial recommends using the Preferences API in place of Properties. As ResourceBundles are really just a convenience API wrapped around Properties, I was wondering what's the best approach for implementing I18N and L10N using Preferences.
    Has anybody tried this? Any pointers would be much appreciated. Ciao.

    I think that the preferences API may be a sensible approach to storing a user's preferred locale setting...however, it doesn't replace ResourceBundles for translated, localized resources. You can and should continue to use ResourceBundles for localized resources.
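    A rough sketch of that split, in case it helps (the class, key and bundle names are made up): Preferences keeps only the user's locale choice, while the translated strings stay in ResourceBundles.

    import java.util.Locale;
    import java.util.ResourceBundle;
    import java.util.prefs.Preferences;

    public class LocalePrefs
    {
        private static final String KEY = "preferredLocale";

        // Remember the user's choice (e.g. "es" or "pt_BR") in the Preferences store.
        public static void savePreferredLocale(Locale locale)
        {
            Preferences.userNodeForPackage(LocalePrefs.class).put(KEY, locale.toString());
        }

        // Fall back to the platform default when nothing has been stored yet.
        public static Locale loadPreferredLocale()
        {
            String stored = Preferences.userNodeForPackage(LocalePrefs.class)
                    .get(KEY, Locale.getDefault().toString());
            String[] parts = stored.split("_", 3);
            String language = parts[0];
            String country = parts.length > 1 ? parts[1] : "";
            String variant = parts.length > 2 ? parts[2] : "";
            return new Locale(language, country, variant);
        }

        // Localized resources still come from a ResourceBundle, exactly as before.
        public static ResourceBundle messages()
        {
            return ResourceBundle.getBundle("messages", loadPreferredLocale());
        }
    }

    Preferences.userNodeForPackage() stores small per-user values in a platform-specific backing store (typically the registry on Windows, files on Unix), which is exactly the kind of non-localizable setting it is meant for.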

  • ArchLinux and UTF8

    It is not the best place for this post, but since we have to start from the kernel...
    I tried to send an e-mail about this to the mailing list and received no answer, so let's try the forum.
    Basically I started to think about this since samba 3.0 (which uses UTF8 as default) came out.
    Looking at Arch Linux:
    the kernel NLS option is compiled with iso8859-1 (Western European),
    the kernel remote-filesystem NLS uses cp437 (United States),
    and there are no ISO 10646/UTF-8 fonts in the system.
    An interesting step for Arch would be moving to UTF8.
    Opinions?

    Basically it is this.
    Normally you can't display all the possible characters in the world.
    In fact you choose the "type of characters" you want to see/support on your system.
    For instance, if you use the iso8859-1 charset you can display accented letters like òàùèìéç, basically the ones you find in Western European languages (such as Italian, French, German, Spanish and Portuguese), but if you try to display something in Russian, Bulgarian, Romanian, etc., there are different kinds of letters and accents, and instead of the correct character you will probably see a question mark under Windows or a strange character under Linux. Imagine trying to read a Japanese document.
    This limitation on the number of characters was due to the necessity of using only 8 bits for the encoding, in order to save space in memory and on disk.
    Now modern computers have huge amounts of memory, and so UTF8 came out.
    Basically this standard provides an encoding for every character on the planet. By setting it on your computer and in your programs you are able to read any kind of document in any encoding. Of course not everybody needs it, but all systems (operating systems and programs) are moving to this standard anyway.

  • I18N and jar files

    We have a site that is fully i18n'd. My question is this: how can we separate the .properties files from the jar file so we do not have to update the jar file for a static text change?
    I've searched the forums and haven't been able to find a solution.
    Any help would be appreciated.
    Thanks, Joe

    You can put your properties files anywhere in the classpath.
    Most likely your classpath consists of
    - one or more jar files
    - one or more directories
    Just move the properties files from the jar file into one of the directories (in the proper subdirectory according to the package name, of course) and it should work.
    regards
    Spieler
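    As a concrete illustration (the paths and bundle name are made up): if the bundle is loaded as com.example.i18n.Messages, the files can live in an external directory that is placed on the classpath, and the lookup code does not change.

    /opt/app/resources/com/example/i18n/Messages.properties
    /opt/app/resources/com/example/i18n/Messages_es.properties

    java -classpath /opt/app/resources:myapp.jar com.example.Main

    // The code still loads the bundle by its package-qualified base name:
    ResourceBundle bundle = ResourceBundle.getBundle("com.example.i18n.Messages", locale);

    On Windows the classpath separator is ';' rather than ':'. In a web application the same effect is usually achieved by dropping the files under WEB-INF/classes, which the webapp classloader searches before the jars in WEB-INF/lib.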

  • Internationalisation ServletFilter and UTF8 Character-Encoding

    Hello to all
    I use a somewhat uncommon but nice way to internationalize my web application.
    I use a ServletFilter to replace the text keys, so static resources stay static resources that can be cached and don't need to be translated each time they are requested.
    But there is a little problem getting it to work with UTF-8.
    In my opinion there is only one way to read the response content: I have to use my own HttpServletResponseWrapper, as recommended under [http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Servlets8.html#82361].
    If I do so, it is no longer possible to use ServletOutputStream (wrapper.getOutputStream()) to write the modified/internationalized content (e.g. with German umlauts) back to the response in the right encoding.
    Writer out = new BufferedWriter(new OutputStreamWriter(wrapper.getOutputStream(), "UTF8"));
    Using a PrintWriter for writing does not work, because umlauts are sent in the wrong encoding (ISO-8859-1). With the network sniffer Wireshark I've seen that ü comes across as FC, that is, as an ISO-8859-1 encoded character.
    It obviously uses the platform's default encoding, although the documentation does not mention this explicitly for the constructor ([PrintWriter(java.io.Writer,boolean)|http://java.sun.com/j2se/1.4.2/docs/api/java/io/PrintWriter.html#PrintWriter(java.io.Writer,%20boolean)]).
    So my questions:
    1. Is there a way to get the response content without losing the option to call wrapper.getOutputStream()?
    or
    2. Can I set the encoding for my PrintWriter?
    or
    3. Can I encode the content before writing it to the PrintWriter, and will this solve the problem?
    new String(Charset.forName("UTF8").encode(content).array(), "UTF8") did not work.
    Here comes my code:
    The filter that translates the resources/response:
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import de.modima.util.lang.Language;
    public class TranslationFilter implements Filter
    {
         private static final Log log = LogFactory.getLog(TranslationFilter.class);
         public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
         {
              String lang = Language.setLanguage((HttpServletRequest) request);
              CharResponseWrapper wrapper = new CharResponseWrapper((HttpServletResponse) response, "UTF8");
              PrintWriter out = response.getWriter();
              chain.doFilter(request, wrapper);
              String content = wrapper.toString();
              content = Language.translateContent(content, lang);
              content += "                                                                                  ";
              wrapper.setContentLength(content.length());
              out.write(content);
              out.flush();
              out.close();
         }
         public void destroy(){}
         public void init(FilterConfig filterconfig) throws ServletException{}
    }
    The response wrapper to get access to the response content:
    import java.io.CharArrayWriter;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    public class CharResponseWrapper extends TypedSevletResponse
    {
         private static final Log log = LogFactory.getLog(CharResponseWrapper.class);
         private CharArrayWriter output;
         public String toString()
         {
              return output.toString();
         }
         public CharResponseWrapper(HttpServletResponse response, String charsetName)
         {
              super(response, charsetName);
              output = new CharArrayWriter();
         }
         public PrintWriter getWriter()
         {
              return new PrintWriter(output, true);
         }
    }
    The TypedSevletResponse that takes care of setting the right HTTP header information according to the given charset:
    import java.nio.charset.Charset;
    import java.util.StringTokenizer;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;
    public class TypedSevletResponse extends HttpServletResponseWrapper
    {
         private String type;
         private String charsetName;
         /**
          * @param response
          * @param charsetName the java or non-java name of the charset like utf-8
          */
         public TypedSevletResponse(HttpServletResponse response, String charsetName)
         {
              super(response);
              this.charsetName = charsetName;
         }
         public void setContentType(String type)
         {
              if (this.type == null && type != null)
              {
                   StringTokenizer st = new StringTokenizer(type, ";");
                   type = st.hasMoreTokens() ? st.nextToken() : "text/html";
                   type += "; charset=" + getCharset().name();
                   this.type = type;
              }
              getResponse().setContentType(this.type);
         }
         public String getContentType()
         {
              return type;
         }
         public String getCharacterEncoding()
         {
              try
              {
                   return getCharset().name();
              }
              catch (Exception e)
              {
                   return super.getCharacterEncoding();
              }
         }
         protected Charset getCharset()
         {
              return Charset.forName(charsetName);
         }
    }
    Some information about the environment:
    OS: Linux Debian 2.6.18-5-amd64
    Java: IBMJava2-amd64-142
    App server: JBoss 3.2.3
    Regards
    Markus Liebschner

    Hello cndvg
    yes I did.
    I found the solution in this forum at [Filter inconsistency Windows-Solaris?|http://forum.java.sun.com/thread.jspa?threadID=520067&messageID=2518948]
    You have to use your own implementation of ServletOutputStream.
    public class TypedServletOutputStream extends ServletOutputStream
    {
         CharArrayWriter buffer;
         public TypedServletOutputStream(CharArrayWriter aCharArrayWriter)
         {
              super();
              buffer = aCharArrayWriter;
         }
         public void write(int aInt)
         {
              buffer.write(aInt);
         }
    }
    Now the CharResponseWrapper looks like this:
    public class CharResponseWrapper extends TypedSevletResponse
    {
         private static final Log log = LogFactory.getLog(CharResponseWrapper.class);
         private CharArrayWriter output;
         public String toString()
         {
              return output.toString();
         }
         public CharResponseWrapper(HttpServletResponse response, String charsetName)
         {
              super(response, charsetName);
              output = new CharArrayWriter();
         }
         public PrintWriter getWriter() throws IOException
         {
              return new PrintWriter(output, true);
         }
         public ServletOutputStream getOutputStream()
         {
              return new TypedServletOutputStream(output);
         }
    }
    Regards
    MaLie
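    One detail worth double-checking in the filter above (a sketch only, not tested against that exact code): setContentLength() expects a byte count, so it seems safer to convert the translated string to bytes in the target charset first; a UTF-8 body containing umlauts has more bytes than characters.

    // Inside doFilter(), after translating the content:
    byte[] bytes = content.getBytes("UTF-8");
    response.setContentType("text/html; charset=UTF-8");
    response.setContentLength(bytes.length);
    response.getOutputStream().write(bytes);
    response.getOutputStream().flush();

    Writing the byte array through the output stream keeps the declared charset and the actual bytes in sync, and it should also remove the need for the trailing-spaces padding in the original doFilter().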

  • URL Encoding and UTF8

    Hello Friends,
    I have been working on this problem for some time now. I have a web page with form fields like first name, last name, etc., which posts the data back to a servlet that writes it into a file that is UTF-8 encoded. The web page has its charset specified as UTF-8, so I was assuming the data would be sent back in the URL as UTF-8. I entered some Japanese data copied from a Japanese web page. When I looked at the file into which the form data was written, I saw that it wasn't even close to the UTF-8 or Unicode encoding of the data I had posted. For example, I entered the Japanese character '\u4F1A', which is UTF-8 encoded as 'E4 BC 9A', but the data written into the file is '25 45 34 25 42 43 25 39 41'. This is of course because the data is URL-encoded as '%E4%BC%9A' (0x45 is the ASCII code for 'E', 0x34 for '4', and so on). Now the question is how I proceed to get the right UTF-8 data back. I am using a JRun server. Is there something I need to set here in order to get the right characters back?

    The UTF-8 encoded sequence of bytes of the character '\u4F1A' is {-E4, -BC, -9A}, and the character '-' is (char)0x2d instead of (char)0x25. And I suppose 25 should be 2d.
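    In a plain servlet the usual fix is along these lines (a sketch with made-up field and file names; setCharacterEncoding() has to run before the first getParameter() call):

    // In the servlet's doPost(), before reading any parameter:
    request.setCharacterEncoding("UTF-8");
    String firstName = request.getParameter("firstName");

    // When writing the file, state the encoding instead of using the platform default:
    Writer out = new OutputStreamWriter(new FileOutputStream("names.txt", true), "UTF-8");
    out.write(firstName);
    out.close();

    If the value still arrives percent-encoded, as the literal %E4%BC%9A in the file suggests, it was URL-encoded an extra time somewhere and can be turned back into characters with java.net.URLDecoder.decode(value, "UTF-8").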

  • Oracle9iDS and UTF8

    I develop Forms 9i on the Windows 2000 platform. When I connect to a US7ASCII character set database it works fine, but when I connect to a UTF8 database I get:
    FRM-92100: your connection to the server was interrupted
    Detail: Java Exception
    java.io.DataInputStream.readUTF(Unknown source)
    Any suggestions?

    What locale does your Forms Server use? If you run it using ASCII7, the data in your UTF8 DB might cause the error.
    Compile your forms with the appropriate NLS_LANG in force.
    Hope that helps.

  • ODBC and UTF8 charset

    Win7 x32
    Oracle XE 11g
    How to set a UTF8 charset?
    My script is connecting via PDO ODBC: Driver={Oracle in XE};Dbq=mydb;Uid=user;Pwd=psw;
    The registry NLS_LANG is set to AMERICAN_AMERICA.AL32UTF8
    Windows locale is set to US.
    When I insert the word "тест" it shows up as garbled (mojibake) text in the DB.

    Looks like you have a presentation problem. Version of SQL Developer used?
    I use v3.1.07, but actually there isn't a problem with the SQL Developer tool, because when I use it to view, insert or update the data it works correctly. The problem is when I try to insert or update (select is OK) from a PHP script. I update a table from a PHP script (which is UTF-8 encoded) using the PDO ODBC driver, but it saves the data in windows-1251. If I encode the PHP script in windows-1251, then it saves it in UTF-8. Is it possible that the ODBC driver converts the charset while inserting or updating?
    PS: Before all this I migrated all the data from a MySQL DB (which was UTF-8 encoded) to the Oracle DB using the SQL Developer tool. I suppose it was migrated without converting the charset.

  • I18N and dataprovider

    Hello,
    I'm trying to implement internationalization for an application. The language should be changeable at runtime, which is why I use the resourceManager functionality of Flex. My problem is that some of our app widgets, like combo boxes, are filled through a dataProvider. If I change the language, these widgets are not updated. Here is some sample code:
    <mx:ComboBox id="combo" dataProvider="{comboArray}"/>
    The dataProvider is initialised in the onApplicationComplete method of the view:
    comboArray = [{label:resourceManager.getString('language', 'form.value1.label'), data:"test1"},
    {label:resourceManager.getString('language', 'form.value2.label'), data:"test2"}];
    Can anybody give me a hint how to solve this issue, or best practices?
    Regards,

    Hi mwoodpecker,
    When you change the language of the application, try rebuilding the comboArray array and resetting the dataProvider for your ComboBox.
    This will resolve the problem.
    Thanks,
    Bhasker

  • I18N and netui-template:setAttribute ..

    Hi,
    In my JSP I am using the setAttribute tag as follows:
    <netui-template:setAttribute name="title" value="My Page title"></netui-template:setAttribute>
    I want to read the "value" attribute from an Application Resource, so I did the following:
    <netui-template:setAttribute name="title" value="<bean:message key="my.value.title"/>"></netui-template:setAttribute>
    But it does not compile; wondering if you can help.
    Thanks in advance
    Jaan

    You can also declare the bundle in your JSP when you don't use a page flow:
    JSP:
    <netui-data:declareBundle name="labels" bundlePath="bundle.test" />
    Submit Button:     <netui:button value="{bundle.labels.submit}"/>
    Cancel Button:     <netui:button value="{bundle.labels.cancel}"/>
    Bundle file:
    submit=Submit
    cancel=Reset
    Thomas Cook wrote:
    If you're using page flows you can do the following...
    In your page flow in the class comment:
    @jpf:message-resources resources="messages"
    In your JSP:
    <netui:label value="{bundle.default.nameLabel}"/>
    In your /WEB-INF/classes/messages.properties file:
    nameLabel=Name
    Alternately, if you're using the "key" attribute on your
    message-resources annotation you'd do the following...
    In your page flow in the class comment:
    @jpf:message-resources key="foo" resources="messages"
    In your JSP:
    <netui:label value="{bundle['foo/jpfDirectory'].nameLabel}"/>
    In your /WEB-INF/classes/messages.properties file:
    nameLabel=Name:
    Where your .jpf file is /jpfDirectory/Controller.jpf.
    I hope this helps.
    Thomas
    Claus Ljunggren wrote:
    Hi,
    I want to i18n my netui:button. Does it have to be this ugly?
    <i18n:getMessage messageName="labelCreateContact"
    id="createContact"/>
    <netui:button type="submit" value="<%=createContact%>"/>
    btw, workshop claims that it doesn't know the createContact inside <%=%>,
    but it works when run.
    TIA,
    Claus Ljunggren
