Replacing characters in a Unicode system

I now need to search all imported texts and replace this `` with a regular double-quote ". Now, I know how to do this in non-Unicode systems, but am totally lost when it comes to UC.
Could someone please help? TIA - Roman.
P.S. Never mind, I got it...

Using convert method from cl_abap_conv_in_ce solves the problem.
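For reference, a minimal sketch of that approach (the code point U+201C for a left curly quote is an assumption - use the code point of whatever character actually turned up in your imported texts):

DATA: lv_curly TYPE c LENGTH 1,
      lv_text  TYPE string.                                "one of the imported texts

lv_curly = cl_abap_conv_in_ce=>uccp( '201C' ).             "build the character from its code point
REPLACE ALL OCCURRENCES OF lv_curly IN lv_text WITH '"'.   "swap it for a plain double quote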

Similar Messages

  • Hexadecimal Non-Allowed Characters in a Unicode System

    We have a function module that we've written to replace non-permitted characters with a space in transfer rules. We see a lot of invisible hexadecimal characters coming in free-form text fields. This works great for English. However, we have a Unicode system with other languages installed, and we are also getting the hex characters in other character sets.
    Has anyone dealt with this issue, and if so, what was your solution?
    Thanks!
    Al

    Hello Al,
    how are you?
    We have faced a problem with hexadecimal characters, but not quite the same issue. In our case the problem was in the source system: in the DB tables we had some unwanted characters that were causing errors during data loading, particularly ERROR 18.
    So we resolved it by changing the source system data.
    I have already posted about the hexadecimal issue; the replies were:
    I think this is related to invalid-character issues or the allowed-characters setting in RSKC.
    Maybe you want to look at the following post:
    Re: invalid characters
    /people/siegfried.szameitat/blog/2005/07/18/text-infoobjects-part-1
    Example:
    Let us say:
    1. Check the allowed characters in RSKC.
    2. Add code in the update rule to restrict the characters the texts may contain (see the sketch after this reply).
    If !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ are the allowed characters in transaction RSKC, then anything outside this set is 'invalid', including lowercase letters, and will throw the hex error.
    Since the data comes from a database system, even a NULL value, which is not visible to the eye, can cause the failure.
    Hope this helps
    Best Regards....
    Sankar Kumar
    91 98403 47141
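    A minimal sketch of the kind of update-rule cleansing routine meant in step 2 above - the allowed-character list and the routine name are assumptions, so align them with what is actually maintained in RSKC:
    FORM clean_text CHANGING cv_text TYPE string.
      CONSTANTS lc_allowed TYPE string VALUE
        ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
      DATA: lv_len TYPE i,
            lv_off TYPE i.
      lv_len = strlen( cv_text ).
      DO lv_len TIMES.
        lv_off = sy-index - 1.
        IF cv_text+lv_off(1) CN lc_allowed.                      "character is not in the allowed set
          REPLACE SECTION OFFSET lv_off LENGTH 1 OF cv_text WITH ` `.
        ENDIF.
      ENDDO.
    ENDFORM.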

  • Printing of Chinese characters in an ECC 6 Unicode system

    Hi,
    We are having a problem printing debit memos that contain Chinese characters. In the print preview, the Chinese characters are displayed properly,
    but when they are printed out, the Chinese characters appear as "###". We are using device type "SAPWIN: Rel.4.x/SAPIpd 4.09+".
    When we use device type "CNSAPWIN", the Chinese characters are printed as "ÂíÀ´Î÷ÑÇ".
    I would appreciate it if anyone can tell me how to solve this problem.

    Hi,
    It is a printer issue. The printer needs to be configured to support the Chinese character set.
    When the Chinese characters are displayed properly in the print preview but not in the printed output, it is almost always a printer problem.
    We have faced this kind of problem many times.
    Thanks
    Ram

  • Sending special characters to non-unicode non-sap system: user exit

    Hello All,
    We are sending data from an SAP Unicode system to a non-SAP non-Unicode system via IDoc. The IDoc is the standard IDoc GLMAST, which contains G/L account information.
    Some text fields contained in this IDoc can contain special Polish characters. If they are sent unchanged to the non-Unicode system, they cause problems in the destination system.
    What we would like to do is build a user exit to convert these special characters, for example: 'é' becomes 'e', ...
    We used enhancement ALE00001, function EXIT_SAPLBD11_001, and implemented it, but it seems this exit is not called. Can this user exit be used for this functionality?
    We also tried changing the system type of the destination in SM59 to non-Unicode, so that SAP replaces those special characters by #. But then the IDoc gets the error '02: Codepage not found'. Note: the link to the external system is not set up yet, so no actual connection is possible. Is this why we receive this error, and will this approach work in the end with a non-SAP system?
    Thanks for helping.
    Kind Regards,
    Bart Pelsmaekers

    I faced this problem in many projects; I implemented the logic below.
      DATA: c_splchar(2)     VALUE '90',
            c_defaultchar(1) TYPE c VALUE '#',
            spl_char(1)      TYPE c,          "current character of the text
            spl_code(2).                      "its code as returned by the function module
    You have to pass the characters one by one to this function module:
          CALL FUNCTION 'URL_ASCII_CODE_GET'
            EXPORTING
              trans_char = spl_char
            IMPORTING
              char_code  = spl_code.
    All non-Unicode ones (better you check) always return a code greater than 90:
          IF spl_code GT c_splchar.
            MOVE c_defaultchar TO spl_char.   "replace the special character with '#'
          ENDIF.
    "Reward points if useful"
    Thanks,
    Narayan
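    For simple one-to-one substitutions there is also TRANSLATE ... USING, which reads its pattern as character pairs (the first character of each pair is replaced by the second). A rough sketch - the pair list below is only an example, extend it to the full set of Polish characters you need to map:
    DATA lv_text TYPE string.
    lv_text = 'Łódź café'.
    " pattern = pairs: Ł->L, ó->o, ź->z, é->e
    TRANSLATE lv_text USING 'ŁLóoźzée'.
    " lv_text is now 'Lodz cafe'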

  • PDF conversion for Chinese characters in a Unicode system

    I am facing a problem while converting SAP Script output to PDF format for Chinese characters.
    I am working on an ECC 5.0 Unicode system.
    Scenario:
    After saving a purchase order, an e-mail is sent to the customer with the
    PO output attached in PDF format. The e-mail was received successfully by the receiver, but when opening the PDF all the Chinese characters were displayed as junk characters, while all the English characters are displayed properly. I tried opening the PDF file in Acrobat Reader versions 6.0, 7.0 and 8.0, but with no result. I used the CONVERT_OTF function module for converting the OTF format to PDF format. I also tried using the font CNSONG.
    I tried executing the standard program RSTXPDFT4 to convert the spool to PDF. In the spool the Chinese characters are shown perfectly, but in the PDF they appear as junk.
    Can you please advise how to get Chinese characters displayed correctly in PDF in a Unicode system?
    Thanks in advance.

  • Replacement for Underscore - Hexadecimal '1F' in Unicode System

    Hi gurus,
    Can anyone provide the replacement for hexadecimal value '1F' in a Unicode system? Is there any table available?
    Regards,
    Lijo Joseph

    Hi,
    check the following statements. In a Unicode system you can no longer mix a field of type x (e.g. VALUE '1F') into character processing; use the attributes of cl_abap_char_utilities instead, or build the character from its code point with cl_abap_conv_in_ce=>uccp:
    data: v_tab(1)  type c,                "tab character (the old hex '09')
          v_var1(1) type c,                "character for code point 001F
          v_len(5)  type n.
    constants:
      c1(3) type c value '333'.
    data: begin of itab occurs 0,
            line(20) type c,
          end of itab.
    v_tab  = cl_abap_char_utilities=>horizontal_tab.
    v_var1 = cl_abap_conv_in_ce=>uccp( '001F' ).
    concatenate c1 v_tab c1 v_tab c1 v_tab c1 into itab-line.
    append itab.
    concatenate c1 v_var1 c1 into itab-line.
    append itab.
    describe field itab-line length v_len in character mode.
    Thanks & regards
    Sreenivasulu P

  • Unable to load German characters in NON Unicode Essbase Cube

    Hi Guys,
    This is what we want to do:
    Build a cube for Germany on our Essbase server in the US. Our users will access the cube using the Excel Add-In from Germany. But since the Essbase server is in the US, the system environment variable ESSLANG is set to English_UnitedStates.Latin1@Binary.
    The version of Essbase we are using is 7.1.3.
    What we tried & failed:
    To load German characters from our dimension build text file, we added the header //ESS_LOCALE German_Germany.Latin1@Default
    at the beginning of the dimension build text file, hoping the rule file would understand that the file contains German characters and load it correctly. Then, using EAS, I load the dimensions using the corresponding rule file.
    Essbase loads the dimensions correctly with no errors, but when it encounters German characters it replaces them with a question mark "?".
    Some of the German characters are: ß Ü ü Ö ö Ä ä Å Ä Ö
    Lastly, the reason we do not want to build a Unicode cube is that the Excel Add-In will not work with Unicode cubes.
    It's urgent. Please help.
    Thanks.

    The simple and easy way to check:
    non-Unicode character sets are no longer supported on a Unicode system - am I right?
    Transaction code I18N:
    select
    Troubleshooting --> Printing Test --> Smartforms --> Multiple Scripts, select your output device and see the print preview. It will display all supported characters.
    I guess the above information will be useful for closing the thread.
    Regards,
    SaiRam

  • Problem in replacing characters of a string ?

    Hello everybody,
    I want to replace a few characters with their corresponding unicode codepoint values.
    I have a userdefined method that gets the unicode codepoint value for a character.
    1. I want to know how to replace the characters and end up with the fully replaced string once the comparison in the for loop in my main method is over.
    Currently, I am able to replace, but I am not able to accumulate the replacements in a single variable.
    The output of the code is
    e\u3006ame
    ena\u3005e
    But the output I require is:
    e\u3006a\u3005e
    Please offer some help in this regard
    import java.io.*;
    class Read1 {
        public static void main(String s[]) {
            String rp, snd;
            String tmp = "ename";
            for (int i = 0; i < tmp.length(); i++) {
                snd = getCodepoint(tmp.charAt(i));
                if (snd != null) {
                    rp = replace(tmp, String.valueOf(tmp.charAt(i)), "\\u" + snd);
                    System.out.println(rp);
                }
            }
        }
        public static String replace(String source, String pattern, String replace) {
            if (source != null) {
                final int len = pattern.length();
                StringBuffer sb = new StringBuffer();
                int found = -1;
                int start = 0;
                while ((found = source.indexOf(pattern, start)) != -1) {
                    sb.append(source.substring(start, found));
                    sb.append(replace);
                    start = found + len;
                }
                sb.append(source.substring(start));
                return sb.toString();
            } else return "";
        }
    }
    Any help in this regard would be useful.
    Thanks
    khurram

    This manual replacement thingy reminds me of quite an old technique, from the days when
    64KB of memory was considered enough for 20 users (at the same time, that is!)
    Suppose you have a buffer of, say, n characters. Starting at location i, a region
    of chars has to be swapped with the region starting at location j >= i+l_i; the lengths
    of the two regions are l_i and l_j respectively.
    Suppose the following method is available:
    public void reverse(char[] buffer, int f, int l_f) {
       for (int t = f + l_f; --t > f; f++) {
          char tmp = buffer[f]; buffer[f] = buffer[t]; buffer[t] = tmp;
       }
    }
    i.e. the above method reverses a region of characters, starting at position f
    with length l_f. Given this simple method, the original problem can be solved
    using the following simple sequence:
    reverse(buffer, i, j+l_j-i);
    reverse(buffer, i, l_j);
    reverse(buffer, i+l_j, j-i-l_i);
    reverse(buffer, j+l_j-l_i, l_i);
    Of course, when replacing characters we don't need the last reversal.
    kind regards,
    Jos (dinosaurus)

  • Problem publishing database contents from non-unicode to unicode system

    Hello everyone!
    We just set up a new SAP WAS based on NetWeaver 2004 as a Unicode system. Our problem now is that we have a content management system on our non-Unicode system, and we publish its contents via RFC to the Unicode WAS to display them online. The contents are stored in our own database tables.
    The problem is that many texts pasted from Microsoft Word contain special characters like bullets, long dashes or low-9 quotation marks, which are not displayed correctly in the Unicode system / on the website. We already found out that it has something to do with the codepage. The SAP notes say we should use 1160 instead of 1100 and that transaction SPUMG would be helpful, but we are not able to select any tables there.
    So now we do not know exactly what to do. Do we have to change something in our non-Unicode system, or do we have to do the conversion in our Unicode system? And what happens if content containing special Microsoft Word characters is published after the SPUMG conversion - do we have to do this frequently?
    We would be glad if anyone could help.
    Thanks a lot!

    Hi Martin,
    thanks for your quick answer.
    You got me right. We have a local non-Unicode SAP HCM NetWeaver 2004 system running a self-developed web-based content management system / wiki. The texts entered in the BSP application are stored in a string field in our database table. Currently we publish the contents to a WAS 6.20 non-Unicode system with the same database tables to provide the content via BSP to the public. Everything works fine, including the special characters.
    Now we want to replace the WAS 6.20 non-Unicode system with a new WAS 7.0/2004 Unicode system. But when publishing the contents via the same RFC function module to the new system, the special characters seem to be damaged. We are not able to replace them with ABAP commands, and when they are displayed on the website we only see "boxes".
    If I understand you correctly, we have to run SPUMG on our NW 2004 non-Unicode productive HCM system, right? But isn't there a danger of damaging existing contents?
    Best regards,
    Stefan
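    A rough sketch of what the codepage hint means in ABAP terms, under the assumption that you can get hold of the original byte values of a damaged text (for example by transferring it as an XSTRING): re-interpreting the bytes with the Microsoft codepage 1160 instead of 1100 recovers the Word characters (bullets, dashes, low-9 quotes). The variable names are placeholders:
    DATA: lv_bytes TYPE xstring,                 "raw bytes as stored in the non-Unicode system
          lv_text  TYPE string,
          lo_conv  TYPE REF TO cl_abap_conv_in_ce.
    lo_conv = cl_abap_conv_in_ce=>create( encoding = '1160'      "Windows-1252 ('1160') instead of '1100'
                                          input    = lv_bytes ).
    lo_conv->read( IMPORTING data = lv_text ).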

  • Translate string using hex(0020) in Unicode system.

    Hello all,
    We are facing a problem with the "translate" statement in the Unicode system.
    The original statement goes as follows:
    TRANSLATE BULOG USING WS_STRING1.
    Here  BULOG is a structure and ws_string1 is declared as follows:
    DATA : ws_string1(2) type x value '0020'.
    In the new Unicode-enabled system, the above statement removes all the '#' characters placed in the structure.
    We tried the following statement instead of the original TRANSLATE statement.
    TRANSLATE BULOG USING SPACE.
    But this statement leaves the '#' unchanged.
    We have already tried the REPLACE statement after converting the structure contents into a string.
    We have also tried the convert methods of the classes CL_ABAP_CONV_IN_CE and CL_ABAP_CONV_OUT_CE.
    Hoping to receive a fast response.
    Thanks in advance.
    Zankruti.

    You might want to read ABAP Help on TRANSLATE:
    Addition 2
    ... USING  pattern
    Effect
    If you specify USING, the characters in text are converted according to the rule specified in pattern.
    pattern must be a character-type data object whose contents are interpreted as a sequence of
    character pairs.
    Your option 1 is not working because type X is a byte field; technically it is not a character type. Your option 2 is not working because the pattern must be a sequence of character pairs (you had just SPACE).
    In option 1, just change the definition from X to C or STRING.
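    A sketch of option 1 with the pattern changed to a character type. The pair is built at runtime from the code point, under the assumption that the intent of the old X'0020' pattern was "replace the NUL character with a space":
    DATA ws_string1(2) TYPE c.
    ws_string1+0(1) = cl_abap_conv_in_ce=>uccp( '0000' ).   "character to be replaced (NUL)
    ws_string1+1(1) = space.                                "its replacement
    TRANSLATE bulog USING ws_string1.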

  • ASCII characters to Character ( Unicode Conversion)

    In a non-Unicode system, I see logic where any value from '00' to '1F' is replaced with space in a variable of type X.
    In a Unicode system, how can this be done, since type X is no longer valid in character processing?
    I am planning to convert the variable to type C, but if I do, how do I replace those characters from 00 to 1F in the Unicode system?

    DATA cr TYPE c LENGTH 1.
    CALL METHOD cl_abap_conv_in_ce=>uccp
      EXPORTING
        uccp = '000D'
      RECEIVING
        char = cr.
    UCCP converts a Unicode code point (hex representation) into a character:
    http://wiki.sdn.sap.com/wiki/display/ABAP/CL_ABAP_CONV_IN_CE
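    Building on UCCP: once the data is in a character-like field, the whole 00-1F range can be blanked out in one pass with a regular expression. A sketch - note that [[:cntrl:]] also matches a few further control characters such as U+007F, so restrict the pattern if that matters:
    DATA lv_text TYPE string.
    " lv_text holds the character version of the former TYPE X variable
    REPLACE ALL OCCURRENCES OF REGEX '[[:cntrl:]]' IN lv_text WITH ` `.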

  • Problem replacing characters

    Hi,
    I'm trying to replace a # with a £ in a Unix environment using String.replace, but Unix won't recognise the £. It just prints out a ? if I print to screen, or a \243 if I print to a file. If I move this file back to Windows, it contains Â£. I'm very confused. Please help.
    Thanks.
    Oh, here's a copy of the test code of where I'm up to at the moment after a lot of messing about. Feel free to disregard this code if I'm barking completely up the wrong tree.
        public static void main(String[] args) throws Exception {
            StringBuffer stb = new StringBuffer("#1234");
            int indx = -1;
            while ((indx = stb.toString().indexOf("#")) != -1) {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                baos.write(new String("£").getBytes("UTF-8"));
                String st = baos.toString("UTF-8");
                System.out.println("st = " + st);
                stb.replace(indx, indx + "#".length(), st);
            }
            System.out.println("buffer: " + stb.toString());
            //this is just here to see what happens if I write
            //the £ to a file
            FileOutputStream fos = new FileOutputStream("file.txt");
            fos.write(new String("£").getBytes("UTF-8"));
            fos.flush();
            fos.close();
        }

    For character replacement, it's more efficient to use:
    str = str.replace('#', '£'); // where str is a String
    You don't need UTF-8 encoding if the Unicode characters are in the ANSI (Latin-1) range: U+0000 - U+00FF. Just use the default encoding. When you view the file, make sure you select a font that has the glyph for that character. On Windows, Tahoma, Courier New, or Times New Roman would suffice.
    For characters beyond that range, UTF-8 comes into play. Use OutputStreamWriter(fos, "UTF8") when writing to the file. Check http://java.sun.com/docs/books/tutorial/i18n/text/stream.html for usage. Win9x/Me Notepad can't read UTF-8 encoded files, but WinNT/2K/XP can.

  • GUI_DOWNLOAD problem in unicode system

    Hi Gurus,
    I am facing one problem with GUI_DOWNLOAD. We are doing Unicode remediation on one report. In the program, one internal table is declared as type C with length 255, and data is filled into the internal table by importing it from a cluster. After that, this internal table is used by the WS_DOWNLOAD function module with file type 'BIN' to download it into a Word .doc file. We replaced the function module with GUI_DOWNLOAD. It works fine in the non-Unicode system, but it does not download properly in the Unicode system.
    I am unable to find the cause. I tried passing different codepages at runtime, but that does not solve my problem.
    << Moderator message - Everyone's problem is important. Please do not ask for help quickly. >>
    Thanks & Regards,
    Sastry R
    Edited by: Rob Burbank on Dec 13, 2010 9:39 AM

    Hi Clemens.
    I replaced the ws_download function module with gui_download.
    Here is my code.
    Earlier (before 6.0) the code was as follows:
    CALL FUNCTION 'WS_DOWNLOAD'
       EXPORTING
         bin_filesize            = data_len
         filename                = p_file
         filetype                = 'BIN'
       TABLES
         data_tab                = data_tab
       EXCEPTIONS
         file_open_error         = 1
         file_write_error        = 2
         invalid_filesize        = 3
         invalid_table_width     = 4
         invalid_type            = 5
         no_batch                = 6
         unknown_error           = 7
         gui_refuse_filetransfer = 8
         OTHERS                  = 9.
    IF sy-subrc <> 0 AND no_error_dlg = space.
       MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
    ENDIF.
    I replaced the above with the following code:
      DATA:lv_fname TYPE string,
           lv_ftype(10) VALUE 'BIN',
           lv_codepage type abap_encod VALUE '4102'.
    CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          bin_filesize            = data_len
          filename                = lv_fname
          filetype                = lv_ftype
          codepage                = lv_codepage
        CHANGING
          data_tab                = data_tab
        EXCEPTIONS
          file_write_error        = 1
          no_batch                = 2
          gui_refuse_filetransfer = 3
          invalid_type            = 4
          no_authority            = 5
          unknown_error           = 6
          header_not_allowed      = 7
          separator_not_allowed   = 8
          filesize_not_allowed    = 9
          header_too_long         = 10
          dp_error_create         = 11
          dp_error_send           = 12
          dp_error_write          = 13
          unknown_dp_error        = 14
          access_denied           = 15
          dp_out_of_memory        = 16
          disk_full               = 17
          dp_timeout              = 18
          file_not_found          = 19
          dataprovider_exception  = 20
          control_flush_error     = 21
          not_supported_by_gui    = 22
          error_no_gui            = 23
          OTHERS                  = 24.
      IF sy-subrc <> 0 AND no_error_dlg = space.
        MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
      ENDIF.
    I also tried all the other codepages, like 4110/4103/1110/1100/1102. It is not working.
    It gives a problem in the Unicode system: the file is downloaded, but not properly,
    and when I open the Word file it asks me to select an encoding type to make the document readable, along with the available text encodings.
    Please help me.
    Thanks & Regards,
    Sastry R
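    Not a confirmed fix for this thread, but one direction worth trying: if the cluster data is really single-byte text that the old .doc file expects, convert it explicitly to a byte stream with the matching codepage and download that as 'BIN', instead of letting GUI_DOWNLOAD convert the character table. A rough sketch - the target codepage '1100' and the SOLIX table are assumptions, while data_tab and lv_fname are the variables from the code above:
    DATA: lo_out    TYPE REF TO cl_abap_conv_out_ce,
          lv_xstr   TYPE xstring,
          lt_binary TYPE TABLE OF solix,
          lv_size   TYPE i,
          lv_line   TYPE c LENGTH 255.
    lo_out = cl_abap_conv_out_ce=>create( encoding = '1100' ).   "target codepage (assumption)
    LOOP AT data_tab INTO lv_line.
      lo_out->write( data = lv_line ).                           "characters -> bytes
    ENDLOOP.
    lv_xstr = lo_out->get_buffer( ).
    lv_size = xstrlen( lv_xstr ).
    CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'                       "xstring -> raw table for the download
      EXPORTING
        buffer     = lv_xstr
      TABLES
        binary_tab = lt_binary.
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        bin_filesize = lv_size
        filename     = lv_fname
        filetype     = 'BIN'
      CHANGING
        data_tab     = lt_binary.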

  • GUI_DOWNLOAD problems with CR+LF when transfering from unicode system

    Hi,
    I have successfully used FM GUI_DOWNLOAD in non-Unicode systems for years. Lately I faced the challenge of rewriting my code for a Unicode system. The configuration is:
    - SAP R/3 Unicode system;
    - data to be downloaded to the presentation server in a non-Unicode codepage (cp 9504).
    I have successfully used the GUI_DOWNLOAD parameter CODEPAGE and the data is translated correctly when checking the local file, but for some reason CR+LF is replaced with '#' (which is the default value of the REPLACEMENT parameter of this function) - meaning that at the end of each row I get '##' instead of CR+LF.
    My question is: how can I force the correct behaviour of GUI_DOWNLOAD in order to get my output file on the presentation server with CR+LF?
    Any help would be highly appreciated.
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev
    SAP/ABAP consultant
    VBS Ltd.
    P.S. In order to find some other way to fix my problem I'm still playing with an instance of CL_ABAP_CONV_OBJ and its methods create & convert, but without success so far - the resulting strings are not as expected.

    Hi,
    in fact, I never placed CR+LF in my lines before your suggestion. The rest was done by the FM GUI_DOWNLOAD. It works fine even when I use a Unicode file as output - meaning I get my CR+LF at the end of each record in MY OUTPUT FILE ONLY, but not in my internal table - I never placed CR+LF in there.
    The problem occurs when I try to use GUI_DOWNLOAD with parameter CODEPAGE = '9504' (a non-Unicode codepage) while the original data (my internal table) is in Unicode. Then (in my opinion) this function doesn't translate the Unicode CR+LF into the non-Unicode ones (if that is possible at all, I can't be sure), and the result is '##' in the output file.
    I checked the value of CL_ABAP_CHAR_UTILITIES=>CR_LF by assigning it to a variable - and it is '##'.
    What should I put into this class attribute in order to get it working in this scenario? I have no idea...
    The attribute type is ABAP_CR_LF, which is char 2.
    What next?
    Thanks,
    Ivaylo
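    One small point on the '##' observation: that is only how the two control characters are displayed - the attribute really does contain CR and LF, so there is nothing to "put into" it. A quick way to convince yourself (sketch; UTF-8 is chosen arbitrarily as the target encoding):
    DATA: lo_out  TYPE REF TO cl_abap_conv_out_ce,
          lv_xstr TYPE xstring.
    lo_out = cl_abap_conv_out_ce=>create( encoding = '4110' ).   "UTF-8
    lo_out->convert( EXPORTING data   = cl_abap_char_utilities=>cr_lf
                     IMPORTING buffer = lv_xstr ).
    " lv_xstr now contains 0D0A, i.e. carriage return + line feed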

  • Create PDF with CONVERT_OTF in Unicode system

    Hi,
    I am trying to create a PDF file from OTF input with function module CONVERT_OTF.
    It worked in a non-Unicode environment without problems.
    If I use the same coding in a Unicode system (6.0) and the OTF input
    contains real double-byte Unicode characters such as Arabic/Greek characters,
    the PDF file shows wrong characters like "ÔÑÙàÜÐÕÞÙ Ôâé" instead of
    "רבדל הצור %לועה" or "ЉЊЋЌЎЏАБВГόψχΩΨΧ".
    I enabled the developer trace for CONVERT_OTF, but the trace
    shows the correct Unicode characters.
    If I create a PDF file with PDF Creator from the same input on my computer,
    the PDF file looks fine. The SAP PDF file uses font encoding 'Windows'; PDF Creator
    uses a 'Custom' font encoding.
    Any idea what is going wrong here?
    I installed TrueType fonts as described in SAP note 999712 with no success,
    but this note refers to SAP_BASIS 011, which is not yet available on SAP Marketplace
    (the latest is 010 today).
    Is there any other configuration to enable PDF unicode support?
    Print preview from other SAP transactions looks fine with unicode characters.
    thanks for help
    /Tibor

    >
    Juraj Danko wrote:
    > Hi,
    > I have similar problem than you ... how have you solved it?
    > thanks
    > Juraj
    I found a solution, but I am not sure whether it was for this problem or
    for an output problem with, for example, Polish in non-Unicode systems.
    I created the input for CONVERT_OTF with CALL FUNCTION 'PRINT_TEXT'.
    PRINT_TEXT has to be called with DEVICE = 'PRINTER';
    DEVICE = 'ABAP' internally uses the wrong code page.
    You also have to set otf_options-tdprinter to a valid printer;
    if it is empty, the default printer from the user settings is used.
    You can use the code example from SAP note 413295.
    Before you call CONVERT_OTF, you can also check the entries with 'FC' in the OTF input.
    The font (see the description of the OTF format in the SAP Help) must be set as described in SAP note 144718.
    /Tibor
    Edited by: Tibor Gerke on Jan 13, 2011 10:29 AM
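    For reference, the shape of the CONVERT_OTF call the reply refers to - a minimal sketch, with lt_otf assumed to be the OTF data returned by PRINT_TEXT (DEVICE = 'PRINTER') and most optional parameters omitted:
    DATA: lt_otf     TYPE TABLE OF itcoo,   "OTF input, e.g. from PRINT_TEXT
          lt_pdf     TYPE TABLE OF tline,   "PDF result in line format
          lv_pdfsize TYPE i.
    CALL FUNCTION 'CONVERT_OTF'
      EXPORTING
        format                = 'PDF'
      IMPORTING
        bin_filesize          = lv_pdfsize
      TABLES
        otf                   = lt_otf
        lines                 = lt_pdf
      EXCEPTIONS
        err_max_linewidth     = 1
        err_format            = 2
        err_conv_not_possible = 3
        OTHERS                = 4.
    IF sy-subrc <> 0.
      "handle the conversion error, e.g. write the OTF to the spool for analysis
    ENDIF.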
