Issue with language-specific characters combined with AD logon to the BO platform and client tools

We are using SSO via Windows AD to log on to the BO Launch Pad. Generally this works, which means no manual logon is needed for the Launch Pad. But it does not work for users who have language-specific letters in their AD name (e.g. öäüéèê...).
What we have tried so far:
If the AD user name is Test-BÖ, the logon works with the user name Test-BO and logon type AD.
If the logon type "SAP" is used, then it is possible to use the name Test-BÖ as the user name.
Generally it is no problem in AD to use language-specific letters (i.e. it is possible to log on to Windows with the user Test-BÖ).
It is possible to read the AD attributes from the BO side and add them to the user: in the user attributes the AD name Test-BÖ is shown via automatic import from AD. So the problem is not that the character does not reach BO.
I have opened a ticket about this. SAP 1st-level support tells me that this is not a BO problem; they say it is a problem of Tomcat. I don't believe that, because the logon with authentication type SAP works.
I have set up the same combination (AD user Test-BÖ with SAP user Test-BÖ) as single sign-on authentication in SAP BW, and there it works without problems.
Which leads me to the conclusion: it is not a problem of AD. It is something connected to the BO platform, but only in combination with logon type AD, because the SAP logon works with language-specific characters.

I have found this article from BO support:
You cannot add a user name or an object name that only differs by a character with a diacritic mark
Basically this means AD stores the country-specific letters as a base letter internally, so if you have created a user with a country-specific letter in the name, you can also log on to Windows with the base letter.
SAP GUI and Windows may be replacing the country-specific letters with the base letter, which would explain why SSO works there. BO does not seem to be able to do that. So far the BO supporter keeps telling me that this is not a BO problem.
It seems to be magic that the colleagues of SAP GUI are able to do it.
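As an illustration of the base-letter folding described above (this is only a sketch of the general technique, not how AD, SAP GUI, or BO actually implement their mapping), Java can reduce a name like Test-BÖ to its base letters via Unicode normalization:

```java
import java.text.Normalizer;

public class BaseLetterDemo {

    // Decompose each character into base letter + combining marks (NFD),
    // then strip the combining marks: Ö -> O, é -> e, ü -> u.
    // Caveat: letters without a decomposition (e.g. ß) are left unchanged.
    static String toBaseLetters(String name) {
        String decomposed = Normalizer.normalize(name, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}+", "");
    }

    public static void main(String[] args) {
        System.out.println(toBaseLetters("Test-BÖ")); // prints Test-BO
    }
}
```

If SAP GUI does something along these lines before handing the name to the backend, that would explain why the SAP logon accepts Test-BÖ while the AD logon path does not.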

Similar Messages

  • Problem with language specific characters on e-mail sending

    Hi,
    Problem with language specific characters on e-mail sending.
    How can it be fixed?
    Thanks.

    Hi,
    try to work with the character code set UTF-8 or UTF-16. You can define this in the HTML.
    Or encode the characters using JavaScript.
    Hope this may help you.
    Deepak!!!

  • Issues with language-specific characters and Multi Lexer

    I want to create a text index with a global lexer and different languages. But how do I create the index so that it satisfies all languages?
    Oracle EE 10.2.0.4 (UTF8) on Solaris 10
    1.) Create a global lexer with German as default and Czech, Turkish as additional languages.
    begin
         ctx_ddl.drop_preference('global_lexer');
         ctx_ddl.drop_preference('german_lexer');
         ctx_ddl.drop_preference('turkish_lexer');
         ctx_ddl.drop_preference('czech_lexer');
    end;
    /
    begin
         ctx_ddl.create_preference('german_lexer','basic_lexer');
         ctx_ddl.create_preference('turkish_lexer','basic_lexer');
         ctx_ddl.create_preference('czech_lexer','basic_lexer');
         ctx_ddl.create_preference('global_lexer', 'multi_lexer');
    end;
    /
    begin
         ctx_ddl.set_attribute('german_lexer','composite','german');
         ctx_ddl.set_attribute('german_lexer','mixed_case','no');
         ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
         ctx_ddl.set_attribute('german_lexer','base_letter','yes');
         ctx_ddl.set_attribute('german_lexer','base_letter_type','specific');
         ctx_ddl.set_attribute('german_lexer','printjoins','_');
         ctx_ddl.set_attribute('czech_lexer','mixed_case','no');
         ctx_ddl.set_attribute('czech_lexer','base_letter','yes');
         ctx_ddl.set_attribute('czech_lexer','base_letter_type','specific');
         ctx_ddl.set_attribute('czech_lexer','printjoins','_');
         ctx_ddl.set_attribute('turkish_lexer','mixed_case','no');
         ctx_ddl.set_attribute('turkish_lexer','base_letter','yes');
         ctx_ddl.set_attribute('turkish_lexer','base_letter_type','specific');
         ctx_ddl.set_attribute('turkish_lexer','printjoins','_');
         ctx_ddl.add_sub_lexer('global_lexer', 'default', 'german_lexer');
         ctx_ddl.add_sub_lexer('global_lexer', 'czech',   'czech_lexer',   'CZH');
         ctx_ddl.add_sub_lexer('global_lexer', 'turkish', 'turkish_lexer', 'TRH');
    end;
    /
    2.) Create table and insert data
    drop table text_search;
    create table text_search (
         lang   varchar2(5)
       , name   varchar2(100)
    );
    insert into text_search(lang, name) values ('DEH', 'Strauß');
    insert into text_search(lang, name) values ('DEH', 'Möllbäck');
    insert into text_search(lang, name) values ('TRH', 'Öğem');
    insert into text_search(lang, name) values ('TRH', 'Öger');
    insert into text_search(lang, name) values ('CZH', 'Tomáš');
    insert into text_search(lang, name) values ('CZH', 'Černínová');
    commit;

    3.) The index creation now produces different results depending on the language settings:
    -- *Option A)*
    alter session set nls_language=german;
    drop index i_text_search;
    create index i_text_search on text_search (name)
       indextype is ctxsys.context
            parameters ('
                    section group CTXSYS.AUTO_SECTION_GROUP
                    lexer global_lexer language column lang
                    memory 300000000'
            );
    select * from dr$i_text_search$I;
    -- *Option B)*
    alter session set nls_language=turkish;
    drop index i_text_search;
    create index i_text_search on text_search (name)
       indextype is ctxsys.context
            parameters ('
                    section group CTXSYS.AUTO_SECTION_GROUP
                    lexer global_lexer language column lang
                    memory 300000000'
            );
    select * from dr$i_text_search$I;
    -- *Option C)*
    alter session set nls_language=czech;
    drop index i_text_search;
    create index i_text_search on text_search (name)
       indextype is ctxsys.context
            parameters ('
                    section group CTXSYS.AUTO_SECTION_GROUP
                    lexer global_lexer language column lang
                    memory 300000000'
            );
    select * from dr$i_text_search$I;

    And now I get different results:
    Option A)
    dr$i_text_search$I with nls_language=german:
    STRAUß
    STRAUSS
    MOLLBACK
    OĞEM
    OGER
    TOMAŠ
    ČERNINOVA
    Problems, e.g.:
    A Turkish client now does not find his data (the select returns 0 rows):
    alter session set nls_language=turkish;
    select * from text_search
    where contains (name, 'Öğem') > 0;
    Option B)
    dr$i_text_search$I with nls_language=turkish:
    STRAUß
    STRAUSS
    MÖLLBACK
    ÖĞEM
    ÖGER
    TOMAŠ
    ČERNINOVA
    Problems, e.g.:
    A Czech client now does not find his data (the select returns 0 rows):
    alter session set nls_language=czech;
    select * from text_search
    where contains (name, 'Černínová') > 0;
    Option C)
    dr$i_text_search$I with nls_language=czech:
    STRAUß
    STRAUSS
    MOLLBACK
    OĞEM
    OGER
    TOMAS
    CERNINOVA
    Problems, e.g.:
    A Turkish client now does not find his data (the select returns 0 rows):
    alter session set nls_language=turkish;
    select * from text_search
    where contains (name, 'Öğem') > 0;
    ----> How can these problems be avoided? What am I doing wrong?

    You need to change your base_letter_type from specific to generic. Also, if you are going to use both alternate_spelling and base_letter in your german_lexer, then you might want to set override_base_letter to true. Please see the run of your code below, with those changes applied. The special characters got mangled in my spool file, but hopefully you get the idea.
    SCOTT@orcl_11gR2> begin
      2            ctx_ddl.drop_preference('global_lexer');
      3            ctx_ddl.drop_preference('german_lexer');
      4            ctx_ddl.drop_preference('turkish_lexer');
      5            ctx_ddl.drop_preference('czech_lexer');
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> begin
      2            ctx_ddl.create_preference('german_lexer','basic_lexer');
      3            ctx_ddl.create_preference('turkish_lexer','basic_lexer');
      4            ctx_ddl.create_preference('czech_lexer','basic_lexer');
      5            ctx_ddl.create_preference('global_lexer', 'multi_lexer');
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> begin
      2            ctx_ddl.set_attribute('german_lexer','composite','german');
      3            ctx_ddl.set_attribute('german_lexer','mixed_case','no');
      4            ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
      5            ctx_ddl.set_attribute('german_lexer','base_letter','yes');
      6            ctx_ddl.set_attribute('german_lexer','base_letter_type','generic');
      7            ctx_ddl.set_attribute('german_lexer','override_base_letter', 'true');
      8            ctx_ddl.set_attribute('german_lexer','printjoins','_');
      9 
    10            ctx_ddl.set_attribute('czech_lexer','mixed_case','no');
    11            ctx_ddl.set_attribute('czech_lexer','base_letter','yes');
    12            ctx_ddl.set_attribute('czech_lexer','base_letter_type','generic');
    13            ctx_ddl.set_attribute('czech_lexer','printjoins','_');
    14 
    15            ctx_ddl.set_attribute('turkish_lexer','mixed_case','no');
    16            ctx_ddl.set_attribute('turkish_lexer','base_letter','yes');
    17            ctx_ddl.set_attribute('turkish_lexer','base_letter_type','generic');
    18            ctx_ddl.set_attribute('turkish_lexer','printjoins','_');
    19 
    20            ctx_ddl.add_sub_lexer('global_lexer', 'default', 'german_lexer');
    21            ctx_ddl.add_sub_lexer('global_lexer', 'czech',   'czech_lexer',   'CZH');
    22            ctx_ddl.add_sub_lexer('global_lexer', 'turkish', 'turkish_lexer', 'TRH');
    23  end;
    24  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> drop table text_search;
    Table dropped.
    SCOTT@orcl_11gR2> create table text_search (
      2         lang      varchar2(5)
      3       , name      varchar2(100)
      4  );
    Table created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('DEH', 'Strauß');
    1 row created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('DEH', 'Möllbäck');
    1 row created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('TRH', 'Öğem');
    1 row created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('TRH', 'Öger');
    1 row created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('CZH', 'Tomáš');
    1 row created.
    SCOTT@orcl_11gR2> insert into text_search(lang, name) values ('CZH', 'Černínová');
    1 row created.
    SCOTT@orcl_11gR2> commit;
    Commit complete.
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> -- *Option A)*
    SCOTT@orcl_11gR2> alter session set nls_language=german;
    Session altered.
    SCOTT@orcl_11gR2> drop index i_text_search;
    drop index i_text_search
    ERROR at line 1:
    ORA-01418: Angegebener Index ist nicht vorhanden
    SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
      2       indextype is ctxsys.context
      3            parameters ('
      4                 section group CTXSYS.AUTO_SECTION_GROUP
      5                 lexer global_lexer language column lang
      6                 memory 300000000'
      7            );
    Index created.
    SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
    TOKEN_TEXT
    AYEM
    AŒERNA
    CK
    GER
    LLBA
    MA
    NOVA
    STRAUAY
    TOMA
    9 rows selected.
    SCOTT@orcl_11gR2> alter session set nls_language=turkish;
    Session altered.
    SCOTT@orcl_11gR2> select * from text_search
      2  where contains (name, 'Öğem') > 0;
    LANG
    NAME
    TRH
    Öğem
    1 row selected.
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> -- *Option B)*
    SCOTT@orcl_11gR2> alter session set nls_language=turkish;
    Session altered.
    SCOTT@orcl_11gR2> drop index i_text_search;
    Index dropped.
    SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
      2       indextype is ctxsys.context
      3            parameters ('
      4                 section group CTXSYS.AUTO_SECTION_GROUP
      5                 lexer global_lexer language column lang
      6                 memory 300000000'
      7            );
    Index created.
    SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
    TOKEN_TEXT
    AYEM
    AŒERNA
    CK
    GER
    LLBA
    MA
    NOVA
    STRAUAY
    TOMA
    9 rows selected.
    SCOTT@orcl_11gR2> alter session set nls_language=czech;
    Session altered.
    SCOTT@orcl_11gR2> select * from text_search
      2  where contains (name, 'Černínová') > 0;
    LANG
    NAME
    CZH
    Černínová
    1 row selected.
    SCOTT@orcl_11gR2>
    SCOTT@orcl_11gR2> -- *Option C)*
    SCOTT@orcl_11gR2> alter session set nls_language=czech;
    Session altered.
    SCOTT@orcl_11gR2> drop index i_text_search;
    Index dropped.
    SCOTT@orcl_11gR2> create index i_text_search on text_search (name)
      2       indextype is ctxsys.context
      3            parameters ('
      4                 section group CTXSYS.AUTO_SECTION_GROUP
      5                 lexer global_lexer language column lang
      6                 memory 300000000'
      7            );
    Index created.
    SCOTT@orcl_11gR2> select token_text from dr$i_text_search$I;
    TOKEN_TEXT
    AYEM
    AŒERNA
    CK
    GER
    LLBA
    MA
    NOVA
    STRAUAY
    TOMA
    9 rows selected.
    SCOTT@orcl_11gR2> alter session set nls_language=turkish;
    Session altered.
    SCOTT@orcl_11gR2> select * from text_search
      2  where contains (name, 'Öğem') > 0;
    LANG
    NAME
    TRH
    Öğem
    1 row selected.
    SCOTT@orcl_11gR2>

  • Runtime.exec() with language specific chars (umlauts)

    Hello,
    my problem is as follows:
    I need to run the glimpse search engine from a Java application on Solaris using JRE 1.3.1, with a search pattern containing special characters.
    Glimpse has indexed UTF-8 coded XML files that can contain text with language-specific characters in different languages (e.g. German umlauts, Spanish, Chinese). The following code works fine on Windows, and with JRE 1.2.2 on Solaris too:
    String sSearchedFreeText = "Tür";
    String sEncoding = "UTF8";
    // Convert UTF8 search free text
    ByteArrayOutputStream osByteArray = new ByteArrayOutputStream();
    Writer w = new OutputStreamWriter(osByteArray, sEncoding);
    w.write(sSearchedFreeText);
    w.close();
    // Generate process
    String commandString = "glimpse -y -l -i -H /data/glimpseindex -W -L 20 {" + osByteArray.toString() + "}";
    Process p = Runtime.getRuntime().exec(commandString);
    One of the XML files contains:
    <group topic="service-num">
    <entry name="id">7059</entry>
    <entry name="name">T&#195;&#188;rverkleidung</entry>
    </group>
    Running the Java code with JRE 1.2.2 on Solaris I get the following correct command line:
    glimpse -y -l -i -H /data/glimpseindex -W -L 20 {Türverkleidung}
    --> glimpse finds the correct filenames
    Running it with JRE 1.3.1 I get the following incorrect command line:
    glimpse -y -l -i -H /data/glimpseindex -W -L 20 {T??rverkleidung}
    --> glimpse finds nothing
    JRE 1.2.2 uses ISO-8859-1 as its default charset, but JRE 1.3.1 uses ASCII on Solaris.
    Is it possible to change the default charset for the JVM in solaris environment?
    Or is there a way to force encoding used by Runtime.exec() with java code?
    Thanks in advance for any hints.
    Karsten

    osByteArray.toString()
    Yes, there's a way to force the encoding. You provide it as a parameter to the toString() method.
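    To make that concrete, here is a minimal sketch (reusing the variable names from the original post) showing how the charset passed to toString() changes the result; decoding with ISO-8859-1 yields one character per UTF-8 byte, which is presumably what a byte-oriented call like the glimpse command line needs:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class EncodeDemo {
    public static void main(String[] args) throws Exception {
        String sSearchedFreeText = "Tür";

        // Encode the search text as UTF-8 bytes, as in the original post.
        ByteArrayOutputStream osByteArray = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(osByteArray, "UTF8");
        w.write(sSearchedFreeText);
        w.close();

        // toString() with no argument uses the platform default charset
        // (ASCII on the poster's Solaris JRE 1.3.1, hence "T??r").
        // Passing the charset name makes the decoding explicit:
        System.out.println(osByteArray.toString("UTF8"));       // decodes back to "Tür"
        System.out.println(osByteArray.toString("ISO-8859-1")); // one char per UTF-8 byte
    }
}
```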

  • Language specific characters with JDBC

    Does anybody know how to insert language-specific characters into Oracle tables using JDBC, without the overhead of Unicode conversion back and forth?
    At the moment, all we can do is convert those characters to Unicode when inserting and perform the reverse conversion when reading from a result set. This is cumbersome for large text data.
    Is there a way to configure the RDBMS and/or the operating system for this purpose? We are using Oracle 7.3.4 on Windows NT 4.0 SP5, Oracle JDBC Driver 8.1.6, and Java Web Server 2.0 (JDBC 1.0 compliant). Suggestions for Oracle 8.1.6 and Solaris 2.6 will also be appreciated.
    Ozan & Serpil


  • Copy-pasting text from a PDF exported from the Microsoft.Reporting.WinForms.ReportViewer control with Czech specific characters produces box characters or "?"

    Used Visual Studio 2012. Our project uses the Microsoft.Reporting.WinForms.ReportViewer control. The report handled by the control contains TextBoxes with text containing Czech specific characters, e.g. (ř, ě, ...). When the report is exported to PDF, the characters are displayed correctly. However, when the text with Czech characters in the PDF is copied and pasted into the search box in the PDF document, only box characters are displayed. The TextBoxes in the report use the default font Arial. When the report is exported to Word, and the Word document is then saved as a PDF document, it is OK: copying text with Czech characters in the resulting PDF document and pasting it into the search box again displays Czech characters, not box characters.
    Also, when the report handled by the ReportViewer control contains several TextBoxes, some with Czech characters and some without, there is a problem with text selection after exporting to a PDF document. When I try to select several paragraphs in the PDF document, some with Czech characters and some without, the selection behaves strangely and jumps from one paragraph to another unexpectedly.

    Hi,
    did you manage to avoid those squares?
    BTW: if any such character is encountered in a line, the entire line of text is garbled.
    I've tried even the ReportViewer from MSSQL 2014, but got the same problem. When I tried ILSpy, I found code where it is checked whether the PDF font is composite; depending on that, a glyph is created. But that is still only a guess.
    I've tried Telerik's reporting; they have a similar problem (among others), but not with the special characters. They produced squares for some sequences like: ft, fi, tí.
    Please share any info you get.
    Until then, my advice for you:
    a) try JasperReports (they seem the most advanced, although it is Java)
    b) DevExpress has fairly high-quality reports, and it seems they got those special chars right :D
    c) I created a ticket and am waiting for Telerik's response (but if I had to choose reporting, I would stick with a) or b))

  • How to not index characters combined with numbers?

    According to the Oracle Text Reference (10g or 11g), regarding a "stopclass", currently only the NUMBERS class is supported and it is not possible to create a custom stopclass. So is there another way to not index text that contains characters combined with a number (such as "123ABC" or "ABC123")? If the text contains a number anywhere in it, I don't want it indexed. Otherwise I end up with a lot of junk representing 90% of my index, which causes a maintenance pain: essentially going from 300K rows in the index table to over 13 million.

    The only thing that I can think of is to use a user_datastore with a procedure that loops through each token in the row, eliminating those that have numbers. Please see the demonstration below.
    SCOTT@orcl_11gR2> CREATE TABLE test_tab
      2    (test_col  VARCHAR2 (60))
      3  /
    Table created.
    SCOTT@orcl_11gR2> INSERT ALL
      2  INTO test_tab VALUES ('worda abc123 wordc')
      3  INTO test_tab VALUES ('worda 123abc wordc')
      4  INTO test_tab VALUES ('def456 wordb wordc')
      5  INTO test_tab VALUES ('worda wordb 789ghi')
      6  SELECT * FROM DUAL
      7  /
    4 rows created.
    SCOTT@orcl_11gR2> CREATE OR REPLACE PROCEDURE test_proc
      2    (p_rowid IN           ROWID,
      3       p_clob     IN OUT NOCOPY CLOB)
      4  AS
      5    v_clob            CLOB;
      6    v_token            VARCHAR2 (100);
      7  BEGIN
      8    SELECT test_col || ' '
      9    INTO   p_clob
    10    FROM   test_tab
    11    WHERE  ROWID = p_rowid;
    12    WHILE INSTR (p_clob, '  ') > 0 LOOP
    13        p_clob := REPLACE (p_clob, '  ', ' ');
    14    END LOOP;
    15    WHILE LENGTH (p_clob) > 1 LOOP
    16        v_token := SUBSTR (p_clob, 1, INSTR (p_clob, ' '));
    17        IF v_token = TRANSLATE (v_token, '1234567890', '        ') THEN
    18          v_clob := v_clob || v_token;
    19        END IF;
    20        p_clob := LTRIM (SUBSTR (p_clob, INSTR (p_clob, ' ')));
    21    END LOOP;
    22    p_clob := v_clob;
    23  END test_proc;
    24  /
    Procedure created.
    SCOTT@orcl_11gR2> SHOW ERRORS
    No errors.
    SCOTT@orcl_11gR2> EXEC CTX_DDL.CREATE_PREFERENCE ('test_store', 'USER_DATASTORE')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC CTX_DDL.SET_ATTRIBUTE ('test_store', 'PROCEDURE', 'test_proc')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> CREATE INDEX test_idx
      2  ON test_tab (test_col)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  PARAMETERS ('DATASTORE test_store')
      5  /
    Index created.
    SCOTT@orcl_11gR2> SELECT token_text FROM dr$test_idx$i
      2  /
    TOKEN_TEXT
    WORDA
    WORDB
    WORDC
    3 rows selected.
    SCOTT@orcl_11gR2>

  • Language-specific characters change when .irpt is executed

    Hi,
    When I execute the .irpt page, language-specific characters change to strange signs.
    How can it be solved?
    Thanks.

    Hi Jeremy,
    Below are the meta tags for Turkish:
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1254" />
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-9" />
    <meta http-equiv="Content-Language" content="tr" />
    I tried, but the result is the same.
    I think .irpt has no Turkish support.
    Thanks.

  • Problem with language specific letters in Translation Builder editor

    Hello,
    I'm trying to translate some reports from Slovenian to Croatian using OTB, but as soon as I scroll up or down through the translation form, some Croatian language-specific letters (čćžšđ) either convert to c (čć) or d (đ) or become "unreadable" (šž). The latter (šž) are displayed correctly in the report when the strings are exported back to the RDF file.
    According to the Troubleshooting section in the OTB help I tried to change both the base and the translation font, but with no success.
    Any experience, any hint or trick?
    Thanks in advance.
    Dev6i patch10
    RDBMS=Oracle10g
    WinXPsp2
    NLS_LANG=CROATIAN_CROATIA.EE8MSWIN1250

    Naveen,
    This is more of a portal problem.
    First, you should submit an OSS message to get the best support possible from SAP.
    Second, if you don't like that solution, THEN come back and post it on SDN. You will get better answers in the Enterprise Portal forum here on SDN.
    Regards,
    Greg

  • Key not typing letter, but working with Apple key in combination with letter

    I have an Indigo iBook. Just today the "f" key has stopped working - at least for typing the letter itself. When I press it in combination with the Apple key, however, I get the "find" box to come up, so in some way the key is working on the keyboard. We have removed the cap and directly pressed the plastic underneath; no difference. Also hooked up an external keyboard - still no f!
    After searching the forums, we have found that many people have had luck removing their keyboard and checking the ribbon cable or the AirPort spring - we do not want to do something so drastic if there is an easier fix.
    Thanks very much!
    iBook Indigo   Mac OS X (10.0.x)  

    Just an update if anyone has any more ideas - (troy enn - thanks, but nothing was assigned in my Universal Access to the f key).
    The f letter works in upper case, just not lower case. So, for now, I am just using the character palette, and inserting an f if I need it - thankfully, for some reason I don't seem to use it all that much!

  • Smart card logon with third party CA combined with ADFS to Office 365

    Greetings,
    I've been trying to figure out how to implement ADFS to Office 365 in the MS cloud in our environment, with little luck. I have a working 2012 domain and we are already using smart card logon on Windows 7/8 workstations. The certificates on the smart cards are issued by a 3rd-party CA. So far everything is fine and working: the necessary root certificates are added to the Trusted Root Certification Authorities, UPN suffixes and users' UPNs are set according to the UPN on the certificates, and users successfully log on to workstations with smart cards.
    Now I face the requirement to enable SSO to Office 365 with accounts from our AD. I've been told by our MS partner and Dr. Google that in order to do that, the user account name (UPN) in AD and in O365 need to match. The problem is that the account UPN in our AD is not usable in O365 (because it is set to match the 3rd-party certificate UPN), and I have not found a way to enable smart card logon without changing the UPN in AD.
    Does anyone have experience with such a configuration? Is it possible to use AD federation to O365 at all in our case?
    Best regards, and thanks in advance
    Timo

    On Fri, 25 Apr 2014 09:27:05 +0000, Timo Kallioniemi wrote:
    Now I face the requirement to enable SSO to Office 365 with accounts from our AD. I've been told by our MS partner and Dr. Google that in order to do that, the user account name (UPN) in AD and in O365 need to match. The problem is that the account UPN in our AD is not usable in O365 (because it is set to match the 3rd-party certificate UPN), and I have not found a way to enable smart card logon without changing the UPN in AD.
    Does anyone have experience with such a configuration? Is it possible to use AD federation to O365 at all in our case?
    This is not a general Windows server security issue. You should post your
    question in an O365 support forum.
    http://community.office365.com/en-us/f/default.aspx
    Paul Adare - FIM CM MVP
    Technology is dominated by two types of people: Those who understand
    what they do not manage. Those who manage what they do not understand.
    -- Putt's Law

  • Language specific characters

    Hi!
    My reports do not come out with Norwegian special characters. My forms seem to be OK, though. What system variables need to be set for the reports to be able to show Norwegian special characters?
    Regards, Morten

    You'll need to set your NLS_LANG environment variable / registry setting correctly with the appropriate language, territory and character set (I don't know what these are for Norwegian off the top of my head, but the documentation should cover this). Don't forget that if you're running through the server, you'll need to make sure that this value is also set in the server environment. Finally, make sure that the characters you're trying to display are included in the font that you're using in the report.
    Hope this helps,
    Danny

  • [8i] Need help with full outer join combined with a cross-join....

    I can't figure out how to combine a full outer join with another type of join ... is this possible?
    Here's some create table and insert statements for some basic sample data:
    CREATE TABLE     my_tab1
    (     record_id     NUMBER     NOT NULL     
    ,     workstation     VARCHAR2(4)
    ,     my_value     NUMBER
    ,     CONSTRAINT my_tab1_pk PRIMARY KEY (record_id)
    );
    INSERT INTO     my_tab1
    VALUES(1,'ABCD',10);
    INSERT INTO     my_tab1
    VALUES(2,'ABCD',15);
    INSERT INTO     my_tab1
    VALUES(3,'ABCD',5);
    INSERT INTO     my_tab1
    VALUES(4,'A123',5);
    INSERT INTO     my_tab1
    VALUES(5,'A123',10);
    INSERT INTO     my_tab1
    VALUES(6,'A123',20);
    INSERT INTO     my_tab1
    VALUES(7,'????',5);
    CREATE TABLE     my_tab2
    (     workstation     VARCHAR2(4)
    ,     wkstn_name     VARCHAR2(20)
    ,     CONSTRAINT my_tab2_pk PRIMARY KEY (workstation)
    );
    INSERT INTO     my_tab2
    VALUES('ABCD','WKSTN 1');
    INSERT INTO     my_tab2
    VALUES('A123','WKSTN 2');
    INSERT INTO     my_tab2
    VALUES('B456','WKSTN 3');
    CREATE TABLE     my_tab3
    (     my_nbr1     NUMBER
    ,     my_nbr2     NUMBER
    );
    INSERT INTO     my_tab3
    VALUES(1,2);
    INSERT INTO     my_tab3
    VALUES(2,3);
    INSERT INTO     my_tab3
    VALUES(3,4);

    And, the results I want to get:
    workstation     sum(my_value)     wkstn_name     my_nbr1     my_nbr2
    ABCD          30          WKSTN 1          1     2
    ABCD          30          WKSTN 1          2     3
    ABCD          30          WKSTN 1          3     4
    A123          35          WKSTN 2          1     2
    A123          35          WKSTN 2          2     3
    A123          35          WKSTN 2          3     4
    B456          0          WKSTN 3          1     2
    B456          0          WKSTN 3          2     3
    B456          0          WKSTN 3          3     4
    ????          5          NULL          1     2
    ????          5          NULL          2     3
    ????          5          NULL          3     4

    I've tried a number of different things, googled my problem, and no luck yet...
    SELECT     t1.workstation
    ,     SUM(t1.my_value)
    ,     t2.wkstn_name
    ,     t3.my_nbr1
    ,     t3.my_nbr2
    FROM     my_tab1 t1
    ,     my_tab2 t2
    ,     my_tab3 t3
    ...So, what I want is a full outer join of t1 and t2 on workstation, and a cross-join of that with t3. I'm wondering if I can't find any examples of this online because it's not possible....
    Note: I'm stuck dealing with Oracle 8i
    Thanks!!

    Hi,
    The query I posted yesterday is a little more complicated than it needs to be.
    Since my_tab2.workstation is unique, there's no reason to do a separate sub-query like mt1; we can join my_tab1 to my_tab2 and get the SUM all in one sub-query.
    SELECT       foj.workstation
    ,       foj.sum_my_value
    ,       foj.wkstn_name
    ,       mt3.my_nbr1
    ,       mt3.my_nbr2
    FROM       (     -- Begin in-line view foj for full outer join
              SELECT        mt1.workstation
              ,        SUM (mt1.my_value)     AS sum_my_value
              ,        mt2.wkstn_name
              FROM        my_tab1   mt1
              ,        my_tab2   mt2
              WHERE        mt1.workstation     = mt2.workstation (+)
              GROUP BY   mt1.workstation
              ,        mt2.wkstn_name
                    UNION ALL
              SELECT      workstation
              ,      0      AS sum_my_value
              ,      wkstn_name
              FROM      my_tab2
              WHERE      workstation     NOT IN (     -- Begin NOT IN sub-query
                                               SELECT      workstation
                                       FROM      my_tab1
                                       WHERE      workstation     IS NOT NULL
                                     )     -- End NOT IN sub-query
           ) foj     -- End in-line view foj for full outer join
    ,       my_tab3  mt3
    ORDER BY  foj.wkstn_name
    ,       foj.workstation
    ,       mt3.my_nbr1
    ,       mt3.my_nbr2
    ;

    Thanks for posting the CREATE TABLE and INSERT statements, as well as the very clear desired results!
    user11033437 wrote:
    ... So, what I want is a full outer join of t1 and t2 on workstation, and a cross-join of that with t3.
    That's it, exactly!
    The tricky part is how and when to get SUM (my_value). You might approach this by figuring out exactly what my_tab3 has to be cross-joined to; that is, exactly what should the result set of the full outer join between my_tab1 and my_tab2 look like. To do that, take your desired results, remove the columns that do not come from the full outer join, and remove the duplicate rows. You'll get:
    workstation     sum(my_value)     wkstn_name
    ABCD          30          WKSTN 1          
    A123          35          WKSTN 2          
    B456          0          WKSTN 3          
    ????          5          NULL
    So the core of the problem is how to get these results from my_tab1 and my_tab2, which is done in sub-query foj above.
    I tried to use self-documenting names in my code. I hope you can understand it.
    I could spend hours explaining different parts of this query in more detail, but I'm sure I'd waste some of that time explaining things you already understand. If you want an explanation of something(s) specific, let me know.

  • Language-specific characters change when exported as CSV

    I am using the Save as CSV file property of the grid to show the data in Excel. But in the CSV some Spanish characters like í get changed to Ã.
    The page has the .irpt extension.
    I tried .html as well, but it gives the same error.
    Could anyone help me resolve it?

    Hi
    This is most probably a character set issue. The problem here is that a CSV file, unlike XML, does not contain a declaration of the character set, so there is uncertainty here.
    It is possible that the program you are using to open/import the CSV file is using the wrong character set. There is a library called chardet that may help you determine what the encoding is. See http://stackoverflow.com/questions/508558/what-charset-does-microsoft-excel-use-when-saving-files for these and other details.
    Otherwise, if your CSV comes from a query against a database, the database collation may not be aligned with the character data you are storing.
    Marc
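    A common workaround for Excel specifically (an addition of mine, not mentioned in the thread) is to write the CSV as UTF-8 with a leading byte-order mark, which many Excel versions use to detect the encoding. A minimal Java sketch:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class CsvBomDemo {

    // Produce CSV content as UTF-8 bytes, preceded by the UTF-8 BOM (EF BB BF).
    static byte[] toCsvBytes(String csvText) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(new byte[] {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF});
        Writer w = new OutputStreamWriter(out, "UTF-8");
        w.write(csvText);
        w.flush();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = toCsvBytes("name\nmaría\n"); // Spanish í survives the round trip
        System.out.println(data.length + " bytes written, starting with the BOM");
    }
}
```

    Whether this helps depends on the program importing the file; if the CSV comes straight from the grid's Save as CSV property, its encoding may not be configurable from your side.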

  • Language specific characters problem

    My users are to enter text to be stored in an Oracle DB. The insert statement works perfectly, but all Norwegian characters are converted to something unreadable in the insert process. How do I get the Flex application and runtime environment not to convert these characters?
    Doing inserts from other applications into the same DB works for Norwegian characters, so this problem seems to be related to VC.
    Hoping for help
    Henning

    I was wrong! When I found that BAPIs doing inserts into SAP tables worked with Norwegian characters, I thought I had found the solution. This is not the case. The same model does not work with Norwegian characters when backported to NW 2004. Nor do my bi_jdbc SQL-based models reading/writing to the portal DB.
    Will update if/when solution is found.
    Henning
