Jolt STRING local character set

Our application sends data containing non-Latin characters in FML32 STRING fields. What I found in the documentation is:
For localization, the Jolt Class Library package relies on the conventions of the Java language and the Oracle Tuxedo system. Jolt transfers Java 16-bit Unicode characters to the JSH. The JSH provides a mechanism to convert Unicode to the local character set.
So how does JSH know the "local character set"?
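For context, the conversion the documentation describes is a standard Unicode-to-bytes step. In Java terms it amounts to the sketch below; the charset name here is an illustrative assumption, since which "local character set" the JSH resolves (presumably from the Tuxedo server's locale configuration) is exactly what this question is asking.

import java.nio.charset.Charset;

public class UnicodeToLocalSketch {
    public static void main(String[] args) {
        String s = "données";                          // Java strings are 16-bit Unicode
        Charset local = Charset.forName("ISO-8859-1"); // stand-in for the JSH's "local character set"
        byte[] wire = s.getBytes(local);               // Unicode -> local encoding
        System.out.println(new String(wire, local));   // round-trips only if the charset covers the characters
    }
}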

tutti_frutti:
You may want to use the following:
String input = textfield.getText();
if (input != null) {
    int len = input.length();
    for (int i = 0; i < len; i++) {
        char currChar = input.charAt(i);
        // Classify each character by its Unicode block
        if (Character.UnicodeBlock.of(currChar) == Character.UnicodeBlock.ARABIC) {
            // Do something
        } else if (Character.UnicodeBlock.of(currChar) == Character.UnicodeBlock.BASIC_LATIN) {
            // Do some other thing
        }
    }
}

Similar Messages

  • Std::string NLS_LANG character sets

    I'm an OCI user, but not a pure one: I use the freely available OTL (Oracle Template Library) from S. Kuchin, which is a wrapper around OCI, so I hope this is not off topic here.
    The library offers the possibility to read database strings from VARCHAR2 fields into
    a std::string. I know that Oracle does character conversion on the client side, controlled
    via the NLS_LANG environment variable.
    It's clear to me that reading database strings into a std::string is no problem for
    one-byte character sets like ISO-8859-1. But what about when I point NLS_LANG
    at UTF-8 or a Chinese character set, e.g. ZHT16BIG5?
    Is it still safe to read the result into a std::string?
    For example, I have a database with default character set AMERICAN_AMERICA.WE8ISO8859P15. I stored some German umlauts in a
    table. I wrote a small program using OTL and set NLS_LANG to UTF8 on the client side.
    I fetched the data from server to client, stored the data in a std::string, pushed it into a file, and yes, the data was stored as UTF8.
    Is it really so simple, or is it dangerous to read the UTF8-converted data into a
    std::string? What is the common rule? When may I, and when may I not, read the data
    into a std::string?

    Hello,
    I think you'll get a more accurate answer in the Globalization Support forum:
    Globalization Support
    Best regards,
    Jean-Valentin

  • How to set or change character set for Oracle 10 XE

    Installing via RPM on Linux.
    I need to have my database set to use UTF8 and WE8ISO8859P15 as the character set and national character set. (Think those are in the right order. If not, it's the opposite.)
    If I do a standard "yum localinstall rpm-file-name," it installs Oracle. I then run the "/etc/init.d/oracle-xe configure" command to set my ports.
    Every time I do this, I end up with AL32/AL16 character sets.
    I finally hardcoded ISO-8859-15 as the Linux 'locale' character set and set this in the various bash profile config files. Now, I end up with WE8MSWIN1252 as the character set and AL16UTF16 as the national character set.
    I've tried editing the createdb.sh script to hard code the character set types and then copied that file over the original while the RPM is still installing. I've tried editing the nls_lang.sh script to hard code the settings there and copied over the original shell script while the RPM is still installing.
    Doesn't matter.
    HOW can I do this? If I wait until after the RPM is installed and try running the createdb.sh file, then it ends up creating a database but not doing everything properly. I end up missing pfiles or spfiles. Various errors crop up.
    If I try to change them from the sql command line, I am told that the new character set must be a superset of the old one. It fails.
    I'm new to Oracle, so I'm in uncharted waters. In short, I need community help. Maintaining these character sets is important to the app I'm running and attempting to migrate.
    Thanks.

    I don't think you can change Oracle XE character set. When downloading Oracle XE you must choose to download:
    - either the Universal Edition using AL32UTF8
    - or the Western European Edition using WE8MSWIN1252.
    See http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABJACJJ
    If you really need UTF8 instead of AL32UTF8, you need to use Oracle Standard Edition or Oracle Enterprise Edition:
    these editions allow you to select the database character set at database creation time, which is not really possible with Oracle XE.
    Note that changing environment variable NLS_LANG has nothing to do with changing database character set:
    http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABBGFIC

  • How to set Multi Byte Character Set ( MBCS ) to Particular String In MFC VC++

    I use the Unicode character set in my MFC application (VC++).
    Right now I get output like ठ桔湡潹⁵潦⁲獵 (garbled characters) and I want to convert these characters to English (i.e. MBCS).
    But I need Unicode in my application: when I switch the project to the multi-byte character set it gives correct output in English, but other objects (TreeCtrl selection) behave wrongly. So I need to convert just that particular string to MBCS.
    How can I do that in MFC?

    I assume the string read from your hardware device is a plain "C" string (ANSI string). This type of string has one byte per character; Unicode has two bytes per character.
    From the situation you explained, I'd convert the string returned by the hardware to a Unicode string using e.g. MultiByteToWideChar with CP_ACP. You may also use mbstowcs or some similar function to convert your string to a Unicode string.
    Best regards
    Bordon
    Note: posted code pieces may not have a good programming style and may not be perfect. It is also possible that they do not work in all situations. Code pieces are only intended to explain something in particular.
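    For readers following the Java threads on this page: the byte-to-Unicode conversion Bordon describes with MultiByteToWideChar/CP_ACP has a direct Java analogue, namely decoding the device's bytes with an explicitly named ANSI code page. A minimal sketch, with windows-1252 assumed as the code page:

    import java.nio.charset.Charset;

    public class AnsiToUnicodeSketch {
        public static void main(String[] args) {
            // Bytes as a device might deliver them in the ANSI code page (windows-1252 assumed here)
            byte[] deviceBytes = { 0x48, 0x69, (byte) 0xE9 };              // "Hié" in windows-1252
            String unicode = new String(deviceBytes, Charset.forName("windows-1252"));
            System.out.println(unicode);                                   // now a proper 16-bit Unicode string
        }
    }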

  • Windows "Regional Options" locale - JCE for 8bit vs 16bit character sets

    I have a Java application that reads in an Encrypted text file. The text file was Encrypted using JCE 1.2.1 and Encrypted on a Windows system with the locale set to English(US). The Encryption uses Sun's version of the DES encryption algorithm.
    This app reads in the Encrypted text file, Decrypts it, and processes its information.
    This works fine on Windows systems if the Regional Options control panel is set to a region that uses 8bit character sets:
    - English
    - Italian
    - Spanish
    - French
    But, if the locale is set to 16bit character set regions, the text file cannot be read and parsed. Such regions include:
    - Russian
    - Greek
    - English (Hong Kong)
    At this point, I think I have two options, but I would love to hear about more:
    - Edit the Encrypting/Decrypting code (or the parsing code, which parses through a comma-delimited file) so that the file that is Encrypted and Decrypted can handle either 8bit or 16bit character sets
    (Don't know how to do this)
    or
    - Programmatically change the locale of Windows machines to English(US) at application start-up and then change it back to the previous locale setting on application shut-down
    (Don't know how to do this either)
    I'd appreciate any help. I'm not sure if this is an internationalization issue or a JCE issue.
    Thanks in advance

    I found an answer to the problem I was having.
    The culprits were two special characters that the client was using in the encrypted text to distinguish between different fields and to mark carriage returns (� and �). The non-Latin-alphabet locales didn't know what to do with those characters, so they substituted their own characters, thus breaking the parsing logic, which was hard-coded to look for � and �.
    The problem was also related to the fact that the JCE works with byte[] arrays. FileInputStreams (which deal with byte[] arrays) seem to convert the special characters to new characters in non-Latin locales, matching what was going on in the JCE logic.
    The easiest fix I could come up with was to include a new properties file read by a separate FileInputStream. This properties text file contained just the two characters (��). When I loaded this properties file via a FileInputStream, the two characters (��) changed to match the currently active alphabet (or didn't change at all if the computer was using a Latin alphabet).
    By checking the new properties file to see what the characters had changed to (if they had), I knew what to use to parse the encrypted data. Regardless of what language the computer is set to, the encrypted data is now parsed correctly: I took out the hard-coding that looked specifically for � and � and rewrote the code to use the characters from the properties file (or whatever they change to) when parsing the content data.
    I hope others find this useful.
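    The root cause above is decoding bytes with the platform default character set, which varies with the Regional Options locale. A minimal sketch of the alternative (not the original poster's code): pin the charset explicitly when turning the decrypted bytes into text, so the delimiter characters mean the same thing under every locale.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FixedCharsetParse {
        public static void main(String[] args) throws IOException {
            byte[] decrypted = Files.readAllBytes(Paths.get(args[0])); // stand-in for the decrypted byte[]
            // Decode with an explicit charset instead of the platform default:
            String text = new String(decrypted, StandardCharsets.ISO_8859_1);
            for (String field : text.split(",")) {                     // comma-delimited, per the post
                System.out.println(field);
            }
        }
    }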

  • Windows "Regional Options" locale (8bit vs 16bit character sets)

    I have a Java application that reads in an Encrypted text file. The text file was Encrypted using JCE 1.2.1 and Encrypted on a Windows system with the locale set to English(US). The Encryption uses Sun's version of the DES encryption algorithm.
    This app reads in the Encrypted text file and Decrypts it and processes it's information.
    This works fine on Windows systems if the Regional Options control panel is set to a region that uses 8bit character sets:
    - English
    - Italian
    - Spanish
    - French
    But, if the locale is set to 16bit character set regions, the text file cannot be read and parsed. Such regions include:
    - Russian
    - Greek
    - English (Hong Kong)
    At this point, I think I have two options, but I would love to hear about more:
    - Edit the Encrypting/Decrypting code (or the parsing code - parses through a comma deliminated file) so that the file that is Encrypted and Decrypted can handle either an 8bit or 16bit character sets
    (Don't know how to do this)
    or
    - Programatically change the locale of Windows machines to English(US) at application start-up and then change it back to the previous locale setting on application shut-down
    (Don't know how to do this either)
    I'd appreciate any help. I'm not sure if this is an International issue or an JCE issue.
    Thanks in advance

    I found an answer to the problem I was having.
    The culprit were two special characters that the client was using in the encrypted text to distinguish between different fields and to distinguish carriage returns (� and �). The non Latin alphabet languages didn't know what to do with those characters so they substituted there own characters, thus breaking the parsing logic which was hard coded to look for � and �.
    The problem also was related to the fact that the JCE works with byte[] arrays. FileInputStreams (which deal with byte[] arrays) seem to convert the special characters to new characters in non Lating languages to match what was going on in the JCE logic.
    The easiest fix I could come up with, was to include a new properties file to be read by a separate FileInputStream. This properties text file contained just two characters (��). When I loaded in this new properties file via a FileInputStream, the two characters (��) in the properties file magically change to match the currently active alphabet (or didn't change at all if the computer was using a Latin alphabet).
    By checking the new properties file to see what the characters had changed to (if they had), I was able to know what to use to parse the encrypted data. And as such, regardless of what language the computer was set to, the encrypted data is now parsed correctly, as I took out the hard coding that looked specifically for the characters � and � and instead rewrote the code so it now uses the characters from the properties file (or whatever characters they change to) for parsing the content data.
    I hope others find this useful.

  • Validate that a string is composed from a defined character set

    Hi experts,
    I need to validate that a string entered as a parameter is composed of the following character set:
    the 26 letters (in both upper and lower case), an apostrophe "'" (like O'Brien), and a blank between characters (like Mc Donald). So the total number of valid characters for any of these fields will be 54.
    Could you provide efficient code?
    Please help...
    Rewards guaranteed...

    Hi,
    Check the below code.
    data: var(52) type c Value  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'.
    data: v1(14) type c value 'welcome to SDN'.
    data: v2 type i.
    data: v3 type i.
      v3 = 0.
      v2 = STRLEN( v1 ).
      do v2 times.
       if v1+v3(1) = space .
        write:/ 'test contains character space'.
       elseif v1+v3(1) ca var.
        write:/ 'test'.
       elseif v1+v3(1) = '/'.
        write:/ 'test contains character /'.
       endif.
         v3 = v3 + 1.
      enddo.
    Regards,
    Shravan G.
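    The same 54-character whitelist check, sketched in Java for readers outside ABAP; the class and method names are made up for illustration.

    public class NameCharsetCheck {
        private static final String VALID =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' "; // 52 letters + apostrophe + blank = 54

        static boolean isValid(String s) {
            for (int i = 0; i < s.length(); i++) {
                if (VALID.indexOf(s.charAt(i)) < 0) return false;     // reject anything outside the set
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isValid("O'Brien"));   // true
            System.out.println(isValid("Mc Donald")); // true
            System.out.println(isValid("J0hn"));      // false
        }
    }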

  • Client-side greek character set problem

    Hi everyone,
    I am a .NET developer and absolutely new to Oracle; it's my first project where I have to make Oracle and .NET co-operate, and it's proving to be a nightmare!
    I'll try to provide as much info as possible.
    We have a Unix SAP server with Oracle 10.something on it (I can check exactly what version if that's of importance), and I am writing a .NET app which accesses the Oracle DB and retrieves some data we need.
    The locale settings of the DB are american_america.we8dec, and I cannot tamper with that machine; it is out of the question.
    I have made the connection to the DB using just connection string info, not TNS and such, and I have set my dev machine's NLS_LANG to WE8DEC everywhere I could find it with a 'find' in the registry.
    However, my app displays the data (which is in Greek) as random symbols that obviously make no sense.
    Furthermore, when I connect to the DB via SQL Developer or Navicat I still get the same symbols.
    I understand it is something to do with my client system's settings, but I can't work it out.
    I would really appreciate your help. I am sorry if it is something very simple, but I just can't find it.
    Thanks

    user9084006 wrote:
    > i am a .net developer and absolutely new to oracle, it's my first project where i have to make oracle and .net co-operate and it's proving to be a nightmare!
    Welcome. It's a brand new universe, but if you take it step by step I'm sure you'll do fine. Get someone nearby with an Oracle dev and DBA background to guide you, so you don't get stuck or go down broken roads.
    > we have a unix sap server with oracle 10.something on it (i can check exactly what version if that's of importance)
    It is of importance most of the time, down to at least 4 positions (for example, "10g" is just a marketing label and mostly useless in a technical context; 10.2.0.5 is more like it).
    Also mention platform (Unix OS) details.
    > and i am writing a .net app which accesses the oracle db and retrieves some data we need
    How is the to-be-retrieved data loaded or entered in the first place?
    > the locale settings of the db are american_america.we8dec and i cannot tamper with that machine, it is out of the question
    Please post output from:
    SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';
    > i have set my dev machine's nls_lang to we8dec everywhere i could find it with a 'find' in the registry
    Does your dev (client) machine use a "we8dec" character set? Unlikely; as you seem to be on Windows, you should have set the NLS_LANG client character set part to match win-1252 or whatever client character set (code page) is in use.
    > however my app displays the data (which is in greek) showing random symbols that obviously make no sense
    No surprise, since the DEC character set does not have a single Greek character in its repertoire. If I'm not mistaken, DEC MCS (or similar) is the character set to which WE8DEC corresponds. You could check against the mapping table in Locale Builder, but the available characters should be per the following link, or close enough:
    http://czyborra.com/charsets/iso8859.html#ISO-8859-1 (the second chart is DEC-MCS)
    Please provide a sample of those "random symbols".
    > furthermore when i connect to the db via sql developer or navicat i still get the same symbols
    If that's Oracle SQL Developer, it should give a true picture of what's actually stored - which, unfortunately, seems to be invalid character data.
    > i understand it is something to do with my client system's settings but i can't work it out
    See above. But even if you correctly set up the client environment, the db won't be able to store the data if the database character set is in fact WE8DEC.
    > i would really appreciate your help, i am sorry if it is something very simple but i just can't find it
    I'm not sure, but it would seem you have been given unfeasible conditions to work from. So maybe it's not up to you, until a character set migration has taken place.
    In general, the database character set should be a suitable superset (repertoire) that covers all current (and hopefully future) language and alphabet requirements.
    You might want to search the forum for 'we8dec' to find previous related discussions.
    Edit:
    - Added URL with DEC MCS.
    - platform info
    Edited by: orafad on Jan 26, 2012 12:28 PM
    Edited by: orafad on Jan 26, 2012 12:43 PM
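    A quick Java sketch of the repertoire problem orafad describes, with ISO-8859-1 used below as a stand-in for WE8DEC/DEC MCS: encoding Greek text into a single-byte Western European character set replaces every Greek letter with '?', after which the original data is unrecoverable.

    import java.nio.charset.StandardCharsets;

    public class RepertoireSketch {
        public static void main(String[] args) {
            String greek = "Καλημέρα";
            // Unmappable characters are replaced during encoding:
            byte[] stored = greek.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(stored, StandardCharsets.ISO_8859_1)); // "????????"
        }
    }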

  • Character Set questions on setup

    I am trying to determine the best setup recommendations for creating non-English Oracle 10g databases. I have not had much experience building databases for non-English locales, so this is getting a little overwhelming as I research Oracle's Database Globalization Support Guide. Obviously it has a wealth of information, and I am trying to determine what applies to us at this point in time.
    Generally when someone buys our product they create a new Oracle instance for our app. I need to be able to recommend proper database settings/parameters for potential global customers who purchase our software to run on Oracle.
    Currently my biggest question is what to recommend for the database character set on DB creation. Currently the DB character set we recommend (for standard U.S. installs on Windows) is the default WE8MSWIN1252 character set. Our application is non-Unicode. An outside consultant has recommended that we "must" use UTF-8 for the DB and National Character Set settings, as opposed to WE8MSWIN1252 or WE8ISO8859P1. I should mention that our focus at this point in time is getting a solution for French, German, and Spanish. We are also more concerned about a single-language setup than multilanguage, although that is a definite future consideration.
    What impact can using UTF-8 as opposed to WE8MSWIN1252 or WE8ISO8859P1 have on a non-Unicode application? I hope I am explaining the situation well enough, as I am fairly new and still getting to know our application. I am kind of getting thrown into the i18n fire...
    Any input is greatly appreciated. Thanks.

    Your questions are certainly valid, but you have not given any details about your application: what it does, what technologies and access drivers are employed, and what client operating systems are supported. This determines how much effort is required to make the application Unicode-enabled and what risks come with each of the possible approaches.
    As long as your application can work with single-byte character sets only, is not expected to contain multibyte data, and supports Windows only, the Oracle character set corresponding to the relevant Windows ANSI code page is the correct choice. For English, French, German, Spanish, and other Western European languages, WE8MSWIN1252 is the right one.
    Processing of WE8MSWIN1252 is easier and somewhat faster than processing AL32UTF8 (i.e. UTF-8) data. One character corresponds to one byte, and this simplifies some aspects of text processing.
    On the other hand, the world becomes smaller and smaller in the Internet era. Companies that never did any business abroad start to talk to customers around the world because somebody found their website. Western European companies take advantage of the European Union enlargement and start doing business in new countries. Therefore, it is dangerous to assume that a company currently interested in a monolingual, single-byte solution will not want to migrate to a multilingual and multibyte solution in a few years.
    If you follow a few rules in database design and programming, you can run your single-byte application against an AL32UTF8 database, even if you do not get a multilingual system in this way. Such a configuration has the huge advantage of avoiding the complex and resource-consuming task of migrating the database character set to Unicode in the future, when your customer asks for multilingual support. Upgrading the binaries of your application to a Unicode-enabled version is usually fast; migrating the database character set is not.
    The main rules you should follow are:
    1) Use character length semantics to define column and PL/SQL variable lengths, i.e. say VARCHAR2(10 CHAR) instead of VARCHAR2(10 [BYTE]). If you do not want to modify all creation scripts to include the CHAR keyword, issue ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR at the beginning of each script. I recommend modifying the scripts. (A short byte-vs-character illustration follows this list.)
    2) Do not use VARCHAR2 columns longer than 1000 characters, CHAR columns longer than 500 characters, and PL/SQL VARCHAR2/CHAR variables longer than 8190 characters. This guarantees that in the future no AL32UTF8 string will exceed the hard limit of 4000/2000/32760 bytes. Use CLOB for longer text.
    3) Use SUBSTR/LENGTH/INSTR in place of SUBSTRB/LENGTHB/INSTRB. Use SUBSTRB/LENGTHB/INSTRB only when dealing with legacy stuff or Data Dictionary that still use byte length semantics.
    4) Define the client setting - mainly NLS_LANG - to correctly correspond to the character set processed by your application.
    5) Modify interfaces to other databases, if any, to cope with the character length semantics. You do not have to do much if the other databases follow the same rules.
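    A short Java illustration of why rule 1 matters (not part of the original post): the same 5-character string needs a different number of bytes in each encoding, so byte-based column lengths that fit WE8MSWIN1252 data can overflow after an AL32UTF8 migration.

    import java.nio.charset.StandardCharsets;

    public class LengthSemanticsSketch {
        public static void main(String[] args) {
            String s = "Grüße";                                                  // 5 characters
            System.out.println(s.length());                                      // 5
            System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length);  // 5 bytes, single-byte set
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);       // 7 bytes in UTF-8
        }
    }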
    The cost of running the database in Unicode is not high for most languages, though languages that do not use Latin script, such as Russian, Greek, or Japanese, need significantly more storage for the textual data (but only the textual data in those languages; this is only some fraction of all data in the database). Processing is slower by a few percent compared to single-byte character sets (unless a lot of textual processing is performed in the database, in which case the percentage may be higher; a benchmark is recommended). These costs can usually be compensated by adding some more computing power (GHz and disks). Unless your application needs a VLDB (very large database) and almost saturates the system, you should not notice a big difference.
    -- Sergiusz

  • Unrecognised Char in GB2312 character set using java InputStreamReader??

    Reading the following Chinese GB2312 HTML file from
    http://news.xinhuanet.com/local/2007-02/13/content_5732705.htm
    using an InputStreamReader with GB2312 encoding, as shown below:
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;

    public class ReadGB2312HtmlFile {
        static StringBuilder TmpText = new StringBuilder();
        public static void main(String[] args) {
            try {
                FileInputStream is = new FileInputStream(args[0]);
                BufferedReader br = new BufferedReader(
                    new InputStreamReader(is, "GB2312"));
                String strLine;
                while ((strLine = br.readLine()) != null) {
                    TmpText.append(strLine);
                    TmpText.append("\r\n");
                }
                br.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The TmpText variable does not display the last character in the article properly: instead of （记者夏珺） it gives （记者夏?B）.
    Inside the HTML file the unrecognised character is represented by �B. Why is this so?
    ���������B��
    In an Internet browser it is displayed and recognised as a Chinese GB2312 character, so why is it not recognised by Java's InputStreamReader?
    Any help or explanation would be much appreciated

    Yes, it is not a GB2312 character
    The �B character is AC40 in hex, which is outside of the GB2312 character range; it is in GBK.
    Copied from wikipedia,
    GBK is an extension of the GB2312 character set for simplified Chinese characters, used in the People's Republic of China.
    GB stands for National Standard, while K stands for Extension. GBK not only extended the old standard GB2312 with Traditional Chinese characters, but also with Chinese characters that were simplified after the establishment of GB2312 in 1981. With the arrival of GBK, certain names with characters formerly unrepresentable, like the "rong" (镕) character in former Chinese Premier Zhu Rongji's name, became representable.
    Thanks a lot. I will use the GBK charset to read all GB2312 files, since GB2312 is a subset of GBK.
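    A small Java probe of the byte pair in question (assuming 0xAC 0x40 from the post): decoding it with strict error reporting fails under GB2312 but succeeds under GBK, which is why switching the InputStreamReader to "GBK" fixes the read. Both charset names are available in typical JREs, though not guaranteed by the Java spec.

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CodingErrorAction;

    public class GbkProbe {
        public static void main(String[] args) {
            byte[] pair = { (byte) 0xAC, (byte) 0x40 };            // the problem character's bytes
            for (String name : new String[] { "GB2312", "GBK" }) {
                CharsetDecoder dec = Charset.forName(name).newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT);
                try {
                    System.out.println(name + " -> " + dec.decode(ByteBuffer.wrap(pair)));
                } catch (CharacterCodingException e) {
                    System.out.println(name + " cannot decode 0xAC40");
                }
            }
        }
    }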

  • Character set conversion warnings in Gtk apps

    Recently I noticed that various Gtk applications throw this warning (actually, many times, pretty much every time something in the window needs updating) when running them:
    (gimp:5803): Gdk-WARNING **: Error converting from UTF-8 to STRING: Conversion from character set 'UTF-8' to 'ISO-8859-1' is not supported
    (gvim:5697): Gdk-WARNING **: Error converting from UTF-8 to STRING: Could not open converter from 'UTF-8' to 'ISO-8859-1'
    These are examples with gimp and gvim, but the same goes for alunn, rox, firefox, gtk-chtheme, ...
    I tried searching various forums and Google, but nobody seems to address this issue, and when someone does, it's along the lines of "recompile everything and start with gtk2". That's not satisfactory for me, especially since some apps (like gimp) sometimes crash with this error.
    Some info: I run with LC_ALL=en_US.UTF-8 and LANG=en_US.UTF-8, and locales defined in locale.gen include en_US.UTF-8 and en_US ISO-8859-1, and I did generate the locales (locale-gen).
    Any help/ideas about what to do with this are appreciated.
    EDIT: couldn't it be a problem with the recent update of glibc (on 02/15)?
    Last edited by bender02 (2007-02-23 05:49:44)

    Well, talking to myself here - I just wanted to say that recent upgrade to glibc-2.5-5 seems to have solved the issue.

  • Character sets and ado

    I have a table with a CLOB field on an Oracle 8.1.7.4 database. When querying the CLOB field via ODBC and ADO, the value is truncated. The Oracle server and client are using the WE8ISO8859P1 character set. Has anyone come across this before?
    Thanks.

    I believe the data should be able to be represented by ISO-8859. The data is a long random string of characters that represents a fingerprint image.
    We seem to only get 996 characters back from the database. If I do a GetChunk on the data, I get 996 characters of data, then 996 NULLs, then 996 characters of data, and so on. The 996 NULLs should be data.
    The data is in the database because I can do a dbms_lob.substr and get the correct info back.

  • How do you define which character set gets embedded with a font embedded in the library (i.e. Korean)?

    I have a project that uses shared fonts. The fonts are all
    contained in a single swf ("fonts.swf"), embedded in that swf's
    library, and set to export for ActionScript and runtime sharing.
    The text in the project is dynamic and is loaded in from
    external XML files. The text is formatted via styles contained in a
    CSS object.
    This project needs to be localized into 20 or so different
    languages.
    Everything works great with one exception: I can’t
    figure out how to set which character set gets exported for runtime
    sharing. i.e. I want to create a fonts.swf that contains Korean
    characters, change the XML based text to Korean and have the text
    display correctly.
    I’ve tried changing the language of my OS (WinXP) and
    re-exporting but that doesn’t work correctly. I’ve also
    tried adding substitute font keys to the registry (at:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
    NT\CurrentVersion\FontSubstitutes) as outlined here:
    http://www.quasimondo.com/archives/000211.php
    but the fonts I added did not show up in Flash's font menu.
    I’ve also tried the method outlined here:
    http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=tn_16275
    to no avail.
    I know there must be a simple solution that will allow me to
    embed language specific character sets for the fonts embedded in
    the library but I have yet to discover what it is.
    Any insight would be greatly appreciated.

    Thanks Jim,
    I know that it is easy to specify the language you want to use when setting the embed font properties for a specific text field, but my project has hundreds of text fields and I'm setting the font globally by referencing the font symbols in a single swf.
    I have looked at the info you've pointed out but wasn't helped by it. What I'd like to be able to do is to tell Flash to embed a language-specific character set for the font symbols in the library. It is currently only embedding Latin characters, even though I know the fonts specified contain characters for other languages.
    For example: I have a font symbol in the library named "Font1". When I look at its properties I can see it is specified as Tahoma. I know the Tahoma font on my system contains the characters for Korean, but when I compile the swf it only contains Latin characters (glyphs); this corresponds to the language of my OS (US English). I want to know how to tell Flash to embed the Korean-language characters rather than, or as well as, the Latin characters for any given FONT SYMBOL. If I could do that, then when I enter Korean text into my XML files the correct characters would be available to Flash. As it is now, the characters are not available and thus the text doesn't display.
    Make sense?
    Many thanks,
    Mike

  • How to export the local security settings: all field names and the value for each field

    Hi all,
    I am trying to export the local security settings from the local policy using the script below, but it shows only the privileges that are configured. I need help exporting every field, with its value, whether it is configured or not. Please advise.
    $output = @()
    $temp = "c:\"
    $file = "$temp\privs.txt"
    $process = [diagnostics.process]::Start("secedit.exe", "/export /cfg $file /areas USER_RIGHTS")
    $process.WaitForExit()
    $in = Get-Content $file
    foreach ($line in $in) {
        if ($line.StartsWith("Se")) {
            $privilege = $line.Substring(0, $line.IndexOf("=") - 1)
            switch ($privilege) {
                "SeCreateTokenPrivilege" {$privilege = "Create a token object"}
                "SeAssignPrimaryTokenPrivilege" {$privilege = "Replace a process-level token"}
                "SeLockMemoryPrivilege" {$privilege = "Lock pages in memory"}
                "SeIncreaseQuotaPrivilege" {$privilege = "Adjust memory quotas for a process"}
                "SeUnsolicitedInputPrivilege" {$privilege = "Load and unload device drivers"}
                "SeMachineAccountPrivilege" {$privilege = "Add workstations to domain"}
                "SeTcbPrivilege" {$privilege = "Act as part of the operating system"}
                "SeSecurityPrivilege" {$privilege = "Manage auditing and the security log"}
                "SeTakeOwnershipPrivilege" {$privilege = "Take ownership of files or other objects"}
                "SeLoadDriverPrivilege" {$privilege = "Load and unload device drivers"}
                "SeSystemProfilePrivilege" {$privilege = "Profile system performance"}
                "SeSystemtimePrivilege" {$privilege = "Change the system time"}
                "SeProfileSingleProcessPrivilege" {$privilege = "Profile single process"}
                "SeCreatePagefilePrivilege" {$privilege = "Create a pagefile"}
                "SeCreatePermanentPrivilege" {$privilege = "Create permanent shared objects"}
                "SeBackupPrivilege" {$privilege = "Back up files and directories"}
                "SeRestorePrivilege" {$privilege = "Restore files and directories"}
                "SeShutdownPrivilege" {$privilege = "Shut down the system"}
                "SeDebugPrivilege" {$privilege = "Debug programs"}
                "SeAuditPrivilege" {$privilege = "Generate security audit"}
                "SeSystemEnvironmentPrivilege" {$privilege = "Modify firmware environment values"}
                "SeChangeNotifyPrivilege" {$privilege = "Bypass traverse checking"}
                "SeRemoteShutdownPrivilege" {$privilege = "Force shutdown from a remote system"}
                "SeUndockPrivilege" {$privilege = "Remove computer from docking station"}
                "SeSyncAgentPrivilege" {$privilege = "Synchronize directory service data"}
                "SeEnableDelegationPrivilege" {$privilege = "Enable computer and user accounts to be trusted for delegation"}
                "SeManageVolumePrivilege" {$privilege = "Manage the files on a volume"}
                "SeImpersonatePrivilege" {$privilege = "Impersonate a client after authentication"}
                "SeCreateGlobalPrivilege" {$privilege = "Create global objects"}
                "SeTrustedCredManAccessPrivilege" {$privilege = "Access Credential Manager as a trusted caller"}
                "SeRelabelPrivilege" {$privilege = "Modify an object label"}
                "SeIncreaseWorkingSetPrivilege" {$privilege = "Increase a process working set"}
                "SeTimeZonePrivilege" {$privilege = "Change the time zone"}
                "SeCreateSymbolicLinkPrivilege" {$privilege = "Create symbolic links"}
                "SeDenyInteractiveLogonRight" {$privilege = "Deny local logon"}
                "SeRemoteInteractiveLogonRight" {$privilege = "Allow logon through Terminal Services"}
                "SeServiceLogonRight" {$privilege = "Logon as a service"}
                "SeIncreaseBasePriorityPrivilege" {$privilege = "Increase scheduling priority"}
                "SeBatchLogonRight" {$privilege = "Log on as a batch job"}
                "SeInteractiveLogonRight" {$privilege = "Log on locally"}
                "SeDenyNetworkLogonRight" {$privilege = "Deny Access to this computer from the network"}
                "SeNetworkLogonRight" {$privilege = "Access this Computer from the Network"}
            }
            # Everything after "=" is a comma-separated list of SIDs, each prefixed with "*"
            $sids = $line.Substring($line.IndexOf("=") + 1, $line.Length - ($line.IndexOf("=") + 1))
            $sids = $sids.Trim() -split ","
            $readableNames = ""
            foreach ($str in $sids) {
                $str = $str.Substring(1)   # drop the leading "*"
                $sid = New-Object System.Security.Principal.SecurityIdentifier($str)
                $readableName = $sid.Translate([System.Security.Principal.NTAccount])
                $readableNames = $readableNames + $readableName.Value + ", "
            }
            $output += New-Object PSObject -Property @{
                privilege     = $privilege
                readableNames = $readableNames.Substring(0, $readableNames.Length - 2)   # trim trailing ", "
            }
        }
    }
    $output

    As an alternate approach, we can preset the hash and just update it. This version also deals with trapping errors.
    function Get-UserRights {
        Param(
            [string]$tempfile = "$env:TEMP\secedit.ini"
        )
        $p = Start-Process 'secedit.exe' -ArgumentList "/export /cfg $tempfile /areas USER_RIGHTS" -NoNewWindow -Wait -PassThru
        if ($p.ExitCode -ne 0) {
            Write-Error "SECEDIT exited with error:$($p.ExitCode)"
            return
        }
        $selines = Get-Content $tempfile | ? {$_ -match '^Se'}
        Remove-Item $tempfile -EA 0
        $dct = $selines | ConvertFrom-StringData
        # Preset every right/privilege so unconfigured ones still appear (as $null)
        $hash = @{
            SeCreateTokenPrivilege = $null
            SeAssignPrimaryTokenPrivilege = $null
            SeLockMemoryPrivilege = $null
            SeIncreaseQuotaPrivilege = $null
            SeUnsolicitedInputPrivilege = $null
            SeMachineAccountPrivilege = $null
            SeTcbPrivilege = $null
            SeSecurityPrivilege = $null
            SeTakeOwnershipPrivilege = $null
            SeLoadDriverPrivilege = $null
            SeSystemProfilePrivilege = $null
            SeSystemtimePrivilege = $null
            SeProfileSingleProcessPrivilege = $null
            SeCreatePagefilePrivilege = $null
            SeCreatePermanentPrivilege = $null
            SeBackupPrivilege = $null
            SeRestorePrivilege = $null
            SeShutdownPrivilege = $null
            SeDebugPrivilege = $null
            SeAuditPrivilege = $null
            SeSystemEnvironmentPrivilege = $null
            SeChangeNotifyPrivilege = $null
            SeRemoteShutdownPrivilege = $null
            SeUndockPrivilege = $null
            SeSyncAgentPrivilege = $null
            SeEnableDelegationPrivilege = $null
            SeManageVolumePrivilege = $null
            SeImpersonatePrivilege = $null
            SeCreateGlobalPrivilege = $null
            SeTrustedCredManAccessPrivilege = $null
            SeRelabelPrivilege = $null
            SeIncreaseWorkingSetPrivilege = $null
            SeTimeZonePrivilege = $null
            SeCreateSymbolicLinkPrivilege = $null
            SeDenyInteractiveLogonRight = $null
            SeRemoteInteractiveLogonRight = $null
            SeServiceLogonRight = $null
            SeIncreaseBasePriorityPrivilege = $null
            SeBatchLogonRight = $null
            SeInteractiveLogonRight = $null
            SeDenyNetworkLogonRight = $null
            SeNetworkLogonRight = $null
        }
        for ($i = 0; $i -lt $dct.Count; $i++) {
            $hash[$dct.keys[$i]] = $dct.Values[$i].Split(',')
        }
        $privileges = New-Object PsObject -Property $hash
        $privileges
    }
    Get-UserRights
    A full version would be pipelined and remoted or, perhaps use a workflow to access remote machines in parallel.
    ¯\_(ツ)_/¯

  • Changing Character set in SAP BODS Data Transport

    Hi Experts,
    I am facing issue in extracting data from SAP.
    Job details: I am using an ABAP data Flow which fetches the data from SAP and loads into Oracle table using Data Transport.
    Its giving me below error while executing my job:
    (12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
                                                         End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
                                                         columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
                                                         schema specification for the file format. The number of rows processed was <8870>.
    Reason: when I analyzed this, I found the cause is the presence of special characters in the data. The data file generated in the SAP working directory (available on the SAP application server) uses SAP code page 1100, so the file delimiter and the special characters are both represented with #. Once the ABAP is executed and the data is read from the file, BODS treats the # as a delimiter and throws the above error.
    I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the Code Page value to UTF-8 in the SAP datastore properties, but this didn't work either.
    Please let me know what needs to be done to resolve this issue. Is there any way we can change the character set while reading from the generated data file in BODS, to convert code page 1100 to UTF-8?
    Thanks in advance.
    Regards,
    Sudheer.

    Unfortunately, I am no longer working on this particular project/problem. What I did discover, though, is that /127 actually refers to the character <control>+<backspace>. (http://en.wikipedia.org/wiki/Delete_character)
    In SAP this and any other unknown characters get converted to #.
    The conclusion I came to at the time, was that these characters made their way into the actual data and was causing the issue. In fact I think it is still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
    I think I did try to make the changes on the above mentioned file, but without success.
