Character Validation Performance

I need to validate some strings that I'll be writing to files; specifically, that the strings only contain the following characters:
{'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F',
'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V',
'W', 'X', 'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.',
'-', '(', ')', '/', '=', ',', ' '}
The only way I can think to do this is to convert each string into a char[] and then loop through the array checking that each character is valid, but I'm worried that this will give really bad performance. What's the fastest way to validate the strings?
P.S. I can't use 1.4, so the regex package isn't an option, and I can't validate the Strings on input either (they come from an external source).

After writing a test class, I found that I was probably overly concerned about the performance hit:
public class Tester {
  private static final char[] valid = new char[] {'a', 'b', 'c', 'd', 'e',
       'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
       'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I',
       'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
       'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ' ', '.', '-',
       '(', ')', '/', '=', ','};

  private static final String invalidString =
       "a!a?s$f%f^r&g*t(erfe)_ee+}ee{er~yu@:?k>kik<uluil|oo#454'.;jy5[";

  public Tester() {
    final long start = System.currentTimeMillis();
    for (int x = 0; x < 100000; x++) validate(invalidString);
    System.out.println(System.currentTimeMillis() - start);
  }

  public static void main(String[] args) {
    new Tester();
  }

  private String validate(String s) {
    char[] validate = s.toCharArray();
    for (int x = 0; x < validate.length; x++) {
      boolean found = false;
      final char c = validate[x];
      for (int y = 0; y < valid.length; y++) {
        if (c == valid[y]) {
          found = true;
          break;
        }
      }
      if (!found) validate[x] = ' ';
    }
    return String.valueOf(validate);
  }
}
I found that 100,000 iterations took approximately 1400 ms (JDK 1.3). Is there an easy way to make that method any faster?
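
For what it's worth, one common way to avoid the nested loop is a lookup table indexed by the character's value; below is a minimal sketch (the class name LookupTester, the method names, and the 128-entry ASCII-only table are my own assumptions, not something from the original post):

public class LookupTester {
  // true at index c means character c is allowed; this covers 7-bit ASCII only (an assumption).
  private static final boolean[] ALLOWED = new boolean[128];

  static {
    final String valid =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .-()/=,";
    for (int i = 0; i < valid.length(); i++) ALLOWED[valid.charAt(i)] = true;
  }

  // Replaces every disallowed character with a space, like validate() above.
  static String cleanup(String s) {
    final char[] chars = s.toCharArray();
    for (int i = 0; i < chars.length; i++) {
      final char c = chars[i];
      if (c >= 128 || !ALLOWED[c]) chars[i] = ' ';
    }
    return String.valueOf(chars);
  }

  public static void main(String[] args) {
    System.out.println(cleanup("a!a?s$f%f^r&g*t(erfe)_ee"));
  }
}

This trades the inner loop over the 71 valid characters for a single array index per character, which should be noticeably faster on long strings; whether it matters at 1400 ms per 100,000 calls is a judgment call.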

Similar Messages

  • How to create security character validation to avoid automated programs

    Hi All,
    I am developing a web page for creating user accounts. I would like to add a security character validation (a CAPTCHA) to prevent misuse by automated programs.
    This is what you may have seen on all popular web pages where you are asked to enter a scrambled set of characters into a text field before submitting your request.
    Please provide any samples or direct me where I can get some help on this.
    Your help is greatly appreciated.
    TIA,
    CK

    Why look for samples when you can use your own logic and thinking?
    My approach ...
    1) Create a lot of scrambled images containing text like 'rhoit', 'rohit', ...
    The file naming convention should match the text, e.g. rhoit.jpg, rohit.jpg, ...
    2) Add the image names to a table and then populate a HashMap / ArrayList containing the image names.
    3) Randomly pick an arbitrary element from the ArrayList and display the corresponding image on the create-user-accounts page.
    4) Compare the image name with the text value the user entered. The expected value should be set in a request parameter.
    5) Handle the success/failure condition.
    I hope that should work fine.
    Regards
    -Rohit
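
    As a rough sketch of steps 3 and 4 in Rohit's reply (the class CaptchaCheck and its method names are made up; only the pick-a-random-image-name-and-compare idea comes from the reply):
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class CaptchaCheck {
      private static final Random RANDOM = new Random();

      // Step 3: pick a random image name from the pre-populated list.
      static String pickRandom(List imageNames) {
        return (String) imageNames.get(RANDOM.nextInt(imageNames.size()));
      }

      // Step 4: compare the expected image name with what the user typed.
      static boolean matches(String expectedImageName, String userInput) {
        // the image file is e.g. "rhoit.jpg"; the user is expected to type "rhoit"
        String expectedText = expectedImageName.substring(0, expectedImageName.lastIndexOf('.'));
        return expectedText.equals(userInput);
      }

      public static void main(String[] args) {
        List names = new ArrayList();
        names.add("rhoit.jpg");
        names.add("rohit.jpg");
        String picked = pickRandom(names);
        System.out.println(picked + " matches 'rohit'? " + matches(picked, "rohit"));
      }
    }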

  • Special character Validation: error

    HI Guys,
    I wrote a validation to restrict the use of special characters.
    It's a formula-based validation, both REAL TIME and BATCH, on the node.
    HasCharacters(Custom.EPMA_Alias, ["]).
    However, when I go to put an Alias on a member, it tells me there is a special character in this, but there is not.
    I also tried HasCharacters(PropValueCustom.EPMA_Alias0, ["]) with same results.
    I also need to add the following characters:
    |
    Thank you for all your help!.

    Hi,
    DRM will not support the syntax ["]; instead, try something like this for your validation logic:
    And(
    Equals(Integer,Length(PropValue(Custom.Prop)),Length(ReplaceStr(PropValue(Custom.Prop),|,,T))),
    Equals(Integer,Length(PropValue(Custom.Prop)),Length(ReplaceStr(PropValue(Custom.Prop),",,T))),
    Equals(Integer,Length(PropValue(Custom.Prop)),Length(ReplaceStr(PropValue(Custom.Prop),%,,T))),
    Equals(Integer,Length(PropValue(Custom.Prop)),Length(ReplaceStr(PropValue(Custom.Prop),*,,T)))
    )

  • Java character validation

    Hi there,
    I know this question may seem very elementary, but would anyone know the opposite of:
    Character.isDigit(myVariable.charAt(0))
    I tried using the following, but it didn't do anything:
    !Character.isDigit(myVariable.charAt(0))
    Is there something that I am doing wrong?
    Thanks

    Hi,
    It seems that I have discovered what the issue was. I was actually using the above code in a conditional in combination with a string length check, such as the one below:
    (myVariable.length() != 1) && (!Character.isDigit(myVariable.charAt(0)))
    However, it was not catching the case where a single letter was typed. What I should have done, as I later discovered, was:
    (myVariable.length() != 1) || (!Character.isDigit(myVariable.charAt(0)))
    so that either condition triggers the validation and it does not exhibit weird behaviour. However, I wonder if someone could explain why the previous syntax was not validating for a single letter typed.
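
    For what it's worth, short-circuit evaluation explains the behaviour: with &&, a one-character non-digit input makes (myVariable.length() != 1) false, so the digit check on the right never runs and the whole condition is false. A tiny sketch (the variable name follows the snippets above):
    public class ShortCircuitDemo {
      public static void main(String[] args) {
        String myVariable = "a"; // a single letter, not a digit

        // && version: the length check is false, so the digit check is skipped entirely
        boolean andVersion = (myVariable.length() != 1) && !Character.isDigit(myVariable.charAt(0));

        // || version: the digit check still runs and flags the input
        boolean orVersion = (myVariable.length() != 1) || !Character.isDigit(myVariable.charAt(0));

        System.out.println("&& flags it: " + andVersion); // false
        System.out.println("|| flags it: " + orVersion);  // true
      }
    }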

  • Japanese character validation

    Hi guys,
    The app I'm working on needs to support some entity names containing either Japanese characters or English characters. I need a validator method that reports whether the name that the user chose is valid or not.
    Here's a snippet:
    static boolean validate(String nameToValidate) {
        char[] arr = nameToValidate.toCharArray();
        for (int i = 0; i < arr.length; i++) {
            if (!Character.isLetterOrDigit(arr[i]))
                return false;
        }
        return true;
    }
    That's ok if my name supports only letters and digits, but it should also support ':' (colon), '_' (underscore), '-'(dash) and '.' (dot), AND their Japanese versions. How should I do the comparisons in this case to make sure arr[i] is an accepted character? Using == '-' is obviously not an option, and I'd hate it if I had to depend on some hardcoded Unicode values. Any ideas?
    Thanks!

    Idea: take all allowed characters, save the String to a properties file, and check for each letter of the username whether it's in that String.
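
    A minimal sketch of that idea (the property name allowed.chars and the properties file are made up; the point is just Character.isLetterOrDigit plus String.indexOf against a configurable set of extra allowed characters):
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class NameValidator {
      private final String allowed;

      // Loads the extra allowed characters (e.g. ":_-." plus their full-width variants)
      // from a properties file under the key "allowed.chars".
      NameValidator(String propertiesFile) throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream(propertiesFile);
        try {
          props.load(in);
        } finally {
          in.close();
        }
        allowed = props.getProperty("allowed.chars", "");
      }

      boolean validate(String nameToValidate) {
        for (int i = 0; i < nameToValidate.length(); i++) {
          char c = nameToValidate.charAt(i);
          // letters and digits (including Japanese letters) pass; anything else must be listed
          if (!Character.isLetterOrDigit(c) && allowed.indexOf(c) < 0)
            return false;
        }
        return true;
      }
    }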

  • SOLVED: XSD validation performance disaster

    Is it possible to speed up the XMLType schemaValidate() and isSchemaValid() functions somehow?
    The following statement takes ~one second of CPU time per table row on a Pentium 4:
    update tbl set is_valid = func(xml);
    where func(xml) is begin return xml.isSchemaValid('schema.xsd'); end;
    It looks like a full XSD re-parse happens for every row.
    The xml is 600 bytes long on average and the schema is 40k of formatted text. The XML column is a non-schema-based XMLType. The schema name is constant, but I need to process a relatively large number of xml fragments.
    Server is 10.2.0.3 so bug 5099703 does not apply.
    I see two complementary options:
    - load Xerces2-J and validate with grammar caching, and hope it may be faster;
    - split the work between dbms_scheduler one-shot jobs to do the processing in parallel, or maybe just a parallel update will be enough.
    But keeping it simple would be the best option of all.

    I hope my solution will help someone struggling with the poor performance of the PL/SQL XMLType validation functions.
    The performance in my case is two orders of magnitude better now: 80 rows/sec instead of 1 row/sec.
    I created a custom function that can be used as:
    update messages set xml_is_valid =
        decode(xml_validator.isSchemaValid(xmlserialize(content xml), 'KRFileR.xsd'), 1, 'Y', 'N')
    The XML_VALIDATOR package is:
    create or replace package xml_validator as
    function isSchemaValid(xml clob, schema varchar2) return number;
    procedure schemaValidate(xml clob, schema varchar2);
    end;
    show err
    create or replace package body xml_validator as
    function isSchemaValid(xml clob, schema varchar2) return number
        as language java
        name 'xmlvalidator.XMLValidator.isSchemaValid(
            oracle.sql.CLOB,
            java.lang.String
            ) return int';
    procedure schemaValidate(xml clob, schema varchar2)
        as language java
        name 'xmlvalidator.XMLValidator.schemaValidate(
            oracle.sql.CLOB,
            java.lang.String
            )';
    end;
    show err
    The XMLValidator class itself:
    package xmlvalidator;
    import java.io.IOException;
    import java.io.ByteArrayOutputStream;
    import java.io.StringReader;
    import java.util.HashMap;
    import java.sql.SQLException;
    import oracle.xml.parser.v2.*;
    import oracle.xml.parser.schema.*;
    import oracle.sql.CLOB;
    import org.xml.sax.EntityResolver;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;
    public class XMLValidator {
      //protected static HashMap<String, DOMParser> cache = new HashMap<String, DOMParser>();
        protected static HashMap cache = new HashMap(); // oracle java is 1.4
        protected static EntityResolver resolver = new OracleSchemaEntityResolver();
        protected static ByteArrayOutputStream errors = new ByteArrayOutputStream();

        public static String getErrors() {
            return errors.toString();
        }

        // mimick XMLType functions semantic
        public static int isSchemaValid(CLOB xml, String schema) {
            try {
                schemaValidate(xml, schema);
            } catch (Exception e) {
                return 0;
            }
            return 1;
        }

        public static void schemaValidate(CLOB xml, String schema)
                throws XMLParseException, SAXException, IOException, XSDException, SQLException {
          //DOMParser parser = cache.get(schema);
            DOMParser parser = (DOMParser)cache.get(schema);
            if (parser == null) {
                XSDBuilder xsdbuild = new XSDBuilder();
                xsdbuild.setEntityResolver(resolver);
                XMLSchema xmlschema = xsdbuild.build(resolver.resolveEntity(null, schema));
                parser = new DOMParser();
                parser.setXMLSchema(xmlschema);
                parser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
                parser.setErrorStream(errors);
                cache.put(schema, parser); // the whole parser is cached to avoid unknown
                                           // hidden costs of creating new parser
            }
            errors.reset();
            parser.reset();
            parser.parse(new InputSource(xml.getCharacterStream()));
            parser.reset();
        }
    }
    The OracleSchemaEntityResolver class to pull the schema definition from Oracle registered schemas:
    package xmlvalidator;
    import oracle.xml.parser.schema.*;
    import oracle.xml.parser.v2.*;
    import java.net.*;
    import java.io.*;
    import org.w3c.dom.*;
    import org.xml.sax.*;
    import java.util.*;
    import java.sql.*;
    import oracle.jdbc.*;
    public class OracleSchemaEntityResolver implements EntityResolver {
        protected static Connection conn;
        protected static PreparedStatement stmt;

        static {
            try {
                conn = DriverManager.getConnection("jdbc:default:connection:");
                stmt = conn.prepareStatement(
                    "select xmlserialize(document schema) schema from user_xml_schemas where schema_url = ?");
            } catch(Exception e) { throw new RuntimeException(e); }
        }

        public InputSource resolveEntity(String targetNS, String systemId)
                throws SAXException, IOException {
            Reader r;
          //System.out.println("resolving: " + targetNS + ":" + systemId);
            try {
                stmt.setString(1, systemId);
                ResultSet rs = stmt.executeQuery();
                rs.next();
                r = rs.getCharacterStream(1);
                rs.close(); // CLOB locator should be still valid even after ResultSet is closed
            } catch(SQLException e) { throw new RuntimeException(e); }
            InputSource src = new InputSource(r);
            src.setSystemId(systemId);
            return src;
        }
    }
    Create the JAR with makejar.bat:
    del XMLValidator.jar
    cd XMLValidator/src
    jar cf ../../XMLValidator.jar xmlvalidator/XMLValidator.java xmlvalidator/OracleSchemaEntityResolver.java
    pause
    Load into the database with load.bat:
    set u=scott/qwerty@//server.domain.com:1521/orcl
    set r="((* SCOTT)(* PUBLIC)(* SYS))"
    call loadjava -thin -force -user %u% -resolver %r% -resolve XMLValidator.jar
    pause
    Enjoy!

  • Special Character Validation and Reset Focus

    Hi All,
    I have developed a Web Dynpro Java application which contains many input fields.
    Client-side validation needs to be handled to restrict special characters in each of the fields.
    Validation handled through the onEnter or onChange event of a particular field shows the exception, but clicking another field causes the exception to be ignored.
    My requirement goes this way:
    field1
    field2
    field n
    Validation Button
    On click of the Validation button, an exception should be raised for every field that has invalid characters.
    It should be displayed in the message area (using MessageManager) and should carry the name of the field causing the exception. On clicking the exception, it should move the cursor to the erroneous field.
    Kindly share the methods of any standard interface which does this job, or customized methods if any.
    Best Regards,
    Suresh S

    try WDPermanentMessage

  • XML creation/parsing/validation & performance

    Hello friends. I'm faced with a situation in which I need to create and parse many relatively small xml documents. Most of the XML I will be exchanging is very simple, such as:
    <?xml version="1.0"?>
    <transactionId>12345</transactionId>
    <accountId>98765</accountId>
    <address>123 Main Street</address>
    I have done enough research and prototyping to create and parse these simple documents using the DOM model. I have really started to wonder whether I'd be better off simply hardcoding the creation and parsing of these documents. I'm getting some of these documents as a String over HTTP and then creating a DOM which I then parse, and I'm just not sure if what I gain in good programming practice is worth incurring the overhead of the DOM creation and parsing process.
    I'm even more divided on the issue of using DTDs or XML Schemas to validate the structure of these tiny documents. I will have to exchange a lot of these tiny XML messages and am comfortable with the fact that the parties I am trading XML with are not going to be changing things up on the fly.
    On the other hand, I am a beginner in the XML world and am sure there could be a hundred things I haven't thought of.
    I am grateful for any assistance.
    CM

    Also,
    I assume that the values in your XML document are values from Java objects, so why not have an interface defining a "toXml()" method for application classes to implement. All a class would need to do is know how to write its properties (variables) to an XML document. Again, JDOM can handle this very easily... see the code below (and the interface sketch after it).
    /** method to return the entity properties as an XML string.
     * @return the object's properties in an XML document String.
     */
    public String toXml() {
        XMLOutputter out = null;
        String xml = null;
        Document document = new Document(new Element( "gallery-xml" ));
        Element root = document.getRootElement();
        Element content = new Element("gallery");
        content.addContent(new Comment("Details of Gallery as an XML document"));
        // Add attributes to xml document
        DateFormat format = DateFormat.getDateInstance();
        content.addContent(new Element( "image-id" ).addContent( getUUID() ));
        content.addContent(new Element( "name" ).addContent( this.getImageName() ));
        content.addContent(new Element( "student-name" ).addContent( this.getStudentName() ));
        //content.addContent(new Element( "image-bytes" ).addContent( new String(this.getImageBytes()) ));
        content.addContent(new Element( "image-file" ).addContent( this.getImageFile() ));
        content.addContent(new Element( "date-created" ).addContent( format.format( this.getDateCreated() )));
        content.addContent(new Element( "description" ).addContent( this.getDescription() ));
        content.addContent(new Element( "subset" ).addContent( this.getSubset() ));
        content.addContent(new Element( "year" ).addContent( this.getYear() ));
        content.addContent(new Element( "course-id" ).addContent( this.getCourseId() ));
        root.addContent( content );
        //write to a string
        out = new XMLOutputter(" ", true);
        xml = out.outputString(document);
        if (Logger.DEBUG) System.out.println( out.outputString( document ) );
        return xml;
    }
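
    For completeness, a minimal sketch of the interface this reply has in mind (the name XmlSerializable is made up; only the toXml() method is named in the reply):
    // Hypothetical interface for classes that can render themselves as XML,
    // as suggested above; only toXml() comes from the reply itself.
    public interface XmlSerializable {
        /** @return the object's properties as an XML document String. */
        String toXml();
    }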

  • Change character set

    Hi
    is anyone can tell me how to change characterset.
    i try with alter session but it doesnt work.
    thanks

    Article from Metalink
    Doc ID:      Note:66320.1
    Subject:      Changing the Database Character Set or the Database National Character Set
    Type:      BULLETIN
    Status:      PUBLISHED
         Content Type:      TEXT/PLAIN
    Creation Date:      23-OCT-1998
    Last Revision Date:      12-DEC-2003
    PURPOSE ======= To explain how to change the database character set or national character set of an existing Oracle8(i) or Oracle9i database without having to recreate the database. 1. SCOPE & APPLICATION ====================== The method described here is documented in the Oracle 8.1.x and Oracle9i documentation. It is not documented but it can be used in version 8.0.x. It does not work in Oracle7. The database character set is the character set of CHAR, VARCHAR2, LONG, and CLOB data stored in the database columns, and of SQL and PL/SQL text stored in the Data Dictionary. The national character set is the character set of NCHAR, NVARCHAR2, and NCLOB data. In certain database configurations the CLOB and NCLOB data are stored in the fixed-width Unicode encoding UCS-2. If you are using CLOB or NCLOB please make sure you read section "4. HANDLING CLOB AND NCLOB COLUMNS" below in this document. Before changing the character set of a database make sure you understand how Oracle deals with character sets. Before proceeding please refer to [NOTE:158577.1] "NLS_LANG Explained (How Does Client-Server Character Conversion Work?)". See also [NOTE:225912.1] "Changing the Database Character Set - an Overview" for general discussion about various methods of migration to a different database character set. If you are migrating an Oracle Applications instance, read [NOTE:124721.1] "Migrating an Applications Installation to a New Character Set" for specific steps that have to be performed. If you are migrating from 8.x to 9.x please have a look at [NOTE:140014.1] "ALERT: Oracle8/8i to Oracle9i Using New "AL16UTF16"" and other referenced notes below. Before using the method described in this note it is essential to do a full backup of the database and to use the Character Set Scanner utility to check your data. See the section "2. USING THE CHARACTER SET SCANNER" below. Note that changing the database or the national character set as described in this document does not change the actual character codes, it only changes the character set declaration. If you want to convert the contents of the database (character codes) from one character set to another you must use the Oracle Export and Import utilities. This is needed, for example, if the source character set is not a binary subset of the target character set, i.e. if a character exists in the source and in the target character set but not with the same binary code. All binary subset-superset relationships between characters sets recognized by the Oracle Server are listed in [NOTE:119164.1] "Changing Database Character Set - Valid Superset Definitions". Note: The varying width character sets (like UTF8) are not supported as national character sets in Oracle8(i) (see [NOTE:62107.1]). Thus, changing the national character set from a fixed width character set to a varying width character set is not supported in Oracle8(i). NCHAR types in Oracle8 and Oracle8i were designed to support special Oracle specific fixed-width Asian character sets, that were introduced to provide higher performance processing of Asian character data. Examples of these character sets are : JA16EUCFIXED ,JA16SJISFIXED , ZHT32EUCFIXED. For a definition of varying width character sets see also section "4. HANDLING CLOB AND NCLOB COLUMNS" below. WARNING: Do not use any undocumented Oracle7 method to change the database character set of an Oracle8(i) or Oracle9i database. This will corrupt the database. 2. 
USING THE CHARACTER SET SCANNER ================================== Character data in the Oracle 8.1.6 and later database versions can be efficiently checked for possible character set migration problems with help of the Character Set Scanner utility. This utility is included in the Oracle Server 8.1.7 software distribution and the newest Character Set Scanner version can be downloaded from the Oracle Technology Network site, http://otn.oracle.com The Character Set Scanner on OTN is available for limited number of platforms only but it can be used with databases on other platforms in the client/server configuration -- as long as the database version matches the Character Set Scanner version and platforms are either both ASCII-based or both EBCDIC-based. It is recommended to use the newest Character Set Scanner version available from the OTN site. The Character Set Scanner is documented in the following manuals: - "Oracle8i Documentation Addendum, Release 3 (8.1.7)", Chapter 3 - "Oracle9i Globalization Support Guide, Release 1 (9.0.1)", Chapter 10 - "Oracle9i Database Globalization Support Guide, Release 2 (9.2)", Chapter 11 Note: The Character Set Scanner coming with Oracle 8.1.7 and Oracle 9.0.1 does not have a separate version number. It reports the database release number in its banner. This version of the Scanner does not check for illegal character codes in a database if the FROMCHAR and TOCHAR (or FROMNCHAR and TONCHAR) parameters have the same value (i.e. you simulate migration from a character set to itself). The Character Set Scanner 1.0, available on OTN, reports its version number as x.x.x.1.0, where x.x.x is the database version number. This version adds a few bug fixes and it supports FROMCHAR=TOCHAR provided it is not UTF8. The Character Set Scanner 1.1, available on OTN and with Release 2 (9.2) of the Oracle Server, reports its version number as v1.1 followed by the database version number. This version adds another bug fixes and the full support for FROMCHAR=TOCHAR. None of the above versions of the Scanner can correctly analyze CLOB or NCLOB values if the database or the national character set, respectively, is multibyte. The Scanner reports such values randomly as Convertible or Lossy. The version 1.2 of the Scanner will mark all such values as Changeless (as they are always stored in the Unicode UCS-2 encoding and thus they do not change when the database or national character set is changed from one multibyte to another). Character Set Scanner 2.0 will correctly check CLOBs and NCLOBs for possible data loss when migrating from a multibyte character set to its subset. To verify that your database contains only valid codes, specify the new database character set in the TOCHAR parameter and/or the new national character set in the TONCHAR parameter. Specify FULL=Y to scan the whole database. Set ARRAY and PROCESS parameters depending on your system's resources to speed up the scanning. FROMCHAR and FROMNCHAR will default to the original database and national character sets. The Character Set Scanner should report only Changless data in both the Data Dictionary and in application data. If any Convertible or Exceptional data are reported, the ALTER DATABASE [NATIONAL] CHARACTER SET statement must not be used without further investigation of the source and type of these data. 
In situations in which the ALTER DATABASE [NATIONAL] CHARACTER SET statement is used to repair an incorrect database character set declaration rather than to simply migrate to a new wider character set, you may be advised by Oracle Support Services analysts to execute the statement even if Exceptional data are reported. For more information see also [NOTE:225912.1] "Changing the Database Character Set - a short Overview". 3. CHANGING THE DATABASE OR THE NATIONAL CHARACTER SET ====================================================== Oracle8(i) introduces a new documented method of changing the database and national character sets. The method uses two SQL statements, which are described in the Oracle8i National Language Support Guide: ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set> ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set> The database name is optional. The character set name should be specified without quotes, for example: ALTER DATABASE CHARACTER SET WE8ISO8859P1 To change the database character set perform the following steps. Note that some of them have been erroneously omitted from the Oracle8i documentation: 1. Use the Character Set Scanner utility to verify that your database contains only valid character codes -- see "2. USING THE CHARACTER SET SCANNER" above. 2. If necessary, prepare CLOB columns for the character set change -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. Omitting this step can lead to corrupted CLOB/NCLOB values in the database. If SYS.METASTYLESHEET (STYLESHEET) is populated (9i and up only) then see [NOTE:213015.1] "SYS.METASTYLESHEET marked as having convertible data (ORA-12716 when trying to convert character set)" for the actions that need to be taken. 3. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all. 4. Execute the following commands in Server Manager (Oracle8) or sqlplus (Oracle9), connected as INTERNAL or "/ AS SYSDBA": SHUTDOWN IMMEDIATE; -- or NORMAL <do a full database backup> STARTUP MOUNT; ALTER SYSTEM ENABLE RESTRICTED SESSION; ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0; ALTER SYSTEM SET AQ_TM_PROCESSES=0; ALTER DATABASE OPEN; ALTER DATABASE CHARACTER SET <new_character_set>; SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP RESTRICT; 5. Restore the parallel_server parameter in INIT.ORA, if necessary. 6. Execute the following commands: SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP; The double restart is necessary in Oracle8(i) because of a SGA initialization bug, fixed in Oracle9i. 7. If necessary, restore CLOB columns -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both statements together if you wish. Error Conditions ---------------- A number of error conditions may be reported when trying to change the database or national character set. In Oracle8(i) the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: ORA-01679: database must be mounted EXCLUSIVE and not open to activate - if you do not enable restricted session - if you startup the instance in PARALLEL/SHARED mode - if you do not set the number of queue processes to 0 - if you do not set the number of AQ time manager processes to 0 - if anybody is logged in apart from you. This error message is misleading. The command requires the database to be open but only one session, the one executing the command, is allowed. 
For the above error conditions Oracle9i will report one of the errors: ORA-12719: operation requires database is in RESTRICTED mode ORA-12720: operation requires database is in EXCLUSIVE mode ORA-12721: operation cannot execute when other sessions are active Oracle9i can also report: ORA-12718: operation requires connection as SYS if you are not connect as SYS (INTERNAL, "/ AS SYSDBA"). If the specified new character set name is not recognized, Oracle will report one of the errors: ORA-24329: invalid character set identifier ORA-12714: invalid national character set specified ORA-12715: invalid character set specified The ALTER DATABASE [NATIONAL] CHARACTER SET command will only work if the old character set is considered a binary subset of the new character set. Oracle Server 8.0.3 to 8.1.5 recognizes US7ASCII as the binary subset of all ASCII-based character sets. It also treats each character set as a binary subset of itself. No other combinations are recognized. Newer Oracle Server versions recognize additional subset/superset combinations, which are listed in [NOTE:119164.1]. If the old character set is not recognized as a binary subset of the new character set, the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: - in Oracle 8.1.5 and above: ORA-12712: new character set must be a superset of old character set - in Oracle 8.0.5 and 8.0.6: ORA-12710: new character set must be a superset of old character set - in Oracle 8.0.3 and 8.0.4: ORA-24329: invalid character set identifier You will also get these errors if you try to change the characterset of a US7ASCII database that was started without a (correct) ORA_NLSxx parameter. See [NOTE:77442.1] It may be necessary to switch off the superset check to allow changes between formally incompatible character sets to solve certain character set problems or to speed up migration of huge databases. Oracle Support Services may pass the necessary information to customers after verifying the safety of the change for the customers' environments. If in Oracle9i an ALTER DATABASE NATIONAL CHARACTER SET is issued and there are N-type colums who contain data then this error is returned: ORA-12717:Cannot ALTER DATABASE NATIONAL CHARACTER SET when NCLOB data exists The error only speaks about Nclob but Nchar and Nvarchar2 are also checked see [NOTE:2310895.9] for bug [BUG:2310895] 4. HANDLING CLOB AND NCLOB COLUMNS ================================== Background ---------- In a fixed width character set codes of all characters have the same number of bytes. Fixed width character sets are: all single-byte character sets and those multibyte character sets which have names ending with 'FIXED'. In Oracle9i the character set AL16UTF16 is also fixed width. In a varying width character set codes of different characters may have different number of bytes. All multibyte character sets except those with names ending with FIXED (and except Oracle9i AL16UTF16 character set) are varying width. Single-byte character sets are character sets with names of the form xxx7yyyyyy and xxx8yyyyyy. Each character code of a single-byte character set occupies exactly one byte. Multibyte character sets are all other character sets (including UTF8). Some -- usually most -- character codes of a multibyte character set occupy more than one byte. CLOB values in a database whose database character set is fixed width are stored in this character set. CLOB values in an Oracle 8.0.x database whose database character set is varying width are not allowed. They have to be NULL. 
CLOB values in an Oracle >= 8.1.5 database whose database character set is varying width are stored in the fixed width Unicode UCS-2 encoding. The same holds for NCLOB values and the national character set. The UCS-2 storage format of character LOB values, as implemented in Oracle8i, ensures that calculation of character positions in LOB values is fast. Finding the byte offset of a character stored in a varying width character set would require reading the whole LOB value up to this character (possibly 4GB). In the fixed width character sets the byte offsets are simply character offsets multiplied by the number of bytes in a character code. In UCS-2 byte offsets are simply twice the character offsets. As the Unicode character set contains all characters defined in any other Oracle character set, there is no data loss when a CLOB/NCLOB value is converted to UCS-2 from the character set in which it was provided by a client program (usually the NLS_LANG character set). CLOB Values and the Database Character Set Change ------------------------------------------------- In Oracle 8.0.x CLOB values are invalid in varying width character sets. Thus you must delete all CLOB column values before changing the database character set to a varying width character set. In Oracle 8.1.5 and later CLOB values are valid in varying width character sets but they are converted to Unicode UCS-2 before being stored. But UCS-2 encoding is not a binary superset of any other Oracle character set. Even codes of the basic ASCII characters are different, e.g. single-byte code for "A"=0x41 becomes two-byte code 0x0041. This implies that even if the new varying width character set is a binary superset of the old fixed width character set and thus VARCHAR2/LONG character codes remain valid, the fixed width character codes in CLOB values will not longer be valid in UCS-2. As mentioned above, the ALTER DATABASE [NATIONAL] CHARACTER SET statement does not change character codes. Thus, before changing a fixed width database character set to a varying width character set (like UTF8) in Oracle 8.1.5 or later, you first have to export all tables containing non-NULL CLOB columns, then truncate these tables, then change the database character set and, finally, import the tables back to the database. The import step will perform the required conversion. If you omit the steps above, the character set change will succeed in Oracle8(i) (Oracle9i disallows the change in such situation) and the CLOBs may appear to be correctly legible but as their encoding is incorrect, they will cause problems in further operations. For example, CREATE TABLE AS SELECT will not correctly copy such CLOB columns. Also, after installation of the 8.1.7.3 server patchset the CLOB columns will not longer be legible. LONG columns are always stored in the database character set and thus they behave like CHAR/VARCHAR2 in respect to the character set change. BLOBs and BFILEs are binary raw datatypes and their processing does not depend on any Oracle character set setting. NCLOB Values and the National Character Set Change -------------------------------------------------- The above discussion about changing the database character set and exporting and importing CLOB values is theoretically applicable to the change of the national character set and to NCLOB values. 
But as varying width character sets are not supported as national character sets in Oracle8(i), changing the national character set from a fixed width character set to a varying width character set is not supported at all. Preparing CLOB Columns for the Character Set Change --------------------------------------------------- Take a backup of the database. If using Advanced Replication or deferred transactions functionality, make sure that there are no outstanding deferred transactions with CLOB parameters, i.e. DEFLOB view must have no rows with non-NULL CLOB_COL column; to make sure that replication environment remains consistent use only recommended methods of purging deferred transaction queue, preferably quiescing the replication environment. Then: - If changing the database character set from a fixed width character set to a varying with character set in Oracle 8.0.x, set all CLOB column values to NULL -- you are not allowed to use CLOB columns after the character set change. - If changing the database character set from a fixed width character set to a varying width character set in Oracle 8.1.5 or later, perform table-level export of all tables containing CLOB columns, including SYSTEM's tables. Set NLS_LANG to the old database character set for the Export utility. Then truncate these tables. Restoring CLOB Columns after the Character Set Change ----------------------------------------------------- In Oracle 8.1.5 or later, after changing the character set as described above (steps 3. to 6.), restore CLOB columns exported in step 2. by importing them back into the database. Set NLS_LANG to the old database character set for the Import utility to avoid IMP-16 errors and data loss. RELATED DOCUMENTS: ================== [NOTE:13856.1] V7: Changing the Database Character Set -- This note has limited distribution, please contact Oracle Support [NOTE:62107.1] The National Character Set in Oracle8 [NOTE:119164.1] Changing Database Character set - Valid Superset definitions [NOTE:118242.1] ALERT: Changing the Database or National Character Set Can Corrupt LOB Values <Note.158577.1> NLS_LANG Explained (How Does Client-Server Character Conversion Work?) [NOTE:140014.1] ALERT: Oracle8/8i to Oracle9i using New "AL16UTF16" [NOTE:159657.1] Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i (incl. 9.2) [NOTE:124721.1] Migrating an Applications Installation to a New Character Set Oracle8i National Language Support Guide Oracle8i Release 3 (8.1.7) Readme - Section 18.12 "Restricted ALTER DATABASE CHARACTER SET Command Support (CLOB and NCLOB)" Oracle8i Documentation Addendum, Release 3 (8.1.7) - Chapter 3 "New Character Set Scanner Utility" Oracle8i Application Developer's Guide - Large Objects (LOBs), Release 2 - Chapter 2 "Basic Components" Oracle8 Application Developer's Guide, Release 8.0 - Chapter 6 "Large Objects (LOBs)", Section "Introduction to LOBs" Oracle9i Globalization Guide, Release 1 (9.0.1) Oracle9i Database Globalization Guide, Release 2 (9.2) For further NLS / Globalization information you may start here: [NOTE:150091.1] Globalization Technology (NLS) Library index .
         Copyright (c) 1995,2000 Oracle Corporation. All Rights Reserved. Legal Notices and Terms of Use.     
    Joel Pérez

  • JAXP API provides no way to enable XML Schema validation mode

    I am evaluating XDK 9.2.0.5.0 for Java and have encountered another
    problem with XML Schema support when using the JAXP API.
    Using the classes in oracle.xml.jaxp.*, it appears to be impossible to
    enable XML Schema validation mode. I can set the schema object, but
    the parser does not use it for validation.
    If I use DOMParser directly, I can call the setValidationMode(int)
    method to turn on schema validation mode. There is no equivalent
    means to enable XML Schema validation when using the JAXP API.
    Test case for JAXP API:
    === begin OracleJAXPSchemaTest.java ======
    package dfranklin;

    import java.io.*;
    import javax.xml.parsers.*;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.*;
    import javax.xml.transform.stream.*;
    import oracle.xml.parser.schema.XMLSchema;
    import oracle.xml.parser.schema.XSDBuilder;
    import org.w3c.dom.*;
    import org.xml.sax.*;

    public class OracleJAXPSchemaTest
    {
        private final static String xsd = ""
            +"<xsd:schema xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" targetNamespace=\"urn:dfranklin:test\">"
            +" <xsd:element name=\"amount\" type=\"xsd:integer\"/>"
            +"</xsd:schema>";

        private final static String xml =
            "<amount xmlns=\"urn:dfranklin:test\">1</amount>";

        public static void main(String[] args)
            throws Exception
        {
            System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
                    "oracle.xml.jaxp.JXDocumentBuilderFactory");
            new OracleJAXPSchemaTest().run();
        }

        public void run()
            throws Exception
        {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setValidating(true);
            dbf.setNamespaceAware(true);
            XMLSchema xmlSchema = buildXMLSchema(new StringReader(xsd));
            dbf.setAttribute("oracle.xml.parser.XMLParser.SchemaObject",
                    xmlSchema);
            DocumentBuilder docbldr = dbf.newDocumentBuilder();
            Document doc = docbldr.parse(new InputSource(new StringReader(xml)));
            print(doc);
        }

        private XMLSchema buildXMLSchema(Reader xsdin)
            throws Exception
        {
            XSDBuilder xsdBuilder = new XSDBuilder();
            XMLSchema xmlSchema = (XMLSchema)xsdBuilder.build(xsdin, null);
            return xmlSchema;
        }

        private void print(Document doc)
            throws
            javax.xml.transform.TransformerConfigurationException,
            javax.xml.transform.TransformerException
        {
            Transformer xformer = TransformerFactory.newInstance().newTransformer();
            xformer.transform(new DOMSource(doc), new StreamResult(System.out));
            System.out.println();
        }
    }
    === end OracleJAXPSchemaTest.java ======
    Running that program:
    java dfranklin.OracleJAXPSchemaTest
    produces this output
    Exception in thread "main" oracle.xml.parser.v2.XMLParseException: Element 'amount' used but not declared.
    at oracle.xml.parser.v2.XMLError.flushErrors(XMLError.java:148)
    at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:269)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:149)
    at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:96)
    at dfranklin.OracleJAXPSchemaTest.run(OracleJAXPSchemaTest.java:40)
    at dfranklin.OracleJAXPSchemaTest.main(OracleJAXPSchemaTest.java:27)
    This shows that the parser is not using the XML Schema to validate the
    document. I think it's expecting to find a DTD.
    Here is the equivalent program using DOMParser directly, and setting
    validation mode to SCHEMA_VALIDATION.
    === begin OracleParserTest.java ====
    package dfranklin;

    import java.io.*;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.*;
    import javax.xml.transform.stream.*;
    import oracle.xml.parser.schema.XMLSchema;
    import oracle.xml.parser.schema.XSDBuilder;
    import oracle.xml.parser.v2.DOMParser;
    import oracle.xml.parser.v2.XMLParser;
    import org.w3c.dom.*;
    import org.xml.sax.*;

    public class OracleParserTest
    {
        private final static String xsd = ""
            +"<xsd:schema xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" targetNamespace=\"urn:dfranklin:test\">"
            +" <xsd:element name=\"amount\" type=\"xsd:integer\"/>"
            +"</xsd:schema>";

        private final static String xml =
            "<amount xmlns=\"urn:dfranklin:test\">1</amount>";

        public static void main(String[] args)
            throws Exception
        {
            new OracleParserTest().run();
        }

        public void run()
            throws Exception
        {
            DOMParser dp = new DOMParser();
            XMLSchema xmlSchema = buildXMLSchema(new StringReader(xsd));
            dp.setXMLSchema(xmlSchema);
            dp.setValidationMode(XMLParser.SCHEMA_VALIDATION);
            dp.parse(new InputSource(new StringReader(xml)));
            Document doc = dp.getDocument();
            print(doc);
        }

        private XMLSchema buildXMLSchema(Reader xsdin)
            throws Exception
        {
            XSDBuilder xsdBuilder = new XSDBuilder();
            XMLSchema xmlSchema = (XMLSchema)xsdBuilder.build(xsdin, null);
            return xmlSchema;
        }

        private void print(Document doc)
            throws
            javax.xml.transform.TransformerConfigurationException,
            javax.xml.transform.TransformerException
        {
            Transformer xformer = TransformerFactory.newInstance().newTransformer();
            xformer.transform(new DOMSource(doc), new StreamResult(System.out));
            System.out.println();
        }
    }
    === end OracleParserTest.java ====
    Running that program
    java dfranklin.OracleParserTest
    produces this output
    <?xml version = '1.0'?>
    <amount xmlns="urn:dfranklin:test">1</amount>
    In this case, the parser has validated the document.
    I also tried this with version 10.1.0.0.0 beta, and got the same
    error.
    Darin Franklin

    Thanks for posting that. I tried that code and still got my problem. The big difference is that I'm reading a group of XML files, each of which uses a large number of schema files. There are over 50 schema files, organized so they can be included as needed. Each of the XML files has a noNamespaceSchemaLocation attribute specifying one of the ten top schema files, which includes many others.
    However, I did solve the problem. I added one line --
           saxb.setFeature( "http://apache.org/xml/features/validation/schema",
                    true);
            // next line is new
            saxb.setProperty( JAXPConstants.JAXP_SCHEMA_LANGUAGE,
                    JAXPConstants.W3C_XML_SCHEMA );
    after the setFeature to turn on schema validation.
    (It needs an import org.apache.xerces.jaxp.JAXPConstants; statement.)
    Note: With the feature set to true and the setProperty commented out, the EntityResolver is called at construction time, and for each schema file, but each element is flagged as needing to be declared.
    With the feature commented out, and the setProperty in place, the EntityResolver is not called, and there is no validation performed. (I added invalid content to one of the files and there were no errors listed.)
    With both the setFeature and the setProperty in place, however, I get the calls from the EntityResolver indicating that it is processing the schema files; for the good files I get no error, and for the bad file I get an error indicating a validation failure.
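
    Putting that together, a minimal standalone sketch (this assumes saxb is a JDOM SAXBuilder backed by Xerces, which the posts above never actually show, and the file name input.xml is a placeholder):
    import org.apache.xerces.jaxp.JAXPConstants;
    import org.jdom.Document;
    import org.jdom.input.SAXBuilder;

    public class SchemaValidatingBuild {
        public static void main(String[] args) throws Exception {
            SAXBuilder saxb = new SAXBuilder(true); // validating parser
            // Both of these are needed together, per the observations above:
            saxb.setFeature("http://apache.org/xml/features/validation/schema", true);
            saxb.setProperty(JAXPConstants.JAXP_SCHEMA_LANGUAGE,
                    JAXPConstants.W3C_XML_SCHEMA);
            Document doc = saxb.build("input.xml"); // placeholder instance document
            System.out.println("Parsed and schema-validated: " + doc.getRootElement().getName());
        }
    }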
    Now I can make the EntityResolver quiet and get what I wanted. Maybe there are other ways to do this, but I've got one that works. :-)
    Thanks to all who have helped.
    Dave Patterson

  • Commons validator validwhen rules

    I have a form, something like this:
    <html:radio property="type" value="1" /><html:text property="field1"/>
    <html:radio property="type" value="2" /><html:text property="field2"/>
    <html:radio property="type" value="3" /><html:text property="field3"/>
    I need to apply some validation rules to the fields field1, field2, and field3, but only if the
    corresponding html:radio is set. So if we choose type=1, then field1 must be validated with the required and
    mask rules, but field2 and field3 should stay untouched (without validation).
    How can I configure this in validation.xml? Is it possible? I think validwhen rules must
    be applied.
    My attempt is below (but it's not working as I expect):
    <field property="field1" depends="validwhen, integer, intRange">
    <arg0 key="test.field1"/>
    <arg1 name="intRange" key="${var:min}" resource="false"/>
    <arg2 name="intRange" key="${var:max}" resource="false"/>
    <var>
    <var-name>test</var-name>
    <var-value>((type!=1) or (*this* != null))</var-value>
    </var>
    <var>
    <var-name>min</var-name>
    <var-value>1</var-value>
    </var>
    <var>
    <var-name>max</var-name>
    <var-value>99</var-value>
    </var>
    </field>
    Validation is performed anyway, regardless of which type is selected (type==1 or
    type==2 or type==3)...

    Javascript validation? Yup, looks like your code is calling
    a Javascript method all right. But you didn't post the
    code that contains that method.
    Just to clarify a bit on this, as it is rather Struts specific.
    The validator framework lets you define your validations in the validator.xml file, and does the validation server side.
    Struts also has a tag that will generate a JavaScript function to do the validation client side as well: <html:javascript formName="loginForm" />
    If you view source on the generated page, you would get the generated function.
    The code posted looks valid as far as that goes.
    My guess: The i18n lookup is failing and returning null.
    Are loginForm.email and loginForm.password defined in the resource bundle props.ApplicationResources?
    To double check this try changing the struts-config messageResources tag:
    <message-resources parameter="props.ApplicationResources" null="false"/>
    so that instead of null (probably being converted to an empty string) you should see a message like ???loginForm.email??? is required.
    Hope this helps,
    evnafets

  • Bank Data Validations in SAP CRM

    What are all the bank data validations performed while adding bank data to a customer's payment details?

    It can be done, but you would have to validate the fields in the schema (which is limited to data type - string, integer, etc.) or create a user-defined function for all fields.

  • Validation at run time

    Hi,
      I am trying to use the keyword "order_time" in a for/next loop.
      The admin guide says on page 111 that "you can use the passed members of a dimension in a for/next loop, when the validation is performed at run time".
      My questions are:
    1) What is validation performed at run time?
    2) How does one perform validation at run time?
      Thanks for your help in advance
    cheers

    Are you handling it in valueChangeListener? Have you set the properties
    AutoSubmit to true
    Immediate to true
    for the drop down? On top of that as Timo has mentioned you need to set the partialTrigger property of the inputText to the dropdown.
    Regards,
    ~K

  • System Validation task and orc status X

    Hi All,
    We noticed some accounts where the resource status is Enabled but the ORC status is X. Even when we try to force them to C, the X always seems to come back somehow. The only thing we can find is that, for all of these, the System Validation task was at one time manually completed.
    Of course this causes some problems with code that is dependent on the ORC status. Is there a way to fix these resources?
    Thanx in advance.
    Fred

    Hi,
    The AD connector version is 9.1.1.0, and "System Validation" is the first default task for the AD User resource object. I am not sure what it performs. Once it completes, the process form will be generated by executing all related adapters, and then the AD account will be created for that user. However, the same user has been created in another environment with the same details (so no data-related issue).
    FYI, the "Auto Save Form" option for the process definition is checked. (I got this as a solution from some other thread.)
    Can you please suggest what "System Validation" performs, and why it is in pending status? Any useful links, please?
    Thanks.

  • Important Validations for MTL Material Transaction

    Hi All,
    I am creating an OAF page for material transaction -- for material issue and receipt.
    For this, I am inserting a record into the MTL transaction interface table and then running the transaction manager API in Oracle, which populates the mtl_material_transactions table.
    But after the insertion I found that the record is erroring out due to Oracle validation. The item may be lot or serial controlled.
    Please let me know the validations performed by this program, so that I can handle these before inserting into interface tables.
    Any help/document will be appreciated.
    You can mail me to [email protected] as well.
    Regards
    Riyas

    Riyas,
    You have to insert the serial number if it is a serial-controlled item, or it will always fail. Before inserting into the interface table you can validate whether the serial number exists or not in wsh_serial_numbers or mtl_serial_numbers.
    You can compare the values using the inventory_item_id to get the serial numbers.
    You can use the queries below to validate the records. They might be helpful.
    1st: select * from oe_order_headers_all where header_id=4838351
    2nd: select * from wsh_delivery_details where source_header_id=4839902
    3rd: select * from wsh_serial_numbers where delivery_detail_id=5088694
    INSERT INTO mtl_serial_numbers_interface
    (source_code, source_line_id,
    transaction_interface_id,
    last_update_date, last_updated_by,
    creation_date, created_by,
    last_update_login, fm_serial_number,
    to_serial_number, product_code
    --product_transaction_id
    )
    VALUES ('Miscellaneous issue', 7730351,
    71737725,
    --mtl_material_transactions_s.NEXTVAL, --transaction_interface_id
    SYSDATE, --LAST_UPDATE_DATE
    fnd_global.user_id, --LAST_UPDATED_BY
    SYSDATE, --CREATION_DATE
    fnd_global.user_id, --CREATED_BY
    fnd_global.login_id, --LAST_UPDATE_LOGIN
    '168-154-701', --FM_SERIAL_NUMBER
    '168-154-701', --TO_SERIAL_NUMBER
    'RCV' --PRODUCT_CODE
    --l_rcv_transactions_interface_s
    --v_txn_interface_id --product_transaction_id
    );
