Database support for Unicode

Hello, I am in the process of upgrading database installation scripts so they will support Unicode. I just want to clarify that by changing the character set to, say, AL32UTF8 and the national character set to UTF8, the database will then be able to support Unicode. Do I also need to change all the VARCHAR2 and CHAR data types to NVARCHAR2 and NCHAR? When changing the character sets, does the database then default to bytes instead of characters for multibyte character storage? Thank you.
-- David

You would not want a situation where some clients have a database character set of AL32UTF8 and are storing the data in CHAR/ VARCHAR2 columns and some clients have a non-Unicode database character set, a Unicode national character set, and store their Unicode data in NCHAR/ NVARCHAR2 columns (I'm assuming from the context that you are some sort of application vendor here so that different clients are trying to run the same application). That would massively increase the complexity of your application code and make testing & supporting the application substantially more difficult.
If at all possible, it is preferable to change the database character set to Unicode for existing databases. This may involve exporting & importing some or all of the data, or it may be possible to do the conversion online (there is a chapter in the Globalization Support Guide that covers character set migration and the various options you have).
Storing data in NCHAR/ NVARCHAR2 columns should generally be a last resort (unless you really know what you are doing and want to leverage different Unicode encodings). You are likely to cause yourself all sorts of headaches trying to support national character set data types.
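To answer the byte/character part of the question directly: changing the database character set does not by itself change length semantics. Column lengths still default to BYTE semantics unless NLS_LENGTH_SEMANTICS is changed or the length is declared explicitly. A minimal sketch (table and column names are made up for illustration):
CREATE TABLE demo_byte (name VARCHAR2(10 BYTE));  -- 10 bytes: may hold fewer than 10 characters in AL32UTF8
CREATE TABLE demo_char (name VARCHAR2(10 CHAR));  -- 10 characters, regardless of how many bytes each needs
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;    -- make CHAR semantics the default for subsequent DDL in this session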
Justin

Similar Messages

  • Call upon even better support for Unicode

    Hello
    Following some messages I have posted regarding problems I encountered while developing a non-English web application, I would like to call upon even better support for Unicode. Before I describe my call, I want to say that I consider Berkeley DBXML a superb product. Superb. It lets us develop very clean and maintainable applications. Maintainability is, in my view, the keyword in good software development practices.
    In this message I would like to remind you that the US-ASCII 7-bit set of characters only represents 0.4% of all characters in the world. It is also true to say that most of our software comes from the efforts of American developers, for which I am of course very grateful.
    But problems with non-US-ASCII characters are very, very time consuming to solve. To start with, our operating systems need to be configured specifically for Unicode, our servers too, our development tools too, our source code too and, finally, our data too. That's a lot of configuring, isn't it? Believe me, as a Flemish French-speaking, Danish-speaking developer who is currently developing a new application in Portuguese, I know what I am talking about.
    Have you ever tried to write a Java class called Ação.java that loads an XML instance called Ação.xml containing something like <?xml version="1.0" encoding="utf-8"?><ação variável="descrição"/>? It takes at least twice as long to get all this working right in a web application on a Linux server than it would take to write an Acao.java that loads Acao.xml containing <?xml version="1.0" encoding="us-ascii"?><acao variavel="descricao"/> (which is clearly something we do not want in Portugal).
    I have experienced a problem while using the dbxml shell to load documents that have utf-8 encoded names; see "difficulties retrieving documents with non ascii characters in name". The workaround is not to use the dbxml shell, with which I am of course not very happy.
    So, while trying not to be arrogant and while trying to express my very, very great appreciation for this great product, I call upon even better support for Unicode. After all, when the rest of us, who use another 65279 characters in our software, can use this great product without problems, will it not contribute to the success of Berkeley DBXML?
    Thank you
    Koen
    Edited by: koenheene on 29/Oct/2009 3:09

    Hello John and thank you for replying,
    You are completely correct that it is a shell problem. I investigated and found solutions for running dbxml in a Linux shell. On Windows, as one could expect, there is no solution so far.
    Here is an overview of my investigation, which I hope will be useful for other developers who also persist in writing code and XML in their own language:
    difficulties retrieving documents with non ascii characters in name
    I was wondering, though, whether it would not be possible to write the dbxml shell in such a way that it becomes independent of the encoding of the shell. Surely there must be a way, no? Rewrite dbxml in Java? Any candidates :-) ?
    Thanks again for the very good work,
    Koen

  • Database support for partition

    Hi all,
    what are the databases that support partitioning? Mine is MSSQL; will it not support partitioning? Is that the reason the partition option is disabled?
    regards
    kiran

    Hi Kiran,
    Yes, SAP does not support partitioning on MS-SQL as of now. Currently only ORACLE and INFORMIX support partitioning.
    Check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Bye
    Dinesh
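    For context, "partitioning" here presumably refers to database-level table partitioning of the large fact tables. On a database that supports it, the DDL looks roughly like this (an illustrative Oracle sketch with made-up names, not an SAP-generated statement):
    CREATE TABLE sales_fact (
        sale_date DATE,
        amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p2004 VALUES LESS THAN (TO_DATE('2005-01-01','YYYY-MM-DD')),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );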

  • UDS support for Unicode

    Hi,
    I have a problem with retrieving Japanese characters from the Oracle Database.
    Here is the scenario.
    Environment
    I have an Oracle 8i database that has been configured for the AL24UTFFSS character set in NLS_DATABASE_PARAMETERS. From UDS, some Japanese characters were inserted into tables of this database with "AMERICAN_AMERICA.JA16SJIS" as the NLS_LANG environment variable on the machine that runs the DBSession.
    Positive result from iSQL*Plus
    iSQL*Plus supports Unicode, and the results are as expected. I opened a session with NLS_LANG for the session set to "AMERICAN_AMERICA.AL24UTFFSS". I was able to retrieve the Japanese characters by a SELECT statement with no problem.
    Negative results from UDS 5.0.15
    Understanding that starting with release 5.0 SP1, UDS offers full support of the Unicode codeset, I was expecting similar results from the UDS client application. The DBSession was established with NLS_LANG set to "AMERICAN_AMERICA.AL24UTFFSS" (same as for iSQL*Plus). When I executed a small test application on a client PC with FORTE_LOCALE set to "ja_jp.UTF8", I only saw junk characters on the screen. I even tried setting "ja_jp.sjs" as FORTE_LOCALE. I get junk characters in all cases unless I set "AMERICAN_AMERICA.JA16SJIS" as the NLS_LANG environment variable on the machine that runs the DBSession, which I don't want, as I want Korean, Chinese and additional character sets also handled properly, leveraging the fact that UDS (formerly Forte) now supports Unicode.
    We have multiple advantages if we can set a single NLS_LANG to support multiple languages. Apparently there is no problem on the Oracle side, as I could retrieve the characters from iSQL*Plus.
    I appreciate your help in this.
    Thank you
    GS
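    As a first sanity check, it can help to confirm what the database itself reports (a standard query against the NLS settings; nothing UDS-specific is assumed here):
    SELECT parameter, value
    FROM nls_database_parameters
    WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET', 'NLS_LANGUAGE');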

    Hi,
    In the above description I mentioned that we are using UDS 5.0.15.
    Forte says that starting with release 5.0 SP1, UDS offers full support of the Unicode codeset:
    http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=finfodoc%2F67723
    Does that mean I have a version prior to the one that supports Unicode?
    Regards
    GS

  • Disappointing lack of support for Unicode

    I was very disappointed to find that Pages 2 still cannot properly support Unicode. TextEdit does a vastly better job of supporting Unicode. I can paste Unicode text into Pages 2 that I have already edited in TextEdit, but the full range of Unicode cannot be properly typed or edited in Pages 2. Some Unicode is okay, but not all of it. It absolutely can't be done! When are they going to get it right?

    Does Pages 2 do the double-overstrike formatting that we discussed regarding Navajo some time back? Pages 1, unlike TextEdit, wouldn't let you click an ogonek or acute accent onto a vowel with the other. (I was looking to make Navajo a- or o-with-acute-accent-and-ogonek.)
    I think so. I believe even Pages 1 was able to do that right after some OS update, but I can't remember now. Anyway, I just tested Pages 2 (on 10.3.9) and made the a and o with the two accents, using Option-Shift-M for combining ogonek and Option-Shift-E for combining acute (US Extended layout), Lucida Grande font.

  • Uninstall Database manually for Unicode Conversion

    Hi Guys,
    While performing a Unicode conversion, during the export, uninstall and import of the database, I would like to improve the time for database creation, since it is 5 TB.
    My goal is to delete the non-Unicode SAP/DB instance but keep the tablespace and datafile layout, in order not to recreate them when re-installing the Unicode SAP/DB instance.
    According to SAP Note 1260050 - UNIX:Deleting Oracle DB Instance Based on NW7.1 and Higher, the steps are the following.
    drop user SAP<SCHEMA_ID> cascade;
    drop tablespace <TABLESPACE_NAME> including contents;
    Are these steps valid to delete an ABAP database while keeping the datafile structures for when the Unicode SAP instance is installed?
    Thanks!

    A drop with "including contents and datafiles" would delete the datafiles... thus SAPinst would have to create them again... and that can take a while for a 5 TB DB.
    SAPinst is supposed to recreate the whole DB, so in that case you should delete it entirely:
    startup mount exclusive restrict
    drop database;
    One solution would be to keep the datafiles:
    => just run the first SQL command: drop user SAP<SCHEMA_ID> cascade;
    and play with the sapinst option SAPINST_SET_STEPSTATE to skip the DB creation step
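    For reference, Oracle's DROP TABLESPACE syntax distinguishes whether the datafiles are removed from disk (a sketch; PSAPSR3 is a placeholder tablespace name):
    DROP TABLESPACE psapsr3 INCLUDING CONTENTS;                -- drops the segments but leaves the datafiles on disk
    DROP TABLESPACE psapsr3 INCLUDING CONTENTS AND DATAFILES;  -- also deletes the underlying datafiles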

  • Support for Unicode Strings

    Hi
    I was wondering if Berkeley DB supports storing and retrieving Unicode strings (I guess it should, but I just wanted to confirm) and, if yes, what encoding it uses.
    Thanks,
    KarthikR

    Again, I'm not an authority, but I believe so, yes. From what I've read, BDB only knows about "data," which basically means a sequence of bytes. Thus, in C you could define a struct that contains your specific schema for keys and values, but you'd just write the raw data behind the struct to BDB, which doesn't care or know about your particular format.
    So you'd probably have to convert your unicode string to a byte sequence in some way or another. UTF-8 is the first option that comes to mind, but maybe UTF-16 or UTF-32 (if you really don't care about space) would be simpler to implement.
    Hope this helps,
    Daniel

  • System call support for unicodes

    Hi Solaris guru,
    One of my applications (C, Solaris 2.7) is required to work in multiple languages. This application makes use of system and C library calls. Is it possible for a Japanese user to create file names in Japanese? If so, how will I be able to use these names (let's assume Unicode) with standard system calls and library routines that take file names as char *?
    I have noticed that Solaris provides wchar_t and (wchar.h) wide-string library calls (e.g. wprintf, wscanf, wcscmp, etc.). Are there any similar w-versions of the system calls?
    I greatly appreciate your help.
    Cheers
    Ramesh

    I don't know of a Solaris system call to copy files. I do know there is no such C or C++ standard library function.
    It's easy enough to write a file copy routine, however.
    C++ 4.2 is obsolete and no longer supported. It predates the 1998 C++ standard by a few years.
    But using old-style C++, here is a copy-file routine:
    #include <fstream.h>
    int copyfiles(const char* i, const char* o)
    {
        ifstream in(i, ios::in|ios::binary);
        ofstream out(o, ios::out|ios::binary);
        out << in.rdbuf();      // copy the entire input stream to the output stream
        return !(!in || !out);  // 1 only if both streams are still in a good state
    }
    You pass it the names of the input and output files. It opens the files in binary mode, copies input to output if possible, and reports status by returning 1 for success and 0 for failure.
    Using standard C++, the routine looks like this:
    #include <fstream>
    bool copyfiles(const char* i, const char* o)
    {
        std::ifstream in(i, std::ios::binary);
        std::ofstream out(o, std::ios::binary);
        out << in.rdbuf();
        return !(!in || !out);  // true only if both streams are still in a good state
    }

  • Unicode Support for Brio 8

    Does Brio 8 (or any of the subsequent Hyperion versions) provide support for Unicode characters? If not, how do we tackle reporting from databases containing non-English characters like German, Japanese, etc.? Thanks. Cheers.

    It's not what you describe, but here is more detail on what I'm doing.
    This is an example of the value string I'm storing. It's a simple xml object, converted to a unicode string, encoded in utf-8:
    u'<d cdt="1267569920" eml="[email protected]" nm="\u3059\u3053\u3099\u304f\u597d\u304d\u306a\u4e16\u754c" pwd="2689367b205c16ce32ed4200942b8b8b1e262dfc70d9bc9fbc77c49699a4f1df" sx="M" tx="000000000" zp="07030" />'
    The nm attribute is Japanese text: すごく好きな世界
    So when I add a secondary index on nm, my callback function is an xml parser which returns the value of a given attribute:
    Generically it's this:
    def callbackfn(attribute):
        """Define how the secondary index retrieves the desired attribute from the data value"""
        return lambda primary_key, primary_data: xml_utils.parse_item_attribute(primary_data, attribute)
    And so for this specific attribute ("nm"), my callback function is:
    callbackfn('nm')
    As I said in my original post, if I add this to the db, I get this type error:
    TypeError: DB associate callback should return DB_DONOTINDEX/string/list of strings.
    But when I do not place a secondary index on "nm", the type error does not occur.
    So that's consistent with what Sandra wrote in the other post, i.e.:
    "Berkeley DB never operates on the value part of a record. Values are simply payload, to be stored with keys and reliably delivered back to the application on demand."
    My guess is that I need to add an additional utf-8 encoding or decoding step to the callback function (so that it returns a plain byte string rather than a unicode object), or else define a custom comparison function so the callback will know what to do with the nm attribute value, but I'm not sure what exactly.

  • Oracle Database Migration Assistant for Unicode (DMU) is now available!

    Oracle Database Migration Assistant for Unicode (DMU) is a next-generation GUI migration tool to help you migrate your databases to the Unicode character set. It is free for customers with database support contracts. The DMU is built on the same GUI platform as SQL Developer and JDeveloper. It uses dedicated RDBMS functionality to scan and convert a database to AL32UTF8 (or to the deprecated UTF8, if needed for some reason). For existing AL32UTF8 and UTF8 databases, it provides a validation mode to check whether the data is really encoded in UTF-8. Learn more about the tool on its OTN pages.
    There is a new Database Migration Assistant for Unicode forum. We encourage you to post all questions related to the tool, and to the database character set migration process in general, to that forum.
    Thanks,
    The DMU Development Team

    Hi there!
    7.6.03? Why do you use outdated software for your migration?
    At least use 7.6.06 or 7.7.07!
    About the performance topic - well, you have to figure out what the database is waiting for.
    Activate time measurement, activate the DBAnalyzer with a short snapshot interval (say 120 or 60 seconds) and check what warnings you get.
    Also, you should use the parameter check to make sure that you don't run into any setup-induced bottlenecks.
    Apart from these very basic prerequisites for the analysis of this issue, you may want to check
    SAP Note 1464560 FAQ: R3load on MaxDB
    Maybe you can use some of the performance features available in the current R3load versions.
    regards,
    Lars
    p.s.
    open a support message if you're not able to do the performance analysis yourself.

  • Selective XML Index feature is not supported for the current database version, SQL Server Extended Events, optimizing reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort in optimizing my stored procedure (below), I wanted to create a selective XML index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However,
    EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s)!
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON --Doesn't have impact (maybe this won't on the SQL Server Extended Events sessions being created on the server(s), DBs)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName,
    Time_Stamp_EE,
    ObjectName,
    ObjectType,
    DbName,
    ddl_Phase,
    ClientAppName,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the buffered XML so as not to look up previously loaded records in the stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase = 'Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - Maybe SQL Server internally creates statistics for #Temp tables; we do not want the creation of STATISTICS statements to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it: the Extended Event captures this creation statement SQL as well; we don't need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu: server name needs to be added to the xml ( events in session )
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName] = T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] = T.SqlText -- Ryelugu 03/05/2015: this is a comparison metric as well, but need to decide on
    -- the performance factor here; will take advice from Lance on whether comparison on varchar(max) is a viable idea
    ) **/
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO identify if this @XML can be updated into a physical system table such that previously loaded events are left untouched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version :
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    The compatibility level is set to 110.
    One of the limitations states: XML columns with a depth of more than 128 nested nodes are not supported.
    How do I verify this? Thanks.
    Rajkumar Yelugu
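    Two hedged suggestions rather than a confirmed diagnosis: selective XML indexes require SQL Server 2012 SP1 or later, which this build satisfies (11.0.5058 is SP2); but sys.sp_db_selective_xml_index enables the feature per database, and #XML_Hold is a temporary table living in tempdb, so enabling the feature in the user database may not cover it:
    -- confirm the build and service pack level
    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS product_level;
    -- the enablement check is per database; run it where the temp table actually lives
    USE tempdb;
    GO
    EXECUTE sys.sp_db_selective_xml_index;  -- returns 1 if enabled in this database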

  • Lack of support for FIM database mirroring

    The official line is that database mirroring is not a supported architecture for FIM deployment. I am not proposing using this; however, I'd like to understand 1) what the issues really would be with a mirrored database deployment, and 2) will support ever be added for this, and will it come in the form of SQL AlwaysOn?
    Really appreciate help and input.
    Rgds,
    David

    Database mirroring has two modes regarding transactions: synchronous and asynchronous.
    Synchronous requires that the data be committed in both places before releasing the transaction. This has a big performance impact on the FIM Service database and, to a lesser extent, on the FIM Sync database.
    Asynchronous means that data isn't committed in both places at the same time; the mirror can fall behind, and then in a failover you could be behind. In order to have automatic failover with mirroring, you have to be able to modify the connection string to include the failover partner, or the client has to support getting that data at first logon. While you can modify the FIM database connection strings, it is not clear whether FIM is using database clients that support mirroring. I believe it is. Even with asynchronous mode you still have a performance hit for copying every transaction to the mirror.
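    For what it's worth, the classic ADO.NET mirroring connection string pattern looks like this (server and database names here are hypothetical placeholders; whether the FIM clients honor the Failover Partner keyword is exactly the open question above):
    Data Source=sqlprimary;Failover Partner=sqlmirror;Initial Catalog=FIMService;Integrated Security=SSPI;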
    SQL Always On combines the best of mirroring and clustering to allow you to group databases together into an availability set, and then automatically fail over the whole group to another server. It should be noted that Always On uses a similar underlying mechanism as mirroring to copy the data -- this is evident when you read that Always On also has asynchronous and synchronous modes. You will most likely run into the same performance quandary.
    Will the product group add support for it? My guess is that it depends on whether they find a good way to address the performance issues.
    David Lundell, Get your copy of FIM Best Practices Volume 1 http://blog.ilmbestpractices.com/2010/08/book-is-here-fim-best-practices-volume.html

  • Not able to create database even with a subscription. (The operation is not supported for your subscription offer type)

    Hi,
    I am trying to create a SQL Server database, but am not able to. I get this message: The operation is not supported for your subscription offer type.
    I have two Azure accounts, and this is only happening in one of them.
    I have created a subscription, but I can see that I have 1250 NOK in credit that is expiring in 29 days.
    Regards
    Christian
    ChristianLLoyd

    Hi Christian,
    The error you saw should only occur for a subscription used with a free trial offer type. Please use the below link to open a support ticket.
    http://azure.microsoft.com/en-us/support/options/
    You can check the following links for similar issues.
    The operation is not supported for your subscription offer type
    Could not submit the request to create database DBNAME. The operation is not supported for your subscription offer type
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Pagination support for non-Oracle databases?

    Hi,
    I just read this thread (Pagination Support) on pagination support. Is there any way to get pagination with non-Oracle databases? We are using an IBM iSeries / AS/400 DB2 database right now, and we're planning to use some local lightweight database in the near future as well (probably Cloudscape/Derby or "IBM Everyplace database").
    We currently use code like this:
    String sql = "SELECT art FROM Artikel art "
                /* dynamically generated where statement is added here */
                + " ORDER BY art.artikelNummer";
    Query q = em.createQuery(sql);
    q.setFirstResult(firstResult);
    q.setMaxResults(maxResults);
    If I look in the TopLink logs, I see queries like this:
    SELECT ARTNR, ARALT, ARAFJ, ARXII, ARAVJ, ARXIV, ARANJ, AHGCD, ARNVJ, ARCRJ, ARARK, ARFKJ, ARTNK, ARGP1, ASGCD, ARGP2, ARPR1, ARGP3, ARPR2, AREX1, ARPR3, AREX2, ARPR4, AREX3, ARASA, ARINA, ASSCD, ARIA1, ARBAN, ARIN1, ARBAV, ARIA2, ARBAK, ARIN2, ARCES, ARIA3, ARCDT, ARIN3, ARCRE, ARIA4, ARCWK, ARIN4, ARHBH, ARIA5, ARDFA, ARIN5, ARDFG, ARIA6, ARDOS, ARIN6, AREPW, ARINN, ARFOD, ARIAS, ARFOE, ARINS, ARFOF, ARNAB, ARFOI, ARNIB, ARFON, ARNIA, ARFOS, ARNN1, ARFTA, ARNA2, ARVIV, ARNO2, ARGAP, ARNN3, ARGPT, ARNA4, ARGPD, ARNO4, ARGPA, ARNN5, ARGPO, ARNA6, ARHIS, ARNN6, ARISP, ARNIO, ARKHM, ARNNS, MAGCD, AROVJ, MTGCD, ARPL1, ARMXM, ARPL2, MRKCD, ARPL3, ARMVR, ARVKJ, ARMIM, ARV12, ARMDT, ARVVJ, ARMTE, AR#VR, ARMTU, ARZLS, ARMTM, ARIAT, ARMWK, ARAVS, MPCCD, ARNVS, ARBTW, ARFJS, ARXI2, ARG2S, ARXI3, ARE1S, ARXI4, ARE3S, ARXI6, ARIB1, ARXI1, ARIB2, ARXI5, ARIB3, AROPI, ARIB4, ARPRV, ARIB5, SZGCD, ARIB6, ARSPC, ARINO, ARSMF, ARIOS, VEAAN, ARNIS, ARSYN, ARNO1, ARVR1, ARNA3, ARV1S, ARNN4, ARVR2, ARNO5, ARV2S, ARNIN, ARVR3, ARNOS, ARV3S, ARP1S, ARTFA, ARP3S, ARTFG, ARS12, ARUVC, ARZLD, ARUCW, ARAJS, ARBKV, ARCJS, ARVVI, ARG3S, ARVVP, ARINB, VPOCD, ARIO2, VPECD, ARIO4, ARVIH, ARIO6, ARVHG, ARNBS, ARVRW, ARNN2, ARVPR, ARNA5, ARVVR, ARNAS, ARVVS, ARP2S, ARVV1, ARSVV, ARZK1, ARNJS, ARNA1, ARNO3, ARIO1, ARNO6, ARIO5, AROJS, ARE2S, ARVJS, ARIBS, ARIAD, ARIO3, ARG1S FROM ART WHERE ((((ARUVC = 'N') AND (ARHIS = 'N')) AND (ASGCD = 7)) AND (AHGCD = 15)) ORDER BY ARTNR ASC
    (Yeah, I know we have too many columns in the table...)
    So, no pagination in the query. As you can see, we have a mechanism in place to dynamically generate a where clause. This is because the user can set filters. The problem is, if our user sets a filter that causes the result set to be significantly smaller, the performance is way better than when he sets no filter at all. We suppose this is because the whole result set is sent to TopLink, regardless of the values of firstResult and maxResults.
    We are using TopLink Essentials 2.1-10, by the way.
    Message was edited by:
    Bart Kummel
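    For databases that do support it, the usual way to push pagination into the SQL itself is ROW_NUMBER() (a generic DB2 sketch against the ART table from the log above; as the follow-up below explains, on the AS/400 this needs OS/400 V5R4 or later):
    SELECT *
    FROM (
        SELECT art.*, ROW_NUMBER() OVER (ORDER BY ARTNR) AS rn
        FROM ART art
    ) AS numbered
    WHERE rn BETWEEN 11 AND 20;  -- second page of 10 rows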

    Hi all,
    I'm trying to subclass DatabasePlatform to add pagination support for the AS/400 DB2 database of my customer. To be fair, it is not going very well so far.
    The first problem is that the query Chris found by googling (Re: Pagination support for non-Oracle databases?) does not work for the AS/400 version of DB2. In fact, although it is called "DB2", the database on the AS/400 system is a whole other database than the "normal" DB2 that runs on Windows and *nix. The AS/400 DB2 simply does not have a "ROW_NEXT" function.
    Another option would be to use the row_number() over() method. But, as can be read here, this function is only available from version V5R4 of OS/400. And guess what? We're stuck on V5R3 at this client. (We cannot upgrade, because there's an application in use that's written in Delphi, and IBM dropped the Delphi binding from V5R4...)
    So I pretty much ran out of options. On the mailing list I linked to above, someone mentions the option to make a sort of stored procedure that generates a row count number. An example of how to do this can be found here. I implemented it, and ended up with this code:
    package com.myclientsname.persistence;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.eclipse.persistence.expressions.ExpressionBuilder;
    import org.eclipse.persistence.internal.databaseaccess.DatabaseCall;
    import org.eclipse.persistence.internal.expressions.ExpressionSQLPrinter;
    import org.eclipse.persistence.internal.expressions.SQLSelectStatement;
    import org.eclipse.persistence.internal.sessions.AbstractSession;
    import org.eclipse.persistence.logging.SessionLog;
    import org.eclipse.persistence.platform.database.DatabasePlatform;
    import org.eclipse.persistence.sessions.SessionProfiler;
    public class AS400Platform extends DatabasePlatform {
        private static final long serialVersionUID = 0L;

        public AS400Platform() {
            super();
            super.setShouldBindAllParameters(false);
        }

        public void printSQLSelectStatement(DatabaseCall call, ExpressionSQLPrinter printer, SQLSelectStatement statement) {
            int max = 0;
            int firstRow = 0;
            if (statement.getQuery() != null) {
                max = statement.getQuery().getMaxRows();
                firstRow = statement.getQuery().getFirstResult();
            }
            if (!(max > 0) && !(firstRow > 0)) {
                super.printSQLSelectStatement(call, printer, statement);
                return;
            } else {
                statement.setUseUniqueFieldAliases(true);
                ExpressionBuilder builder = new ExpressionBuilder();
                statement.addField(builder.getField("COUNTER() AS CNTR"));
                printer.printString("SELECT * FROM (");
                call.setFields(statement.printSQL(printer));
                printer.printString(") AS R WHERE R.CNTR >= ");
                printer.printParameter(DatabaseCall.FIRSTRESULT_FIELD);
                if (max > 0) {
                    // Use of binding parameters is not allowed here, so use
                    // String concatenation instead...
                    printer.printString(" FETCH FIRST " + max + " ROWS ONLY");
                }
                call.setIgnoreFirstRowMaxResultsSettings(true);
            }
        }

        public boolean wasFailureCommunicationBased(SQLException exception, Connection connection, AbstractSession sessionForProfile) {
            if (connection == null || this.pingSQL == null) {
                // Without a connection we are unable to determine what caused the error, so return false.
                // The only case where connection will be null should be External Connection Pooling, so
                // returning false is ok as there is no connection management requirement.
                // If there is no ping sql then we can not perform the ping.
                return false;
            }
            PreparedStatement statement = null;
            try {
                sessionForProfile.startOperationProfile(SessionProfiler.ConnectionPing);
                if (sessionForProfile.shouldLog(SessionLog.FINE, SessionLog.SQL)) { // Avoid printing if no logging required.
                    sessionForProfile.log(SessionLog.FINE, SessionLog.SQL, getPingSQL(), (Object[]) null, null, false);
                }
                statement = connection.prepareStatement(getPingSQL());
                ResultSet result = statement.executeQuery();
                result.close();
                statement.close();
            } catch (SQLException ex) {
                try {
                    // Had to add this check because of NullPointerExceptions
                    // (maybe a bug?)
                    if (statement != null) {
                        // try to close statement again in case the query or result.close() caused an exception.
                        statement.close();
                    }
                } catch (SQLException exception2) {
                    // ignore
                }
                return true;
            } finally {
                sessionForProfile.endOperationProfile(SessionProfiler.ConnectionPing);
            }
            return false;
        }
    }
    (As you can see, I had to override the wasFailureCommunicationBased() method as well, due to some unexpected NPEs. (A bug, perhaps?))
    This code does work. However, the performance is not very good. The first page comes relatively fast, but as you browse further in the table, each page comes more slowly. I assume this is because the counter() method has to be evaluated for each row in the table.
    I have to make the performance better and constant. Does anyone have an idea how to optimize this further?
    Best regards,
    Bart Kummel

  • Is it possible to add support for new database type in Data Modeler?

    Hi,
    I see that Data Modeler v.4 supports different versions of Oracle, DB2 and MS SQL. Is it possible to add support for a new database family, PostgreSQL for example? I hoped that the RDBMS Site editor could do it, but so far I don't see any possibility to add XML files with metadata for a new RDBMS.
    I did this previously for PowerDesigner, where it is possible to add and modify definitions for new relational databases.
    Thank you,
    Sergei

    There is a discussion option as an out-of-the-box feature. Check this: BI launch pad 4.0: Participate in a discussion about a document
