Can I create a non-Unicode database manually via the CREATE DATABASE statement?

Hi
Since a Unicode encoding uses more bytes per character than a 2-byte encoding (for instance, ZHS16GBK), and XE has a total limit of 4 GB, can I create a non-Unicode database (for instance, ZHS16GBK) manually via a CREATE DATABASE statement? Or can I only use Unicode?
Thanks.
Samuel

Could you upload or paste this script? Well, the script is (obviously) a shell script, useless on Windows unless you have an emulator (Cygwin, MKS Toolkit or similar). The following is the SQL part:
sqlplus /nolog <<END
spool xe_createdb.log
connect sys/oracle as sysdba
startup nomount pfile=$filedir/init$ORACLE_SID.ora
whenever sqlerror exit;
create database
  maxinstances 1
  maxloghistory 2
  maxlogfiles 16
  maxlogmembers 2
  maxdatafiles 30
datafile '$filedir/system.dbf'
  size 200M reuse autoextend on next 10M maxsize 600M
  extent management local
sysaux datafile '$filedir/sysaux.dbf'
  size 10M reuse autoextend on next  10M
default temporary tablespace temp tempfile '$filedir/temp.dbf'
  size 20M reuse autoextend on next  10M maxsize 500M
undo tablespace undo datafile '$filedir/undots1.dbf'
  size 50M reuse autoextend on next  5M maxsize 500M
--character set al32utf8
character set $dbcharset
national character set al16utf16
set time_zone='00:00'
controlfile reuse
logfile '$filedir/log1.dbf' size 50m reuse
       , '$filedir/log2.dbf' size 50m reuse
       , '$filedir/log3.dbf' size 50m reuse
user system identified by oracle
user sys identified by oracle;
-- create the tablespace for users data
create tablespace users
  datafile '$filedir/users.dbf'
  size 100M reuse autoextend on next 10M maxsize 5G
  extent management local;
-- install data dictionary views:
@?/rdbms/admin/catalog.sql
-- run catblock
@?/rdbms/admin/catblock
-- run catproc
@?/rdbms/admin/catproc
-- run catoctk
@?/rdbms/admin/catoctk
-- run pupbld
connect system/oracle
@?/sqlplus/admin/pupbld
@?/sqlplus/admin/help/hlpbld.sql helpus.sql;
-- run plustrace
connect sys/oracle as sysdba
@?/sqlplus/admin/plustrce
-- Install context
@?/ctx/admin/catctx oracle SYSAUX TEMP NOLOCK;
connect CTXSYS/oracle
@?/ctx/admin/defaults/dr0defin.sql "AMERICAN"
-- Install XDB
connect sys/oracle as sysdba
@?/rdbms/admin/catqm.sql oracle SYSAUX TEMP;
connect SYS/oracle as SYSDBA
@?/rdbms/admin/catxdbj.sql;
connect SYS/oracle as SYSDBA
@?/rdbms/admin/catxdbdbca.sql 0 8080;
connect SYS/oracle as SYSDBA
begin dbms_xdb.setListenerLocalAccess( l_access => TRUE ); end;
/
-- Install Spatial Locator
connect sys/oracle as sysdba
create user MDSYS identified by MDSYS account lock;
@?/md/admin/catmdloc.sql
create spfile='$filedir/spfile.ora' from pfile;
alter user anonymous account unlock;
disconnect
-- recompile invalid objects
connect sys/oracle as sysdba
begin dbms_workload_repository.modify_snapshot_settings(interval => 0); end;
/
begin dbms_scheduler.disable('AUTO_SPACE_ADVISOR_JOB', true); end;
/
spool off
exit
END
Words prefixed with $ are OS environment variables; you have to substitute them with your own values.
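For the original question, a minimal sketch (not part of the script above; paths, SID and password are placeholders) would be to export dbcharset=ZHS16GBK together with filedir and ORACLE_SID before running the script. Afterwards you can confirm from SQL*Plus which character set the database was created with, and see the per-character saving that motivated the question:

connect system/oracle
-- character sets the database was actually created with
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
-- rough per-character cost of one Chinese character in each character set
-- (assumes the client NLS_LANG lets the literal arrive intact):
-- ZHS16GBK stores it in 2 bytes, AL32UTF8 in 3 bytes
select lengthb(convert('中', 'ZHS16GBK')) as gbk_bytes,
       lengthb(convert('中', 'AL32UTF8')) as utf8_bytes
  from dual;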

Similar Messages

  • How to create a non-unicode transport on a unicode system?

    Folks,
    Occasionally, I have to create transports for some of the functions from our Unicode-based SAP system. The created transports are Unicode by default and thus cannot be installed on a non-Unicode SAP client. Is there a way to create a non-Unicode transport from a Unicode SAP system?
    Note that the transport contains only the code (function modules). There is no data.
    Thank you in advance for your help.
    Regards,
    Peter

    Hi Peter,
    Note 638357 - Transport between Unicode systems and non-Unicode systems
    Regards
    Ashok Dalai

  • Unable to open a Unix file in a Unicode system when it was created in a non-Unicode system

    Unable to open a Unix file in a Unicode system when it was created in a non-Unicode system.
    We have two SAP systems, both ECC 6.0, but system 1 is non-Unicode and system 2 is Unicode.
    There is a common Unix directory/folder for both systems.
    Our requirement is to create a file in the common Unix folder and write data to it from system 1.
    In system 2, the same file is then opened in append mode to write more data.
    The file in system 1 is created with the statement below.
    OPEN DATASET g_unix_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    Now I have to append data from system 2 to the same file.
    I have tried the statements below in system 2 to open the file, but sy-subrc comes back as 8.
    1. OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING UTF-8.
    2. OPEN DATASET g_unix_file FOR APPENDING IN LEGACY TEXT MODE CODE PAGE cdp IGNORING CONVERSION ERRORS.
    3. OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
    4. OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING NON-UNICODE.
    I have tried all the possibilities given in the F1 help for OPEN DATASET, but the file still fails to open in append as well as output mode. However, the file opens successfully in input (read) mode.
    Please advise how to resolve this issue.
    Thanks.

    The message captured is 'Permission denied'. The program is triggered under the system user ID PPID.
    How can I check the security access of that user ID?

  • How can I create a non-EFI USB installation media

    Hi fellows,
    I usually create a non-EFI installation CD via this procedure:
    https://wiki.archlinux.org/index.php/Un … ical_Media
    But now my optical drive is broken, so I need to create a USB non-EFI installation medium.
    How can I do that?

    Hello nTia89,
    I'm in a similar situation to yours. I have an ancient MacBook Pro (1,1) with a broken optical drive, and any attempts to get EFI boot working with Linux have resulted in no graphics.
    My solution has been to first install Mac OS X via USB, then install Refind. After that, Refind will recognise Arch USB media during boot and you can install.
    I'm only booting Linux, so my solution has been to use MBR on the hard drive. That forces the machine to use the BIOS compatibility mode.
    Don't hesitate to ask if you have any further questions!

  • Problem publishing database contents from non-unicode to unicode system

    Hello everyone!
    We just set up a new SAP WAS based on NetWeaver 2004 as a Unicode system. Our problem is that our content management system runs on a non-Unicode system, and we publish the contents via RFC to the Unicode WAS to display them online. The contents are stored in our own database tables.
    The problem is that many texts pasted from Microsoft Word contain special characters like bullets, long dashes or low-9 quotation marks, which are not displayed correctly in the Unicode system / on the website. We have already found out that it has something to do with the code page. The SAP notes say we should use 1160 instead of 1100 and that transaction SPUMG would be helpful, but we are not able to select any tables there.
    So now we do not know what to do exactly. Do we have to change something in our non-Unicode system, or do we have to do the conversion in our Unicode system? And what happens if content containing special Microsoft Word characters is published after the SPUMG conversion? Do we have to do this frequently?
    We would be glad if anyone could help.
    Thanks a lot!

    Hi Martin,
    thanks for your quick answer.
    You got me right. We have a local non-Unicode SAP HCM NetWeaver 2004 system running a self-developed web-based content management system / wiki. The texts entered in the BSP application are stored in a string field in our database table. Currently we publish the contents to a WAS 6.20 non-Unicode system with the same database tables to provide the content via BSP to the public. Everything works fine, including the special characters.
    Now we want to replace the WAS 6.20 non-Unicode system with a new WAS 7.0/2004 Unicode system. But when publishing the contents via the same RFC function module to the new system, the special characters seem to be damaged. We are not able to replace them with ABAP commands, and when they are displayed on the website we only see "boxes".
    If I understand you correctly, we have to run SPUMG on our NW 2004 non-Unicode productive HCM system, right? But isn't there a danger of damaging existing contents?
    Best regards,
    Stefan

  • How to Copy a Non-unicode ECC 6.0 SR3 to WIN2008 and MSSQL2008

    Good morning,
    how can I make a homogeneous system copy of my non-Unicode ECC 6.0 (NW 7.0 SR3) system to a new machine running Windows Server 2008 and MSSQL 2008?
    According to OSS note 1152240, to install SR3 on SQL 2008 I have to generate and use an "SR3 modified installation master DVD".
    My problem is that the SR3 installation disk doesn't allow non-Unicode installations.
    Can I use the SR2 installation disk and modify it the same way I would the SR3 installation disk?
    Note 1054740 says "SAP products prior to NetWeaver 7.0 SR3 are not supported on Windows Server 2008", but I would not run the SR2 system, just perform the installation and copy my "SR3 database" back. This should not be a problem, as the difference between SR2 and SR3 is at support package level. Is this correct?
    Can the SR2 installation disk be changed in the same way as the SR3 disk, as described in note 1152240, and be used with SQL 2008?
    Is there any way to move a non-Unicode ECC 6.0 system to WIN2008 + SQL 2008?
    Thank you
    Best regards

    Hi,
    If you are copying, you can create a non-Unicode system. The installation of NEW systems is only permitted for Unicode, but if you are using SAPinst to copy your system, it will ask whether you are copying a Unicode system or not, and it will leave the target non-Unicode. So you can copy your system with no problem. I'm pretty sure about this because we are running some copies (8 systems), we have non-Unicode systems, and we are using the SR3 master DVD.
    Just make the changes to the master DVD so you can use it on Win2008.
    Have a nice day.

  • Report Script output in UTF-8 code with Non-Unicode Application

    Essbase Nation,
    Report Script output (.txt) files are being encoded as UTF-8 when the application is set to non-Unicode. This encoding puts a signature character (BOM) in the first line of the text file, which in turn shows up when we import the file into Microsoft Access. Does anyone know how to change the encoding of the output file, or how to remove the UTF-8 signature character?
    Any advice is greatly appreciated.
    Thank you.
    Concerned Admin

    You may be able to find a text editor that can do the conversion. Alternatively, I have converted from one encoding to another programmatically using Java as well.
    Tim Tow
    Applied OLAP, Inc

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task with SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine on my local machine when I use the Data Conversion task to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Server Agent job, it gives me the Unicode and non-Unicode string data types error for that field. I have checked the registry settings and they are the same on my local machine and on the server. I tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What is Unicode and non Unicode data formats
    Unicode : 
    A Unicode character takes more bytes to store in the database. As we all know, many global businesses want to grow worldwide, and they widen their business by providing services to customers worldwide and by supporting different languages such as Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to do business and attract more customers, and that makes life easier for both parties.
    To store customer data, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors have had to revise their strategies and come up with new mechanisms to support or store these international characters. Big vendors like Oracle, Microsoft, IBM and others started providing international character support so that the data can be stored and retrieved accordingly, to avoid any hiccups while doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether non-Unicode data is stored by using double-byte character sets. All non-East Asian languages and the Thai language store non-Unicode characters in single bytes. Therefore, storing these languages as Unicode uses two times the space that is used by specifying a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS). Therefore, for these languages, there is almost no difference in storage between non-Unicode and Unicode.
    Encoding Formats:
    Some of the common encoding formats for Unicode are UCS-2, UTF-8, UTF-16 and UTF-32, which have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses the UCS-2 encoding format to store Unicode data. Under this mechanism, all Unicode characters are stored by using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
    provider all internally represent Unicode data as UCS-2.
    There are several options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data encoded as UTF-8.
    For example, if your business is running a website with ASP pages, this is what happens:
    If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
    setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations :
    The effect of Unicode data on performance is complicated by a variety of factors that include the following:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules 
    2. The difference between sorting double-byte and single-byte characters 
    3. Code page conversion between client and server
    Operations such as >, < and ORDER BY are resource-intensive, and it is difficult to get correct results if code page conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
    because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
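    For illustration of point 1 (a sketch only; the collation names are just common SQL Server defaults chosen for the example), the same varchar data can order differently under SQL (non-Unicode) sorting rules and Windows (Unicode) sorting rules:
    -- SQL collation: non-Unicode "string sort" rules for varchar data
    select c from (values ('co-op'), ('coop'), ('COOP')) as t(c)
    order by c collate SQL_Latin1_General_CP1_CS_AS;
    -- Windows collation: Unicode "word sort" rules, where the hyphen is treated as secondary,
    -- so the relative order of 'co-op' and 'coop' can differ from the query above
    select c from (values ('co-op'), ('coop'), ('COOP')) as t(c)
    order by c collate Latin1_General_CS_AS;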
    Non-Unicode:
    Non-Unicode is the opposite of Unicode. Using non-Unicode data types it is easy to store languages like English, but not other Asian languages that need more bytes to store correctly; otherwise truncation occurs.
    Now, let's see some of the advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database, so we save a lot of hard disk space.
    2. Moving database files from one server to another takes less time.
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode Data Types: Comparison Chart
    The primary difference between non-Unicode and Unicode data types is that Unicode can easily handle the storage of foreign-language characters, which also requires more storage space.
    Non-Unicode (char, varchar, text) vs. Unicode (nchar, nvarchar, ntext):
    1. Both store data in fixed or variable length.
    2. char: data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters, the system pads it with 5 blanks); nchar behaves the same.
    3. varchar: stores the actual value and does not pad with blanks; nvarchar behaves the same.
    4. Storage: non-Unicode requires 1 byte per character; Unicode requires 2 bytes per character.
    5. Capacity: char and varchar can store up to 8000 characters; nchar and nvarchar can store up to 4000 characters.
    6. Non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters."
    7. Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
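    For illustration only (a minimal T-SQL sketch, not from the linked article), DATALENGTH shows the byte cost from point 4 of the chart directly:
    declare @a varchar(10)  = 'hello';     -- 1 byte per character
    declare @n nvarchar(10) = N'hello';    -- 2 bytes per character (UCS-2)
    select datalength(@a) as varchar_bytes,   -- 5
           datalength(@n) as nvarchar_bytes;  -- 10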
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Multilingual on non-unicode App

    We have a non-Unicode Windows application talking (via ODBC) to an Oracle 9i/Solaris back end. It has a single database that needs to store address and contact data across Western/Eastern Europe, so we need to support a number of code pages.
    Users will only access data encoded in their own code page (and this will be set locally on their Windows sessions). We can encode character data correctly when we load it into the database, and Windows takes care of displaying it correctly.
    Am I correct in assuming that we can use a single-byte database character set on Oracle? Essentially we want to stop Oracle from doing any character translation when either writing or reading data.

    I don't think you can do this, and even if you could, it would not be supported by Oracle. You really should use Unicode and allow Oracle to store and convert the data correctly. If you set the client NLS_LANG to the same character set as the Oracle database character set, you are telling Oracle not to convert the data when storing and retrieving it. This could potentially allow you to do what you have in mind, but it is fraught with issues. The data will not be stored properly. If an application that does not depend on NLS_LANG tries to read it, it will likely see the data as garbage. Later on, when migrating the data, you may also run into problems. Just my 2 cents.
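    To make this concrete (a sketch only; the character set value below is an assumption for illustration), the server-side setting and the matching "pass-through" client setting would look like this:
    -- server side: the database character set
    select value
      from nls_database_parameters
     where parameter = 'NLS_CHARACTERSET';
    -- if this returned WE8MSWIN1252, the pass-through client setting discussed above would be
    --   NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
    -- which disables client/server conversion, with all the risks described in this reply.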

  • Upgrade 4.6C to ECC 6.0; Unicode - non-Unicode -- Unicode

    Hi,
    My client has an R/3 4.6C Unicode system and wants to upgrade to ECC 6.0 Unicode.
    According to an SAP document, I should upgrade directly to ECC 6.0 non-Unicode and then convert the system to ECC 6.0 Unicode.
    I am wondering why the path is Unicode -> non-Unicode -> Unicode.
    Regards,
    Toan Do

    "4.6C is not a unicode system ,ecc6.0 is an unicode system."
    Actually, ECC 6.0 only has to be unicode if you are a new installation.  If you are upgrading from a non-unicode system, you can upgrade to a non-unicode ECC 6.0 system, this is fully supported by SAP.
    However, it is recommended that you perform the upgrade and unicode conversion at the same time. SAP is going to force us to unicod some day anyway.
    We have just completed a technical upgrade from 4.7 to ECC 6 non-unicode, and now we are looking at a unicode conversion project.  It would have added time to the original upgrade project, but it would have been worth it.
    T

  • Non-Unicode Support Post NetWeaver 2004s

    Hi everyone,
    Our site runs an existing non-Unicode environment, and we are aware that we can definitely upgrade our existing environments to NetWeaver 2004s (Basis 7.0).
    However, will SAP provide future upgrade kits for later releases of SAP (e.g. Basis 8.x and higher) for non-Unicode environments?
    Cheers
    Shaun

    Thanks Matt and Eddy for your replies.
    I had already read the "blog", Matt, and Eddy, I have spent a heap of time in the Service Marketplace reading everything that's available on Unicode, including OSS notes.
    However, here is a snippet from the blog that Matt directed me to:
    ========================================================
    2. This change only really effects NEW installations of SAP NetWeaver and SAP applications based on SAP NetWeaver that were previously available with non-Unicode installation options.
    3. Existing installations of SAP NetWeaver and SAP applications based on SAP NetWeaver that were based on single code pages can be upgraded to the new releases WITHOUT having to convert to Unicode. This is really cool as it protects your existing investments and you can live in the non Unicode world for a <b>little longer</b>.
    ========================================================
    Note on "3." that is says at the end there "for a little longer". ... how long?  too what release?
    From what I have read so far it sounds like anything above NetWeaver Application Server 7.0 will require a unicode conversion.
    Any comments??
    Ideally I would like someone to just give me a yes/no answer.
    Cheers
    Shaun

  • Can we make ABAP programs Unicode-enabled after the SAP system is converted to Unicode?

    Hello Experts,
    Can we convert the non-Unicode ABAP programs to Unicode after upgrading a non-Unicode SAP system to Unicode?
    Is there any serious problem if a non-Unicode SAP system is upgraded to Unicode without converting all non-Unicode ABAP programs to Unicode?
    Thanks in advance.
    Hari

    Hi
    There is no need to correct the programs from non-Unicode to Unicode.
    After migrating the system from non-Unicode to Unicode, you have to apply certain Notes (which will be done by Basis) to take care of this.
    <b>Reward points for useful Answers</b>
    Regards
    Anji

  • See Unicode  and Volume in my database

    Hi
    How can I see the character set (Unicode or not) and the volume (size) of my database?
    thank

    For example:
    select * from nls_database_parameters where parameter like '%SET%';
    gives the character sets used by your database.
    select sum(bytes)/(1024*1024*1024) from v$datafile;
    gives the size of all datafiles (in GB) of your database (excluding tempfiles, control files and redo log files).
    select sum(bytes)/(1024*1024*1024) from dba_segments;
    gives the size of all database objects (in GB) in your database.
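    If you also want the free space still available inside those datafiles (an addition in the same spirit, not part of the original reply):
    select sum(bytes)/(1024*1024*1024) from dba_free_space;
    gives the free space (in GB) in your database's tablespaces (excluding temp).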

  • Can a Unicode system have a non-Unicode database?

    I have installed NW2004 Unicode.
    But if I install NW2004 Unicode, is the database also Unicode or not?

    Hi,
    Whether a system is Unicode or non-Unicode depends on how many bytes are reserved per character at the database level:
    if it is 1 byte, it is non-Unicode and supports only languages such as English and German;
    if it is 2 bytes (or more), it is Unicode.
    So the database is created with a Unicode character set once you install SAP as a Unicode system.
    Samrat

  • Can we read an Oracle non-Unicode database into an SAP Unicode dataset?

    Hi
    Can we read from an Oracle non-Unicode database into an SAP Unicode environment using OPEN DATASET and TRANSFER, using a connection?
    Regards

    Hello Jacques,
    please check sapnote #808505 (Secondary connection to Oracle DB w/ different character set).
    Regards
    Stefan
