Convert a Non-UTF8 Database to UTF8

Hi,
We currently have a database set up to use a non-UTF8 character set; I believe it is configured for Western European languages only. We now want to expand our database to support other languages such as Arabic and Japanese. To do this we would like to make a copy of our existing database and change its character set to UTF8.
What sort of implications will this have for our existing table structures, packaged code, etc.?
Is there a documented way that this can be done?
Any advice would be greatly appreciated.
Thanks.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1486
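
As a starting point, it is worth confirming what the source database currently uses before planning the migration (the conversion itself should follow the CSSCAN checks and the export/import or CSALTER paths described in that chapter). A minimal sketch:

-- check the current database and national character sets
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');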

Similar Messages

  • Peoplesoft convert Oracle non-unicode database to unicode database

    I am following doc 1437384.1 to convert a PeopleSoft database from non-Unicode to Unicode.
    I use the following export script (as user PS):
    SET NO TRACE;
    SET OUTPUT output_file.dat;
    SET NO DATA;
    EXPORT *;
    And the following import script (as user sysadm):
    SET NO TRACE;
    SET NO DATA;
    SET INPUT output_file;
    SET LOG log_file;
    SET UNICODE ON;
    SET STATISTICS OFF;
    SET ENABLED_DATATYPE 9.0;
    IMPORT *;
    Before I do the Data Pump import, I am comparing the objects. On the source:
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE   COUNT(*)
    ----------- ----------
    INDEX            33797
    LOB               2775
    TABLE            28829
    TRIGGER              9
    VIEW             21208
    on oracsc63 (targetdb):
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE   COUNT(*)
    ----------- ----------
    INDEX            23748
    LOB               2170
    TABLE            19727
    I don't have the same number of objects. When I do the import, this means that around 10,000 tables will not end up in UTF-8 format.
    Any ideas how I can solve this? Who has experience with these PeopleSoft conversions?

    Hello Jacques,
    please check SAP Note #808505 (Secondary connection to Oracle DB w/ different character set).
    Regards
    Stefan
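
    In case it helps with the object-count mismatch, a minimal sketch for listing which SYSADM objects exist on the source but not on the target, assuming a database link named SOURCEDB created on the target (the link name is hypothetical; adjust to your environment):

    -- objects present on the source but missing on the target
    SELECT object_type, object_name
      FROM dba_objects@sourcedb
     WHERE owner = 'SYSADM'
    MINUS
    SELECT object_type, object_name
      FROM dba_objects
     WHERE owner = 'SYSADM';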

  • Cannot run a UNICODE kernel against a non-UTF8 database

    Hi,
    I am trying to install SAP ECC 6.0 SR2. I am using Windows 2003 Server and an Oracle 10g DB.
    Please help me resolve this...
    sapparam: sapargv( argc, argv) has not been called.
    sapparam(1c): No Profile used.
    sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: START OF LOG: 20100717144107
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#13 $ SAP
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: version R7.00/V1.4 [UNICODE]
    Compiled Jul 17 2007 01:28:45
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe -testconnect
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    DbSl Trace: Cannot run a UNICODE kernel against a non-UTF8 database (charset = AL32UTF8)
    (DB) ERROR: db_connect rc = 256
    DbSl Trace: Default conn.: already connected to DEV
    (DB) ERROR: DbSlErrorMsg rc = 29
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: job finished with 1 error(s)
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: END OF LOG: 20100717144108

    Reinstalled the server. Problem solved, thanks to everyone.
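
    For anyone who hits the same log later: before reinstalling, two quick dictionary checks are usually worth running (plain Oracle SQL, nothing SAP-specific assumed):

    -- the character set the R3load message refers to
    SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
    -- whether SAPUSER exists and which OPS$ schema owns it (the ORA-1403 above)
    SELECT owner, table_name FROM dba_tables WHERE table_name = 'SAPUSER';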

  • Non-UTF8 DADs and PlsqlTransferMode RAW

    Hey, so here's our situation. Oracle DB 10g and OAS 10g 9.0.4.2. We have a lot of DAD-based PL/SQL apps, with DADs set up per language. Some time ago, we converted the databases to UTF8. Everything worked pretty much fine, except for all the errors about language conversion in our logs. We've been getting our app developers/site managers to slowly move over to using new UTF8 DADs, but many of the old language-specific DADs are in use.
    Well, some of the pages had the problem with mismatched content length (see Note:244544.1) but no one saw that as a big deal - until this weekend, when we moved over to a new load balancer, and the LB is waiting for the page to "finish" before passing it on to the user, making all our non-UTF8 DAD apps hang when using non-English languages. Big production problem.
    So the docs seem to say that the magic solution for me is to set all my non-UTF8 DADs to PlsqlTransferMode RAW. I've tested this and it resolves the problem at hand, but it scares all the application managers here who worry that it'll have some kind of side effects.
    So, a question to all: in a reasonably complex environment, with a heavily international Internet user base (wide variety of browsers, character settings, etc.), could there be any unforeseen impact from changing the transfer mode from CHAR to RAW? Or should it be utterly transparent to the end user, given that all the data is stored in the DB in UTF8?
    Thanks,
    Ernest

    There was something similar before:
    "wwv_flow.accept was not found on this server"
    which led to some DAD changes... would reverting those help?
    Regards,
    Damir

  • Unicode data in non-utf8 oracle 8.1.7

    Hi,
    I have to migrate Unicode data from a UTF-8 Oracle 9.0.2 database to a non-UTF8 Oracle 8.1.7 database. The tables are small and I am reading and writing into the database using Java code. The columns which contained the Unicode data have been made NCHAR in Oracle 8.1.7.
    When I try to insert the data, I get the error:
    java.sql.SQLException: ORA-12704: character set mismatch
    Can I have unicode data stored in nchar columns in a non-utf8 database?
    Is there any documentation available on the same?
    Thanks,
    Shipra

    Check out the Oracle Unicode Database Support paper on OTN - http://technet.oracle.com/tech/globalization/content.html
    Basically NCHAR prior to Oracle9i can not be Unicode. If you need to store Unicode data in 8.1.7, you need to use UTF8 as the database character set.
    Nat
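
    For reference, a minimal sketch for checking what the national character set on the 8.1.7 target actually is (prior to 9i it is typically a non-Unicode set, which is why the NCHAR route fails):

    SELECT value
      FROM nls_database_parameters
     WHERE parameter = 'NLS_NCHAR_CHARACTERSET';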

  • [svn:bz-trunk] 20753: * Fixed non UTF8 compliant char in EndpointPushNotifier.java

    Revision: 20753
    Author:   [email protected]
    Date:     2011-03-10 02:40:52 -0800 (Thu, 10 Mar 2011)
    Log Message:
    Fixed non UTF8 compliant char in EndpointPushNotifier.java
    Added tomcat7 support to the maven build of blazeds opt (to support security/Tomcat7Valve.java)
    tested the build with maven3
    Modified Paths:
        blazeds/trunk/modules/core/src/flex/messaging/client/EndpointPushNotifier.java
        blazeds/trunk/modules/opt/pom.xml
        blazeds/trunk/modules/opt/poms/tomcat4/pom.xml
        blazeds/trunk/modules/opt/poms/tomcat6/pom.xml
        blazeds/trunk/modules/pom.xml
    Added Paths:
        blazeds/trunk/modules/opt/poms/tomcat7/
        blazeds/trunk/modules/opt/poms/tomcat7/pom.xml
    Property Changed:
        blazeds/trunk/modules/
        blazeds/trunk/modules/common/src/
        blazeds/trunk/modules/core/src/
        blazeds/trunk/modules/remoting/src/

  • Errors while performing non-Unicode database export

    Hi,
    I am exporting a non-Unicode database (to perform a Unicode conversion). The export completed without any problems, but I frequently got the following messages in the log file.
    Error 1 - UMGCOMCHAR read check skip, no data found; probably old SPUMG
    Error 2 - environment variable I18N_POOL_WIDTH is not set. Checks are active
    Error 3 - I18N_NAMETAB_TIMESTAMPS not in env: checks are ON (Note 738858)
    My questions are:
    1 - Are the above 3 errors an issue?
    2 - When I import the already exported files into a Unicode database, will it cause any problem or loss of data?
    3 - What is the fix for this issue?
    Points to be awarded for any kind of small help.
    Thanks

    Depending on the data that you have in your non-unicode database, potentially data expansions or data loss may occur.
    Data expansions
    For example, a 1-byte character in a VARCHAR2(1) column may expand to 2 or 3 bytes in a Unicode (UTF8) database; hence you may need to redefine your schema prior to importing the data into your new Unicode database.
    Data Loss
    This happens only if you have invalid characters inside your non-Unicode database. For example, you may have some 8-bit non-ASCII characters inside your US7ASCII database; during export these characters will be converted to replacement characters ('?').
    However you can use the character set scanner (csscan) to scan your source database to detect both of the above scenarios. Please visit the Globalization Support section of OTN for more info - http://technet.oracle.com/tech/globalization/content.html
    Regards
    Nat
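
    To make the expansion point concrete, a minimal sketch for finding VARCHAR2 columns declared with byte-length semantics that may need widening before the Unicode import (the schema owner is a substitution placeholder):

    SELECT owner, table_name, column_name, data_length
      FROM dba_tab_columns
     WHERE data_type = 'VARCHAR2'
       AND char_used = 'B'
       AND owner = '&schema_owner';  -- substitute your application schema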

  • Unicode/non-unicode database

    Hi,
    I have two Oracle 8.1.7 databases: a Unicode database and a non-Unicode database.
    Can I export a schema from the non-Unicode database into the Unicode database without problems? What is the real impact?
    Thank you in advance for your help,
    Nicolas.

    Depending on the data that you have in your non-unicode database, potentially data expansions or data loss may occur.
    Data expansions
    For example, a 1-byte character in a VARCHAR2(1) column may expand to 2 or 3 bytes in a Unicode (UTF8) database; hence you may need to redefine your schema prior to importing the data into your new Unicode database.
    Data Loss
    This happens only if you have invalid characters inside your non-Unicode database. For example, you may have some 8-bit non-ASCII characters inside your US7ASCII database; during export these characters will be converted to replacement characters ('?').
    However you can use the character set scanner (csscan) to scan your source database to detect both of the above scenarios. Please visit the Globalization Support section of OTN for more info - http://technet.oracle.com/tech/globalization/content.html
    Regards
    Nat
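
    To complement the expansion warning, a minimal sketch of using character-length semantics on the Unicode side so that a 30-character value still fits after it becomes multi-byte (table and column names are illustrative only):

    -- length counted in characters rather than bytes
    CREATE TABLE customer_utf8 (name VARCHAR2(30 CHAR));
    -- or widen an existing byte-sized column in place
    ALTER TABLE customer MODIFY (name VARCHAR2(30 CHAR));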

  • Xml data into non-xml database.. solution anyone?

    Hi,
    My current project requires me to store the client's data on our servers. We're using Oracle9i. Daily, I will download the client's data for that day and load it into our database. My problem is that the data file is not a flat file, so I can't use SQL*Loader to load it; the data file is an XML file. What is the best way to load XML data into a non-XML database? Are there any tools similar to SQL*Loader that will load XML data into a non-XML database? Is it the best solution for the client to give me an XML dump of their data to load into our database, or should I request a flat file? My last resort would be to write some sort of script to parse the XML data into a flat file and then run it through SQL*Loader. Is this the best solution? One thing to note is that these files could be very large.
    Thanks in advance.
    -PV

    "I assume that just putting the XML file into an extremely large VARCHAR field is not what you want. Instead, you want to extract data elements from the XML and write them to columns in a table in your database. Right?"
    Yes. Your assumption is correct.
    "It sounds like you already have a script that loads a flat file into your database. In that case I would write an XSL transformation that converts the client's XML into a correctly-formatted flat file."
    Thank you. I'll look into that. Other suggestions are welcome.
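
    As an alternative to producing a flat file, a minimal sketch of shredding the XML directly inside Oracle 9i with XMLType (the element paths, table and column names are purely illustrative):

    -- stage the raw document, then project repeating elements into relational columns
    CREATE TABLE xml_stage (doc XMLTYPE);

    INSERT INTO target_table (col1, col2)
    SELECT EXTRACTVALUE(VALUE(x), '/row/col1'),
           EXTRACTVALUE(VALUE(x), '/row/col2')
      FROM xml_stage s,
           TABLE(XMLSEQUENCE(EXTRACT(s.doc, '/rows/row'))) x;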

  • Directory objects exist twice after converting a non-CDB to a PDB

    After converting a non-CDB (originally an Oracle 11g database) to a PDB, I've found that the default directory objects like ORACLE_OCM_CONFIG_DIR exist twice: one with ORIGIN_CON_ID 1 and one with ORIGIN_CON_ID 3. That's not a real issue here, except for DATA_PUMP_DIR. The same directory name is used for two different directory paths (a new one with the path of the CDB and an old one with the path of the original non-CDB). With some tests I found that the "old" one is the one used within the PDB, but I don't know how to clean up the structure. Within CDB$ROOT I can only drop the new directory, and in the PDB I get the error message "ORA-65040: operation not allowed from within a pluggable database".
    Any idea?

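
    A minimal sketch for seeing both definitions side by side before deciding what to clean up, assuming the 12c CDB_DIRECTORIES view with its CON_ID and ORIGIN_CON_ID columns:

    -- run in CDB$ROOT; lists every container's definition of DATA_PUMP_DIR
    SELECT con_id, origin_con_id, directory_name, directory_path
      FROM cdb_directories
     WHERE directory_name = 'DATA_PUMP_DIR';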

  • I have taken off/turned off iCloud on my Mac mini, but when I write an email and use contacts it will convert a non-iCloud email to an iCloud email automatically. I really don't want this. Any way to stop this automatic conversion?

    I have taken off/turned off iCloud on my Mac mini (OS 10.8), but when I send an email and use Contacts, it will convert the non-iCloud email to an iCloud email automatically. Any way to stop this automatic conversion?

    Robert...
    "the iCloud webserver won't accept my password for a .mac login, nor will it allow me to change it"
    See if you can change your password >  Apple - My Apple ID
    If that doesn't help, launch iTunes on your computer.
    From the iTunes menu bar click iTunes / Preferences, then select the Advanced tab.
    Click: Reset warnings and Reset cache
    Click OK.
    Restart your computer.
    If that doesn't help...
    "Moreover, when I try to go into my .mac account on the web,"
    Delete all Apple cookies and empty your browser cache.
    See if you can access your account at iCloud.com

  • Can Jdeveloper Be Used For Non-Oracle Databases

    I have been trying to evaluate Jdeveloper 9i and Jbuilder 7 Enterprise for Swing database development. I am particularly interested in the productivity enhancements such as BC4J and Jclient. The underlying database might be Oracle, SapDb (excellent, easy to use, and free), SQLServer, etc.
    I evaluated Jbuilder Enterprise tools and it worked flawlessly with SapDb. This emphasized using their DataExpress and DbSwing components which provide many useful capabilities similar to BC4j and Jclient. It also involved using their DBPilot tool which allows browsing similar to that provided via a Jdeveloper connection.
    I tried to use JDeveloper for the same SapDB and it is essentially non-functional. I followed the instructions for using a non-default driver and tried to define a connection. It behaved inconsistently: it often says no suitable driver can be found, yet when you edit the connection and test it without making any changes, it works. If you try to establish a BC4J definition, it is very inconsistent and fails to recognize important details such as foreign keys. Even if you define a ViewLink manually, it still does not work properly if you attempt to define a master-detail JClient form. As I have stated, all these types of capabilities worked flawlessly in their JBuilder equivalent. Furthermore, I really like the fact that JBuilder gives many of BC4J's benefits without needing a BC4J J2EE container.
    Has anyone had real success using Jdeveloper's advanced features to develop for non-Oracle databases and if so, how did you get around these types of problems?

    Hi,
    generally, SCAN can be used for 10g databases and you discovered the first half: for 10g databases you will have to modify the REMOTE_LISTENER entry for each 10g database instance to point to the SCAN listeners (as opposed to pointing to the remote local listeners, which is the default in 10g). You could even have the databases register themselves with SCAN and the remote listeners, if you wanted to... It's more or less a matter of configuration. But for simplicity, I will stick to the case where you have your 10g databases register with the SCAN listeners only.
    Now the other half is the client and the client configuration. An 11g Rel. 2 client configured for RAC would have a TNSNAMES entry that has only one address line for the RAC databases. The host entry in this one address line should point to the SCAN (the SCAN name is ideally resolved in DNS). A 10g client configured for RAC would have as many address lines in the TNSNAMES as you have nodes in the cluster.
    The 10g client SCAN configuration would then be in the middle so to speak: You would have 3 address lines in your TNSNAMES, in which each host entry would resolve to one SCAN address (I assume you will use the recommended default of 3 SCAN IPs). If you choose, you can have a name resolution for each of your SCAN IPs, but this would not be required. Now, why would you do it this way? Because this configuration will always work and does not make you dependent on certain functionality that your DNS server may or may not offer.
    For the remaining questions: SCAN is a DNS entry resolving one name to more than one (typically 3) IP address. OID is short for Oracle Internet Directory, which is a complete LDAP server. And you are right that there is no document from Oracle yet on how to configure 10g clients for SCAN. However, there is quite a good document on SCAN on otn.oracle.com/rac, but I am sure you are aware of it already.
    Hope that helps. Thanks,
    Markus
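
    For the REMOTE_LISTENER change Markus describes, a minimal sketch run on each 10g instance (the SCAN name and port are placeholders; depending on your setup this can also be a tnsnames.ora alias that resolves to the SCAN addresses):

    ALTER SYSTEM SET remote_listener = 'cluster-scan.example.com:1521' SCOPE=BOTH SID='*';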

  • [Solved] if(Transaction specified for a non-transactional database) then

    I am getting started with BDB XML 2.4.14 transactions and XQuery update functionality, and I am having some difficulty with 'insert nodes ...': transactions fail with 'Transaction specified for a non-transactional database'.
    Thanks for helping out.
    Setup:
    I have coded up a singleton manager for the XmlManager with a ThreadLocal holding the transaction and a query method to execute XQueries. The setup goes like this:
    environmentConfig = new EnvironmentConfig();
    environmentConfig.setRunRecovery(true);
    environmentConfig.setTransactional(true);
    environmentConfig.setAllowCreate(true);
    environmentConfig.setInitializeCache(true);
    environmentConfig.setTxnMaxActive(0);
    environmentConfig.setInitializeLocking(true);
    environmentConfig.setInitializeLogging(true);
    environmentConfig.setErrorStream(System.err);
    environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
    environmentConfig.setJoinEnvironment(true);
    environmentConfig.setThreaded(true);

    xmlManagerConfig = new XmlManagerConfig();
    xmlManagerConfig.setAdoptEnvironment(true);
    xmlManagerConfig.setAllowAutoOpen(true);
    xmlManagerConfig.setAllowExternalAccess(true);

    xmlContainerConfig = new XmlContainerConfig();
    xmlContainerConfig.setAllowValidation(false);
    xmlContainerConfig.setIndexNodes(true);
    xmlContainerConfig.setNodeContainer(true);
    // initialize
    instance.xmlManager = new XmlManager(instance.getEnvironment(), instance.getXmlManagerConfig());
    instance.xmlContainer = instance.xmlManager.openContainer(containerName, instance.getXmlContainerConfig());
    private ThreadLocal<XmlTransaction> transaction = new ThreadLocal<XmlTransaction>();
    public XmlTransaction getTransaction() throws Exception {
        if (transaction.get() == null) {
            XmlTransaction t = xmlManager.createTransaction();
            log.info("Transaction created, id: " + t.getTransaction().getId());
            transaction.set(t);
        } else if (log.isDebugEnabled()) {
            log.debug("Reusing transaction, id: "
                    + transaction.get().getTransaction().getId());
        }
        return transaction.get();
    }

    private XmlQueryContext createQueryContext(String docName) throws Exception {
        XmlQueryContext context = xmlManager.createQueryContext(
                XmlQueryContext.LiveValues, XmlQueryContext.Lazy);
        List<NamespacePrefix> namespacePrefixs = documentPrefixes.get(docName);
        // declare ddi namespaces
        for (NamespacePrefix namespacePrefix : namespacePrefixs) {
            context.setNamespace(namespacePrefix.getPrefix(),
                    namespacePrefix.getNamespace());
        }
        return context;
    }

    public XmlResults xQuery(String query) throws Exception {
        XmlQueryExpression xmlQueryExpression = null;
        XmlQueryContext xmlQueryContext = getQueryContext(docName);
        try {
            xmlQueryExpression = xmlManager.prepare(getTransaction(), query,
                    xmlQueryContext);
            log.info(query.toString());
        } catch (Exception e) {
            if (xmlQueryContext != null) {
                xmlQueryContext.delete();
            }
            throw new DDIFtpException("Error prepare query: " + query, e);
        }
        XmlResults rs = null;
        try {
            rs = xmlQueryExpression.execute(getTransaction(), xmlQueryContext);
        }
        // catch deadlock and implement retry
        catch (Exception e) {
            throw new DDIFtpException("Error on query execute of: " + query, e);
        } finally {
            if (xmlQueryContext != null) {
                xmlQueryContext.delete();
            }
            xmlQueryExpression.delete();
        }
        return rs;
    }
    <?xml version="1.0" encoding="UTF-8"?>
    <Test version="0.1">
      <Project id="test-project" agency="dda">
        <File id="large-doc.xml" type="ddi"/>
        <File id="complex-doc.xml" type="ddi"/>
      </Project>
      <Project id="2nd-project" agency="test.org"/>
    </Test>
    Problem:
    All the queries are run through the xQuery method and I do delete the XmlResults afterwards. How do I get around the 'Transaction specified for a non-transactional database' error? What are the transactions doing? How do I get state information out of a transaction? What am I doing wrong here?
    1 First I insert a node:
    Transaction created, id: -2147483647
    Adding document: large-doc.xml to xml container
    Reusing transaction, id: -2147483647
    Working doc: ddieditor.xml
    Root element: Test
    Reusing transaction, id: -2147483647
    insert nodes <Project id="JUnitTest" agency="test.org"></Project> into doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test
    Reusing transaction, id: -2147483647
    2 Then do a query:
    Reusing transaction, id: -2147483647
    doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
    Reusing transaction, id: -2147483647
    3 The same query again:
    Reusing transaction, id: -2147483647
    doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
    Reusing transaction, id: -2147483647
    4 Delete a node:
    Reusing transaction, id: -2147483647
    delete node for $x in doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project where $x/@id = '2nd-project' return $x
    Reusing transaction, id: -2147483647
    5 Then an error on query:
    Reusing transaction, id: -2147483647
    doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
    Reusing transaction, id: -2147483647
    Transaction specified for a non-transactional database
    com.sleepycat.dbxml.XmlException: Error: Invalid argument, errcode = DATABASE_ERROR
         at com.sleepycat.dbxml.dbxml_javaJNI.XmlResults_hasNext(Native Method)
         at com.sleepycat.dbxml.XmlResults.hasNext(XmlResults.java:136)

    OK, I got it solved by increasing the locks, lockers and mutexes; I also increased the log buffer size:
    environmentConfig = new EnvironmentConfig();
    // general environment
    environmentConfig.setAllowCreate(true);
    environmentConfig.setRunRecovery(true); // light recovery on startup
    // environmentConfig.setRunFatalRecovery(true); // heavy recovery on startup
    environmentConfig.setJoinEnvironment(true); // reuse of environment: ok
    environmentConfig.setThreaded(true);
    // log subsystem
    environmentConfig.setInitializeLogging(true);
    environmentConfig.setLogAutoRemove(true);
    environmentConfig.setLogBufferSize(128 * 1024); // default 32KB
    environmentConfig.setInitializeCache(true); // shared memory region
    environmentConfig.setCacheSize(2500L * 1024 * 1024); // ~2.5 GB cache (long literal avoids int overflow)
    // transaction
    environmentConfig.setTransactional(true);
    environmentConfig.setTxnMaxActive(0); // live forever, no timeout
    // locking subsystem
    environmentConfig.setInitializeLocking(true);
    environmentConfig.setMutexIncrement(22);
    environmentConfig.setMaxMutexes(200000);
    environmentConfig.setMaxLockers(200000);
    environmentConfig.setMaxLockObjects(200000); // default 1000
    environmentConfig.setMaxLocks(200000);
    // deadlock detection
    environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
    The Oracle docs give only limited information about the impact of these settings and their options. Can you point me in a direction where I can find some written guidance, or is it mostly hands-on experience?

  • How do I convert data from an Oracle database into an Excel sheet

    How do I convert data from an Oracle database into an Excel sheet?
    I need to export columns and their data from the Oracle database to a Microsoft Excel sheet.
    Please let me know the different ways of doing this.
    Thanks.

    asktom.oracle.com has an excellent article on writing a PL/SQL procedure that dumps data to an Excel spreadsheet; search for 'Excel' and it'll come up.
    You can also use your favorite connection protocol (ODBC, OLE DB, etc) to connect from Excel to Oracle and pull the data out that way.
    Justin
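
    As a third option, a minimal SQL*Plus sketch that spools a comma-separated file Excel can open directly (the EMP columns are placeholders for your own table):

    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
    SPOOL emp.csv
    SELECT empno || ',' || ename || ',' || sal FROM emp;
    SPOOL OFF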

  • When converting a PDF to an excel file, the format converts but none of the wording comes across

    When converting a PDF file to an excel file, the format converts but none of the wording comes across

