Connect as: normal with Oracle 12c and Toad 12 FAILS.

I've looked at forums, blogs and wikis, but I couldn't find out why my new user, which I created earlier, can't connect as NORMAL but can connect as SYSDBA.
I don't know why this happens.
My intention is to create a user/schema in a local database so that I have my own schema.
I can create a new user, but when I go to connect to this new schema I can only connect as SYSDBA. This is a big problem, because then I can "see" the DBA's objects and I don't want that; I want to create my own schema and not share object or table names with the DBA.
For example, an object named JOB already exists for SYSDBA, so I can't create a new table with that name.
Please, help me.
Thanks very much for reading.

Hello rp0428!!
I'll go through all the steps you listed.
In SQL*Plus:
sho con_name
CON_NAME
CDB$ROOT
SELECT name, created, open_mode FROM v$database;
NAME CREATED                OPEN_MODE
ORCL 23/07/2013 15:59:44 READ WRITE
SELECT username, account_status, lock_date, expiry_date FROM dba_users WHERE USERNAME like '%IMEI%' ORDER BY 1;
USERNAME    ACCOUNT_STATUS  LOCK_DATE   EXPIRY_DATE
IMEILOCAL   OPEN                        22/01/2014 12:20:25
SELECT USERNAME,CON_ID,USER_ID FROM CDB_USERS WHERE USERNAME like '%IMEI%';
USERNAME     CON_ID   USER_ID
IMEILOCAL      3               117
select GRANTEE,con_id from cdb_ROLE_PRIVS where GRANTED_ROLE='CONNECT' AND GRANTEE LIKE '%IMEI%';
GRANTEE                  con_id
IMEILOCAL                3
SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME       CON_ID    DBID                CON_UID         GUID
ORCLC    3               2835062256     2835062256     14236144864B451C8E04D5C6453034FA
To create my user I did:
select con_id,dbid,NAME,OPEN_MODE from v$pdbs;
2    4064112103    PDB$SEED    READ ONLY
3    2835062256    ORCLC      MOUNTED
alter PLUGGABLE database ORCLC open;
select con_id,dbid,NAME,OPEN_MODE from v$pdbs;
--2    4064112103    PDB$SEED    READ ONLY
--3    2835062256    ORCLC      READ WRITE
alter session set container=ORCLC;
CREATE TABLESPACE DATA3 DATAFILE 'C:\app\X05699SA\oradata\orcl\DATA3.dbf' SIZE 50M;
CREATE USER IMEILOCAL IDENTIFIED BY IMEILOCAL DEFAULT TABLESPACE DATA3 TEMPORARY TABLESPACE TEMP PROFILE DEFAULT ACCOUNT UNLOCK;
select username, password, created, password_versions, default_tablespace from dba_users where username like '%IMEI%';
USERNAME    PASSWORD  CREATED              PASSWORD_VERSIONS  DEFAULT_TABLESPACE
IMEILOCAL             24/07/2013 16:41:35  11G                DATA3
GRANT CREATE SESSION TO IMEILOCAL;
GRANT CREATE TABLE TO IMEILOCAL;
GRANT CREATE VIEW TO IMEILOCAL;
GRANT CREATE procedure TO IMEILOCAL;
GRANT CREATE trigger TO IMEILOCAL;
GRANT CONNECT TO IMEILOCAL;
GRANT CREATE SEQUENCE to IMEILOCAL;
GRANT create any context TO IMEILOCAL;
GRANT create public synonym TO IMEILOCAL;
GRANT execute on dbms_rls TO IMEILOCAL;
GRANT administer database trigger TO IMEILOCAL;
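As a quick sanity check at this point (a sketch only, run in a separate session; it assumes the PDB's default service is named ORCLC and is registered with the listener, which lsnrctl status would confirm), one could try a NORMAL connection as the new user from SQL*Plus:
connect IMEILOCAL/IMEILOCAL@//localhost:1521/ORCLC
show user
show con_name
select count(*) from session_privs;
SHOW USER should report IMEILOCAL and SHOW CON_NAME should report ORCLC; if the CONNECT itself fails with ORA-01017 or ORA-12514, the problem is in how the connection reaches the PDB rather than in the grants above.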
SELECT user, osuser, terminal, program FROM gv$session WHERE sid = (SELECT sid FROM v$mystat WHERE rownum = 1);
user     osuser                                 terminal                      program
SYS    SECTORIALES\X05699SA MX3500906DC1549 Toad.exe
SELECT u.username, u.default_tablespace, u.temporary_tablespace "TMP TBS", u.profile, r.granted_role,
r.admin_option, r.default_role
FROM sys.dba_users u, sys.dba_role_privs r
WHERE u.username = r.grantee and u.username like '%IMEI%'
GROUP BY u.username, u.default_tablespace, u.temporary_tablespace, u.profile, r.granted_role,
r.admin_option, r.default_role;
IMEILOCAL USERS TEMP DEFAULT CONNECT NO YES
SELECT tablespace_name FROM dba_tablespaces;
SYSTEM
SYSAUX
TEMP
USERS
EXAMPLE
DATA3
SELECT name, password FROM user$ where name like '%IMEI%';
IMEILOCAL passwordx
select username, user_id, created, common, oracle_maintained from all_users WHERE username like '%IMEI%' order by 1;
USERNAME    USER_ID  CREATED              COMMON  ORACLE_MAINTAINED
IMEILOCAL   117      24/07/2013 16:41:35  NO      N
select * from user$ where name like '%IME%';
117 IMEILOCAL 1 passwordx 3 2 24/07/2013 16:41:35 26/07/2013 12:20:25   0  1   0 0 DEFAULT_CONSUMER_GROUP  0   S:566C0A818AC42C203D49706D3586926A7656F5B16AA6C37E8FE10A1F779B;H:6FB057BA9F5B0690B93FD9A20695654D      
Here is my tnsnames.ora:
# tnsnames.ora Network Configuration File: C:\app\X05699SA\product\12.1.0\dbhome_1\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.iecisa.corp)
    )
  )
And here is my listener.ora:
# listener.ora Network Configuration File: C:\app\X05699SA\product\12.1.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\X05699SA\product\12.1.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\X05699SA\product\12.1.0\dbhome_1\bin\oraclr12.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )
I think the source of my problem is the CDB/PDB architecture that is new in Oracle 12c, and that I have not configured my new database, or my new user, correctly.
Thanks for all!!!
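For reference, and judging only from the output above: a likely explanation is that IMEILOCAL is a local user in the pluggable database ORCLC (CON_ID 3), while the ORCL alias in tnsnames.ora points at the root container's service orcl.iecisa.corp, where that user does not exist, so a NORMAL connection through ORCL fails; "Connect as: SYSDBA" still works because it puts you into the session as SYS. A minimal sketch of a fix, assuming the PDB registers a service named ORCLC (it may be orclc.iecisa.corp given the db_domain; lsnrctl status shows the exact name), is to add a tnsnames.ora alias for the PDB and connect through it:
# sketch only: alias for the PDB; adjust SERVICE_NAME to what lsnrctl status reports
ORCLC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLC)
    )
  )
With that alias in place, sqlplus IMEILOCAL/IMEILOCAL@ORCLC, or the same alias in Toad with "Connect as: Normal", should land directly in the new schema inside the PDB, without seeing the SYS objects.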

Similar Messages

  • Inconsistence with oracle express and toad

    I created two accounts (A and B) under the SYSTEM account in Oracle Express; then under SYSTEM, A and B, I created one table in each account (USER_TABLE). I can see all three tables from Oracle Express, but I can't see the tables from Toad. I can't even see the A and B schemas from Toad. Does anyone know what the problem is?
    Thanks,
    -- Allen --

    I think I am accessing the correct database, since there is only one database instance. Only SYSTEM has the DBA privilege; A and B are just regular accounts. The accounts I created from Oracle Express should show up in Toad, but I can't see them, although I can see the HR schema from Toad.
    Thanks,
    -- Yu Hu --
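    A hedged way to narrow this down (assuming the Toad session really points at the same XE instance) is to run the same checks from the Oracle Express web UI and from the Toad session and compare the results; for example:
    select sys_context('USERENV','DB_NAME')      as db_name,
           sys_context('USERENV','SESSION_USER') as connected_as
    from dual;
    select username from all_users order by username;  -- should list A and B if they exist in this database
    If the two tools disagree on DB_NAME, they are not connected to the same database; if ALL_USERS lists A and B in both, the issue is more likely a Toad display filter than a missing schema.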

  • Connection pool error with oracle 11g and weblogic 10

    Hi,
    my code is:
    public Connection getConnection() {
        properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.T3InitialContextFactory");
        //properties.put(Context.SECURITY_PRINCIPAL, "weblogic");
        //properties.put(Context.SECURITY_CREDENTIALS, "weblogic");
        properties.put(Context.PROVIDER_URL, "t3://172.23.61.214:7001/");
        try {
            initialContext = new InitialContext(properties);
            datasource = (DataSource) initialContext.lookup("sample_jndi");
            try {
                connection = datasource.getConnection();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        } catch (NamingException e) {
            e.printStackTrace();
        }
        return connection;
    }
    It throws an exception at the line " connection = datasource.getConnection(); "
    The exception is:
    javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
         java.io.EOFException]
         at weblogic.jrmp.Context.lookup(Context.java:189)
         at weblogic.jrmp.Context.lookup(Context.java:195)
         at javax.naming.InitialContext.lookup(Unknown Source)
         at com.code.sample.connectionDB.JDBCConnectionPool.getConnection(JDBCConnectionPool.java:35)
         at com.code.sample.connectionDB.JDBCConnectionPool.main(JDBCConnectionPool.java:52)
    Caused by: java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
         java.io.EOFException
         at sun.rmi.transport.tcp.TCPChannel.createConnection(Unknown Source)
         at sun.rmi.transport.tcp.TCPChannel.newConnection(Unknown Source)
         at sun.rmi.server.UnicastRef.newCall(Unknown Source)
         at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
         at weblogic.jrmp.Context.lookup(Context.java:185)
         ... 4 more
    Caused by: java.io.EOFException
         at java.io.DataInputStream.readByte(Unknown Source)
         ... 9 more
    Please advise... Thanks

    I removed the "/" and tried..
    But the exception is same:
    javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
         java.io.EOFException]
         at weblogic.jrmp.Context.lookup(Context.java:189)
         at weblogic.jrmp.Context.lookup(Context.java:195)
         at javax.naming.InitialContext.lookup(Unknown Source)
         at com.code.sample.connectionDB.JDBCConnectionPool.getConnection(JDBCConnectionPool.java:35)
         at com.code.sample.connectionDB.JDBCConnectionPool.main(JDBCConnectionPool.java:52)
    Caused by: java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
         java.io.EOFException
         at sun.rmi.transport.tcp.TCPChannel.createConnection(Unknown Source)
         at sun.rmi.transport.tcp.TCPChannel.newConnection(Unknown Source)
         at sun.rmi.server.UnicastRef.newCall(Unknown Source)
         at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
         at weblogic.jrmp.Context.lookup(Context.java:185)
         ... 4 more
    Caused by: java.io.EOFException
         at java.io.DataInputStream.readByte(Unknown Source)
         ... 9 more

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks of the TOKEN_INFO column are not in memory then performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size) and, by applying the technique from the white paper, I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD') the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded in the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option but it remains constant even after a ctx_dll.optimize_index. The procedure to read the LOB and to load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is much used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR I table. A final note: doing SEPARATE_OFFSETS = 'YES' was very bad in my case, the combined size of the two columns is much bigger than having only the TOKEN_INFO column and both columns are read.
    Here is an example of how to reproduce the problem with the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
     XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- dbms_lob.read on a BLOB needs a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
    end loadTokenInfo;
    /
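    To double-check that the LOB settings above took effect (a sketch; it assumes the DR$I_TEST$I table name used earlier), something like this can be run as the index owner:
    select table_name, column_name, cache, securefile
    from   user_lobs
    where  table_name = 'DR$I_TEST$I';
    CACHE and SECUREFILE should both show YES once the BIG_IO storage and the ALTER TABLE commands above are in place.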
    Rgds, Pierre

    I have been working a lot on that issue recently, I can give some more info.
    First I totally agree with you, I don't like to use the keep_pool and I would love to avoid it. On the other hand, we have a specific use case : 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer facing application that uses the text index to search the database : performance is critical for them.
    What kind of performance do you have with your application ?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. With MongoDB they explicitly say that the index must be in memory. With Elasticsearch, they use JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in the cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that the track I have been following is to put the index in a 16K tablespace : in this case the space used by the index remains more or less flat (increases but much more reasonably). The difficulty here is to pin the index in memory because the trick of R. Ford was not working anymore.
    What worked:
    first set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) ends up in the tablespace with the non-standard block size of 16K.
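    As a sketch of those two steps (the cache size and the tablespace name TS16K here are illustrative assumptions, not values from this thread):
    alter system set db_keep_cache_size = 0 scope=both;
    alter system set db_16k_cache_size = 4G scope=both;
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TS16K storage (initial 64K)');
    The 16K tablespace itself has to exist (created with BLOCKSIZE 16K) before the index is rebuilt with that storage preference.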
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; the event 10949 is there to avoid the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
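    A hedged way to verify that the warm-up really left the blocks in the cache (assuming access to V$BH and DBA_OBJECTS) is to count the cached blocks per DR# table, for example:
    select o.object_name, count(*) as cached_blocks
    from   v$bh b, dba_objects o
    where  o.data_object_id = b.objd
    and    o.object_name like 'DR#IDXNAME%$I'
    group by o.object_name
    order by o.object_name;
    If the counts are close to the number of blocks reported for those tables, the pre-load worked; if they stay small, the reads probably still went down the direct path.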
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in Metalink note 1645634.1, but for a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (this enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find one on Metalink.
    Other points of attention with text index creation (things that surprised me at first):
    - if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that on RAC you may see no activity on your own node, can be very frightening: Oracle can choose to start the workers on the other node.
    I now understand much better how the text indexing works; I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Steps to UTF-8 Encoding with Oracle 8i and Weblogic 6.1SP1

    What are the steps to UTF-8 encoding with Oracle 8i and WebLogic 6.1SP1?
    I have:
    - Oracle 8.1.5 database created with character set=UTF8 and national character set=UTF8
    - Weblogic 6.1SP1 without any encoding mechanism set (though I did play with
      <jsp-param><param-name>encoding</param-name>
      <param-value>UTF-8</param-value>
      </jsp-param>
      in the weblogic.xml for a while, though it seemed not to make a difference)
    - JSP pages set to content='text/html; charset=UTF-8'
    - JSP form POSTs set to enctype="UTF-8"
    I can copy and paste Chinese Kanji from a UTF-8 encoded web page into form text boxes, but when I post the data it comes back as different Kanji. Then once it is posted the Kanji stays the same on repeated posts. The same Kanji text also looks different when viewed in a form text box than when viewed as straight text on the page.
    Is there anything else? Or am I already encoding characters twice?
    Please help!
    Mel Christie
              

    Hi Experts,
    Please correct me if I am asking the question in the wrong way.
    I have ArcGIS with an Oracle 10gR2 database on the production server.
    My job is to connect the AUTOCAD software (a client computer connected over the LAN) to ArcGIS in order to access the toposheets available in the SDE user.
    When I try to connect I get this error: "The specified credentials are not valid or the provider is not able to establish a connection."
    I checked the path to the production server by pinging it, and the user/password too, but that did not help.
    Please help me with this; it is very urgent.
    Thanks.
    Edited by: user13355644 on Jul 3, 2010 3:53 AM
    Edited by: user13355644 on Jul 22, 2011 2:55 AM

  • How to connect pocket pc with oracle 9.2.0.6 databse

    hi,
    I have an Oracle 9 database and an ERP which works with it.
    I need to create an application for Pocket PC which must work with the Oracle 9 database.
    How can I connect a PDA device to the Oracle database?
    How can I use Oracle Database Lite to do this?
    best regards

    I know there was a problem in 6i where you would get a crash if your query returned more than {Max Length} characters of the field representing the CLOB column.

  • Is it possible to connect Sqlfire server with oracle using DBSynchronizer using default license

    Is it possible to connect a SQLFire server with Oracle using DBSynchronizer under the default license?
    SQLFire is on my machine;
    Oracle is on another machine.
    When I connect SQLFire with Oracle using DBSynchronizer, it shows the error message that WAN is not supported for the default license.
    When I connect SQLFire (my system) with Oracle (another system) using the JDBC RowLoader, nothing happens, and after some time it shows an error like "time out for pool connection to archive database". When I increase the timeout, still nothing happens.
    What is the reason for the two errors above (the ones I stated in the paragraph)?
    Is it because a WAN network is not supported by the default evaluation license?
    Could you please explain?

    I don't think this has anything to do with a VMware product.

  • How to start with ORACLE APPS and ORACLE APPLICATION SERVER?

    Hi !!
    I know a little about the Oracle database, but recently I have been asked to update my skills with Oracle Apps and Oracle Application Server. I do not have any prior experience with these Oracle products and really have zero knowledge of them.
    Can anybody help me find a starting point with Oracle Apps and Oracle AS?
    Thanks.

    Welcome
    http://www.oracle.com/technology/documentation/applications.html
    You can download from here.
    Regards
    Asif Kabir
    -- If helpful mark the post as correct/helpful, also close the thread as answered.

  • How to connect google maps with oracle

    Hello people,
    I read an article saying there is a way to connect Google Maps with Oracle.
    Can anyone give us a lesson or more information on how to do that?
    Regards.

    I guess using Google for this is the best way to find the solution:
    http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=How+to+connect+google+maps+with+oracle
    HTH
    Aman....

  • HT1977 I downloaded the itunes and I update it and also I downloaded many apps but when I am connecting my iphone with my pc and want to put the downloaded apps to my iphone and when I am sync them on step 3 the sync process is hidding, plz help me..

    Dear All, please help me.
    I downloaded iTunes and updated it, and I also downloaded many apps, but when I connect my iPhone to my PC and try to put the downloaded apps onto my iPhone, the sync process hangs at step 3 of the sync. I don't know what the problem is; please advise/help me with this.
    My iPhone version is 6.0.
    Regards,
    Thanks,
    Abdullah Misrat

    Connect the device to the computer.
    Open iTunes.
    Select the content desired to sync.
    Sync.

  • JSR168 Portlet Exception with Oracle OC4J and Spring

    I'm having a problem accessing a Spring JSR168 portlet when deployed on Oracle OC4J. I have created a very simple portlet that just renders a simple JSP page. I created two versions: one using GenericPortlet and one using org.springframework.web.portlet.mvc.AbstractController. The non-Spring portlet deploys and runs as expected. The Spring portlet deploys but throws the following exception:
    Code:
    07/11/05 08:23:44 [ERROR] DispatcherPortlet - Could not complete request <javax.portlet.PortletException>javax.portlet.PortletExcept
    ion
    at oracle.portlet.server.containerimpl.RequestDispatcherImpl.include(RequestDispatcherImpl.java:74)
    at org.springframework.web.portlet.DispatcherPortlet.render(DispatcherPortlet.java:1077)
    at org.springframework.web.portlet.DispatcherPortlet.doRenderService(DispatcherPortlet.java:809)
    at org.springframework.web.portlet.FrameworkPortlet.processRequest(FrameworkPortlet.java:475)
    at org.springframework.web.portlet.FrameworkPortlet.doDispatch(FrameworkPortlet.java:445)
    at javax.portlet.GenericPortlet.render(GenericPortlet.java:163)
    at oracle.portlet.server.containerimpl.ServerImpl.getMarkup(ServerImpl.java:161)
    at oracle.portlet.wsrp.v1.WSRPv1ToServer.getMarkup(WSRPv1ToServer.java:4512)
    at oracle.portlet.wsrp.v1.WSRP_v1_Markup_PortTypeSoapToJaxb.getMarkup(WSRP_v1_Markup_PortTypeSoapToJaxb.java:68)
    at oasis.names.tc.wsrp.v1.bind.runtime.WSRP_v1_Markup_Binding_SOAP_Tie.invoke_getMarkup(WSRP_v1_Markup_Binding_SOAP_Tie.java
    :60)
    at oasis.names.tc.wsrp.v1.bind.runtime.WSRP_v1_Markup_Binding_SOAP_Tie.processingHook(WSRP_v1_Markup_Binding_SOAP_Tie.java:7
    79)
    at oracle.j2ee.ws.server.StreamingHandler.handle(StreamingHandler.java:297)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doEndpointProcessing(JAXRPCProcessor.java:413)
    at oracle.j2ee.ws.server.WebServiceProcessor.invokeEndpointImplementation(WebServiceProcessor.java:349)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doRequestProcessing(JAXRPCProcessor.java:277)
    at oracle.j2ee.ws.server.WebServiceProcessor.processRequest(WebServiceProcessor.java:114)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doService(JAXRPCProcessor.java:134)
    at oracle.j2ee.ws.server.WebServiceServlet.doPost(WebServiceServlet.java:177)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:65)
    at oracle.portlet.server.service.ContextFilter.doFilter(ContextFilter.java:86)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:623)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:370)
    at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:871)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:453)
    at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:221)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:122)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:111)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
    at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
    at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: javax.servlet.ServletException: Error in servlet
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:759)
    at com.evermind.server.http.ServletRequestDispatcher.unprivileged_include(ServletRequestDispatcher.java:160)
    at com.evermind.server.http.ServletRequestDispatcher.access$000(ServletRequestDispatcher.java:51)
    at com.evermind.server.http.ServletRequestDispatcher$1.oc4jRun(ServletRequestDispatcher.java:97)
    at oracle.oc4j.security.OC4JSecurity.doPrivileged(OC4JSecurity.java:283)
    at com.evermind.server.http.ServletRequestDispatcher.include(ServletRequestDispatcher.java:102)
    at oracle.portlet.server.containerimpl.RequestDispatcherImpl.include(RequestDispatcherImpl.java:65)
    ... 34 more
    Nested Exception is javax.servlet.ServletException: Error in servlet
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:759)
    at com.evermind.server.http.ServletRequestDispatcher.unprivileged_include(ServletRequestDispatcher.java:160)
    at com.evermind.server.http.ServletRequestDispatcher.access$000(ServletRequestDispatcher.java:51)
    at com.evermind.server.http.ServletRequestDispatcher$1.oc4jRun(ServletRequestDispatcher.java:97)
    at oracle.oc4j.security.OC4JSecurity.doPrivileged(OC4JSecurity.java:283)
    at com.evermind.server.http.ServletRequestDispatcher.include(ServletRequestDispatcher.java:102)
    at oracle.portlet.server.containerimpl.RequestDispatcherImpl.include(RequestDispatcherImpl.java:65)
    at org.springframework.web.portlet.DispatcherPortlet.render(DispatcherPortlet.java:1077)
    at org.springframework.web.portlet.DispatcherPortlet.doRenderService(DispatcherPortlet.java:809)
    at org.springframework.web.portlet.FrameworkPortlet.processRequest(FrameworkPortlet.java:475)
    at org.springframework.web.portlet.FrameworkPortlet.doDispatch(FrameworkPortlet.java:445)
    at javax.portlet.GenericPortlet.render(GenericPortlet.java:163)
    at oracle.portlet.server.containerimpl.ServerImpl.getMarkup(ServerImpl.java:161)
    at oracle.portlet.wsrp.v1.WSRPv1ToServer.getMarkup(WSRPv1ToServer.java:4512)
    at oracle.portlet.wsrp.v1.WSRP_v1_Markup_PortTypeSoapToJaxb.getMarkup(WSRP_v1_Markup_PortTypeSoapToJaxb.java:68)
    at oasis.names.tc.wsrp.v1.bind.runtime.WSRP_v1_Markup_Binding_SOAP_Tie.invoke_getMarkup(WSRP_v1_Markup_Binding_SOAP_Tie.java
    :60)
    at oasis.names.tc.wsrp.v1.bind.runtime.WSRP_v1_Markup_Binding_SOAP_Tie.processingHook(WSRP_v1_Markup_Binding_SOAP_Tie.java:7
    79)
    at oracle.j2ee.ws.server.StreamingHandler.handle(StreamingHandler.java:297)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doEndpointProcessing(JAXRPCProcessor.java:413)
    at oracle.j2ee.ws.server.WebServiceProcessor.invokeEndpointImplementation(WebServiceProcessor.java:349)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doRequestProcessing(JAXRPCProcessor.java:277)
    at oracle.j2ee.ws.server.WebServiceProcessor.processRequest(WebServiceProcessor.java:114)
    at oracle.j2ee.ws.server.JAXRPCProcessor.doService(JAXRPCProcessor.java:134)
    at oracle.j2ee.ws.server.WebServiceServlet.doPost(WebServiceServlet.java:177)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:65)
    at oracle.portlet.server.service.ContextFilter.doFilter(ContextFilter.java:86)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:623)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:370)
    at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:871)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:453)
    at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:221)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:122)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:111)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
    at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
    at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:619)
    2007-11-05 08:23:44.085 WARNING An internal error has occurred in method getMarkup()
    I have managed to deploy and run the Spring portlet using Sun App Server and Sun Portlet container with no problems which leads me to believe my Spring setup is correct.
    I also have managed to run the Spring pets portlet example on Sun App Server and Sun Portlet container with no problems. But again it fails on Oracle with the same problem.
    To deploy on Oracle I ran the oracle wsrp jar against my EAR. As far as I can tell it has not corrupted any of the Spring setup in the web.xml.
    The versions are as follows:
    Oracle Portal 10.1.4 running on OAS 10.1.2,
    calling the portlet using WSRP v1.
    The portlet is running on a standalone OC4J 10.1.3.3 with Oracle portlet container 10.1.3.2, on Windows, with Spring 2.0.6.
    Any ideas? Thanks Paul

    Was an answer ever found? I am currently experiencing the same issue.
    Edited by: user10567841 on Nov 8, 2008 12:21 PM

  • Informatica 9.1 with oracle 12c. Not able to connect to the PDB while instalation

    Dear Peers,
    I installed Oracle 12c on my laptop. A PDB was created by default, TNS entries were added for it, and I connected to the PDB successfully. In my PDB I created two users, and I am able to connect with them as well.
    While installing the 64-bit Informatica server, in the domain configuration repository part I am not able to connect with the users I created. I am not able to connect to either the CDB or the PDB, and I cannot tell what the error is. But from Oracle I am able to connect to my PDB with those users.
    Has anyone overcome this issue? Please help me.
    Thanks
    Raghavendra
    <mod. action: phone number removed>
    Message was edited by: Nicolas.Gasparotto

    If you can connect using sql*plus from the same machine you are using informatica to connect from then you have an informatica issue and should try posting in an informatica forum.
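    For reference, a minimal connectivity test along those lines might look like this (the host, port, service name and credentials below are placeholders, not values from this thread):
    sqlplus myuser/mypassword@//dbhost:1521/mypdb
    show con_name
    select sys_context('USERENV','SERVICE_NAME') from dual;
    If that works from the Informatica machine but the installer still fails, check how the installer builds its JDBC URL; note that a PDB can only be reached through a service name, not a SID.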

  • Oracle raise ORA-03113 when connect to a remote oracle server using toad

    Hi there,
    When I use the tool Toad to connect to a remote Oracle server located in a different city,
    and I submit a query in Toad,
    if the query returns many rows of data it raises the error ORA-03113: end-of-file on communication channel;
    however, if the query returns only a few rows, it does not raise such an error.
    However, when I use SQL*Plus to connect to that remote server, it does not raise such an error either.
    What could the reason be? Can anyone tell me how to tackle this problem when using Toad? Thanks.

    hi, my Oracle version is:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    my Oracle server is a 2-node RAC server,
    and I've tried two different versions of Toad, v9.6 and v10.5, and both have the same problem.
    Once I query a table, if the result returns more than 30 rows it raises that error; if the query returns fewer than 30 rows, it's OK.
    I assume it is a network-related problem, but I don't know why no such error is raised when using SQL*Plus.
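    A hedged next diagnostic step (an assumption, not something established in this thread) is to enable Oracle Net client tracing on the machine running Toad, reproduce the error, and look at the end of the generated trace file; the client-side sqlnet.ora entries for that would be along these lines:
    TRACE_LEVEL_CLIENT = 16
    TRACE_DIRECTORY_CLIENT = C:\temp\nettrace
    TRACE_TIMESTAMP_CLIENT = ON
    Comparing the sqlnet.ora and SDU settings used by Toad's Oracle client with the ones used by SQL*Plus may also show why only one of the two tools hits ORA-03113.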

  • ODAC with Oracle 10g and Visual Studio 2008

    Hello-
    Is there an ODAC version supporting Oracle 10g with tools for Visual Studio 2008? I've installed the ODAC 11.2 version for VS 2008; however, the Oracle data source connection does not show up for the Data Entity Model, only the usual SQL Server connections.
    Thank you in advance!

    Hello again-
    First off, thank you for the prompt reply. After poring over some of the postings, I'm gaining an understanding of when the ODAC and Entity Designer marriage came together, and your answer solidified it.
    My situation is that I've developed an OData service with VS2010, Oracle 11g and ODTwithODAC112030, and everything works fine. I then deploy the service to the company's Windows 2003 server using Oracle 10g. I also went ahead and installed ODTwithODAC112030 on the server and copied the tnsnames.ora file into the Oracle client directory associated with the ODAC installation. When I query the Oracle database's metadata in IE, e.g. http://localhost:8050/DataMgmt/$metadata, a listing is displayed as expected; however, when I query a particular database table listed in the metadata, an error is returned saying the request could not be processed. Any ideas?
    Thank you.

  • Oracle 11g compatibility with oracle 10g and 9i?

    Hi All
    I have some queries on 11g compatibility.
    Is the Oracle 11g client compatible with the Oracle 10g client which is already installed on the desktop?
    If yes, do any changes need to be made, and where?
    Is Oracle 11g compatible with Oracle 10g/9i on the same server where 10g/9i are installed?
    Regards

    Thanks Justin, that's right...
    The problem, elaborated, is as follows:
    We currently have an application which requires the Oracle 10g (10.2.0.3) client to connect to the database from user desktops. As part of new application development, the Oracle 11g client needs to be installed on all the user desktops.
    The complete application software, along with the Oracle 11g client, has to be rolled out on these desktops as part of the implementation of the new application, alongside the existing application (which is packaged with the Oracle 10g client).
    We would like to understand whether there is any software provided by Oracle with which we can switch between different Oracle versions while accessing the respective applications simultaneously. Also, are there any known bugs/issues in running the Oracle 10g client and the Oracle 11g client together on the same user desktops with different Oracle homes?
