An Oracle RDBMS In-Memory Mirror

Hi,
We're considering BDB as a cache for a number of mostly-read database tables. Our requirement is that we cannot drop those tables, because they are still used for statistical analysis and other data-mining needs. To achieve both goals, we want to leave the database as is while running a BDB instance on top of it to provide the fast-access path.
My question is: can I use BDB as my database and let it synchronize with the underlying relational database instead of the regular disk persistence?

Thanks.
So whenever a disk operation is involved (because you are running within a transaction, doing a sync, performing a restore, running recovery, or doing any sort of integrity check on the in-memory instance), you would have to step in, capture the diff data in its current format, convert it into schema changes, and commit it to the relational database. That does not look like a trivial process to me, does it?
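To make the discussion concrete, here is a minimal Python sketch of the write-through idea, assuming a dict-backed stand-in for both stores; the class names are hypothetical and this is not a BDB or Oracle API:

```python
# Minimal sketch of the write-through idea: every mutation of the in-memory
# cache is also pushed to a backing relational store. Purely illustrative.

class RelationalBackend:
    """Stand-in for the relational tables that must stay authoritative."""
    def __init__(self):
        self.rows = {}

    def commit(self, key, value):
        # In the real setup this would be a SQL INSERT/UPDATE plus COMMIT.
        self.rows[key] = value


class WriteThroughCache:
    """In-memory mirror that forwards every write to the backend, so the
    'disk persistence' step becomes a commit to the RDBMS instead."""
    def __init__(self, backend):
        self.backend = backend
        self.mem = {}

    def put(self, key, value):
        self.mem[key] = value             # fast in-memory update
        self.backend.commit(key, value)   # synchronous write-through

    def get(self, key):
        return self.mem.get(key)


backend = RelationalBackend()
cache = WriteThroughCache(backend)
cache.put("emp:1", "Alice")
assert cache.get("emp:1") == "Alice"
assert backend.rows["emp:1"] == "Alice"   # the backing store stayed in sync
```

The synchronous commit in `put` is exactly the non-trivial part discussed above: in a real system the diff capture and the conversion to SQL would sit between the two lines.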

Similar Messages

  • Re-Host Effort - Access DB to an Oracle RDBMS platform

    I am developing a strategy to migrate a stand-alone Access database to a web-based Oracle RDBMS platform. I was looking to see whether a standard methodology (or at least a set of common milestones) exists to accomplish the task. I'm aware that the effort will involve some customization, but I wanted to save some time in planning.

    Thanks.

  • How are SWAP space and Oracle's Shared Memory related?

    Platform: RHEL 5.4
    Oracle Version: 11.2
    I was trying to increase MEMORY_TARGET to 15g. Then I encountered the following error
    SQL> alter system set memory_max_target=20g scope=spfile;
    System altered.
    SQL> alter system set memory_target=15g scope=spfile;
    System altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    SQL>
    SQL>
    SQL> startup
    ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
    ORA-00845: MEMORY_TARGET not supported on this system
    SQL>
    SQL>
    SQL> select name from v$database;
    select name from v$database
    ERROR at line 1:
    ORA-01034: ORACLE not available
    Process ID: 0
    Session ID: 189 Serial number: 9
    From the post below,
    MEMORY_TARGET not supported on this system
    I gathered that in Linux, if you want to set MEMORY_TARGET or MEMORY_MAX_TARGET to n GB, then you should have a SWAP ( /dev/shm ) of n GB.
    My swap was only 16 GB and I was trying to set memory_max_target to 20g.
    $ df -h /dev/shm
    Filesystem            Size  Used Avail Use% Mounted on
    tmpfs                  16G  7.2G  8.6G  46% /dev/shm
    Now, I am wondering: how is Oracle's Shared Memory (SGA+PGA) related to SWAP space on a server? Shouldn't Oracle's Shared Memory be related to physical RAM rather than disk-based SWAP space?

    Related question:
    In the above-mentioned OTN article it says:
    You could encounter ORA-00845 if your shared memory is not mapped to /dev/shm
    I think he meant:
    You could encounter ORA-00845 if your SWAP space is not mapped to /dev/shm.
    Am I right ?
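    For illustration, the sizing rule behind this thread (MEMORY_TARGET must fit in the tmpfs mounted at /dev/shm, otherwise startup fails with ORA-00845) can be sketched as a trivial check. The function name is made up, and the real server-side check is done in bytes at instance startup:

```python
def memory_target_fits(memory_target_gb, dev_shm_gb):
    """Hypothetical helper: AMM's MEMORY_TARGET must be backed by the
    tmpfs mounted at /dev/shm, which is the condition behind ORA-00845.
    (Simplified: the real check is done in bytes at instance startup.)"""
    return memory_target_gb <= dev_shm_gb

# The numbers from the post: /dev/shm is 16G.
assert memory_target_fits(15, 16)        # 15g could be backed
assert not memory_target_fits(20, 16)    # 20g cannot, hence ORA-00845
```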

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load an Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, I cannot get decent performance (and especially not predictable performance; it looks like if the blocks of the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I hit a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can no longer pin the index in memory. I am not sure whether it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB to load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and so is the IOT, but putting them in the keep cache is not as important as the TOKEN_INFO column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
     XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the DR$ I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using statements and a procedure similar to:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it is loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
    end;
    /
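    As an aside, the access pattern of the procedure above (read a small chunk, then jump ahead one 4K block until the end of the LOB) can be sketched in plain Python; this is only an illustration of the loop logic, not a database API:

```python
def read_lob_in_chunks(data: bytes, chunk: int = 10, stride: int = 4096):
    """Mimic the PL/SQL loop above: read `chunk` bytes at offsets
    0, 4096, 8192, ... until the end of the data, counting the reads.
    Pure illustration of the access pattern, not a database API."""
    cntr = 0
    off = 0
    while off < len(data):
        _ = data[off:off + chunk]   # corresponds to dbms_lob.read
        cntr += 1
        off += stride               # jump one 4K block ahead
    return cntr

# A 10000-byte LOB is touched at offsets 0, 4096 and 8192, i.e. 3 reads.
assert read_lob_in_chunks(b"x" * 10000) == 3
```

Touching one small range per 4K block is enough to pull each block into the buffer pool, which is why the procedure does not need to read the LOB in full.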
    Rgds, Pierre

    I have been working a lot on this issue recently; I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool either and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queueing and dbms_scheduler jobs, where response time does not matter; all those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database, and performance is critical for them.
    What kind of performance do you have with your application ?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do: MongoDB explicitly says that the index must be in memory, and Elasticsearch uses JVMs that are also in memory. And indeed, if you look at the AWR report you will see that Oracle is continuously accessing the DR$I table, with a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is executed continuously.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that 12.1.0.2 (released last week) finally has a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy, but this was closed as "not a bug" and I can't do anything about it. It is a bug in my opinion, because both the CREATE INDEX command and ALTER INDEX REBUILD result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    For that, the track I have been following is to put the index in a 16K-blocksize tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer works.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means it bypasses the cache and reads directly from the files into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 avoids the direct path reads.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
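    Since these statements are repetitive, one set per partition table, they can be generated mechanically. A hypothetical Python helper (the table-naming convention DR#<index>NNNN$I simply follows the statements above; only the event, CACHE and full-scan statements are generated):

```python
def preload_statements(idx_name, n_partitions):
    """Hypothetical helper that generates the event setting plus the
    CACHE and full-scan statements, one pair per partition table
    DR#<idx>NNNN$I, following the naming used in the statements above."""
    stmts = ["alter session set events "
             "'10949 trace name context forever, level 1'"]
    for p in range(1, n_partitions + 1):
        tab = "DR#%s%04d$I" % (idx_name, p)
        stmts.append("alter table %s cache" % tab)
        stmts.append("SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT), "
                     "SUM(LENGTH(TOKEN_INFO)) FROM %s ITAB" % tab)
    return stmts

stmts = preload_statements("idxname", 3)
assert "alter table DR#idxname0002$I cache" in stmts
assert len(stmts) == 7   # 1 event setting + (cache + scan) per partition
```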
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in Metalink note 1645634.1, but for a non-partitioned index. The workaround given there seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying bug; maybe there is a workaround, but I did not find one on Metalink.
    Other points of attention with text index creation (things that surprised me at first):
    - if you use the dbms_pclxutil package, ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - in combination with the fact that on RAC you may see no activity on your own node, this can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works, and I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better, IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely solved if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Can I deploy ADF outside IAS & Oracle RDBMS ?

    Hi all,
    We are exploring JDev 10g to be our standard J2EE IDE.
    But I am wondering whether ADF can be deployed outside IAS (e.g. on Tomcat or JBoss), for production?
    Also, what if ADF runs against SQL Server 2000 as the backend?
    We need to be sure about this because we are not committed to Oracle RDBMS and IAS; we are just looking for the best J2EE IDE to speed up development.
    Thank you for any help,
    Krist

    Yes, both are possible.
    In the JDeveloper Tools menu, the ADF Runtime installer has options to install on JBoss, Tomcat and WebLogic with a single click.
    We also know of people using other application servers.
    ADF is also database independent - for example in the how-to section of JDeveloper on OTN you'll find instructions for using ADF BC with MySQL.
    Toplink (another ADF persistence option) can also work with multiple databases.

  • Oracle-rdbms-server-11gR2-preinstall withOUT Unbreakable ?

    In the Oracle Linux 5 (RHEL 5) world, we used to install the “oracle-validated” RPM to prep a box for the database software as well as the middleware. It sets some tunables, loads some dependencies and does other good housekeeping.
    In the Linux 6 world, it appears that we are now supposed to use the “oracle-rdbms-server-11gR2-preinstall” RPM, but this also installs the Unbreakable kernel. Is there any version of this RPM that will keep the RH-compatible kernel?
    see https://blogs.oracle.com/linux/entry/oracle_rdbms_server_11gr2_pre for other details.
    Thanks!
    -PH

    I think I just solved this issue... I was blindly installing the repos per the link (http://public-yum.oracle.com/). When I edited the
    /etc/yum.repos.d/public-yum-ol6.repo file and turned off ol6_UEK_latest (by setting enabled=0 in that stanza), ran yum clean all, and let it rip, it looks like I stayed with the Red Hat compatible kernel. Sorry for the long post, but see the bold at the beginning and end of the following...
    *[root@Steve-Test-rh6 yum.repos.d]# yum clean all*
    Loaded plugins: refresh-packagekit, rhnplugin
    Cleaning repos: ol6_latest rhel-x86_64-server-6
    Cleaning up Everything
    *[root@Steve-Test-rh6 yum.repos.d]# yum install oracle-rdbms-server-11gR2-preinstall*
    Loaded plugins: refresh-packagekit, rhnplugin
    ol6_latest | 1.4 kB 00:00
    ol6_latest/primary | 22 MB 00:23
    ol6_latest 17992/17992
    rhel-x86_64-server-6 | 1.8 kB 00:00
    rhel-x86_64-server-6/primary | 11 MB 00:03
    rhel-x86_64-server-6 8588/8588
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package oracle-rdbms-server-11gR2-preinstall.x86_64 0:1.0-6.el6 will be installed
    --> Processing Dependency: kernel-uek for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: gcc-c++ for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: gcc for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: libaio-devel for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: libstdc++-devel for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: glibc-devel for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: compat-libstdc++-33 for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: ksh for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Processing Dependency: compat-libcap1 for package: oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64
    --> Running transaction check
    ---> Package compat-libcap1.x86_64 0:1.10-1 will be installed
    ---> Package compat-libstdc++-33.x86_64 0:3.2.3-69.el6 will be installed
    ---> Package gcc.x86_64 0:4.4.6-4.el6 will be installed
    --> Processing Dependency: cpp = 4.4.6-4.el6 for package: gcc-4.4.6-4.el6.x86_64
    --> Processing Dependency: cloog-ppl >= 0.15 for package: gcc-4.4.6-4.el6.x86_64
    ---> Package gcc-c++.x86_64 0:4.4.6-4.el6 will be installed
    --> Processing Dependency: libmpfr.so.1()(64bit) for package: gcc-c++-4.4.6-4.el6.x86_64
    ---> Package glibc-devel.x86_64 0:2.12-1.80.el6_3.5 will be installed
    --> Processing Dependency: glibc-headers = 2.12-1.80.el6_3.5 for package: glibc-devel-2.12-1.80.el6_3.5.x86_64
    --> Processing Dependency: glibc-headers for package: glibc-devel-2.12-1.80.el6_3.5.x86_64
    ---> Package kernel-uek.x86_64 0:2.6.32-300.32.3.el6uek will be installed
    --> Processing Dependency: kernel-uek-firmware = 2.6.32-300.32.3.el6uek for package: kernel-uek-2.6.32-300.32.3.el6uek.x86_64
    ---> Package ksh.x86_64 0:20100621-16.el6 will be installed
    ---> Package libaio-devel.x86_64 0:0.3.107-10.el6 will be installed
    ---> Package libstdc++-devel.x86_64 0:4.4.6-4.el6 will be installed
    --> Running transaction check
    ---> Package cloog-ppl.x86_64 0:0.15.7-1.2.el6 will be installed
    --> Processing Dependency: libppl_c.so.2()(64bit) for package: cloog-ppl-0.15.7-1.2.el6.x86_64
    --> Processing Dependency: libppl.so.7()(64bit) for package: cloog-ppl-0.15.7-1.2.el6.x86_64
    ---> Package cpp.x86_64 0:4.4.6-4.el6 will be installed
    ---> Package glibc-headers.x86_64 0:2.12-1.80.el6_3.5 will be installed
    --> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers-2.12-1.80.el6_3.5.x86_64
    --> Processing Dependency: kernel-headers for package: glibc-headers-2.12-1.80.el6_3.5.x86_64
    ---> Package kernel-uek-firmware.noarch 0:2.6.32-300.32.3.el6uek will be installed
    ---> Package mpfr.x86_64 0:2.4.1-6.el6 will be installed
    --> Running transaction check
    ---> Package kernel-uek-headers.x86_64 0:2.6.32-300.32.3.el6uek will be installed
    ---> Package ppl.x86_64 0:0.10.2-11.el6 will be installed
    --> Finished Dependency Resolution
    Dependencies Resolved
    ================================================================================
    Package Arch Version Repository Size
    ================================================================================
    Installing:
    oracle-rdbms-server-11gR2-preinstall
    x86_64 1.0-6.el6 ol6_latest 15 k
    Installing for dependencies:
    cloog-ppl x86_64 0.15.7-1.2.el6 ol6_latest 93 k
    compat-libcap1 x86_64 1.10-1 ol6_latest 17 k
    compat-libstdc++-33 x86_64 3.2.3-69.el6 ol6_latest 183 k
    cpp x86_64 4.4.6-4.el6 ol6_latest 3.7 M
    gcc x86_64 4.4.6-4.el6 ol6_latest 10 M
    gcc-c++ x86_64 4.4.6-4.el6 ol6_latest 4.7 M
    glibc-devel x86_64 2.12-1.80.el6_3.5 ol6_latest 970 k
    glibc-headers x86_64 2.12-1.80.el6_3.5 ol6_latest 600 k
    kernel-uek                      x86_64 2.6.32-300.32.3.el6uek ol6_latest  21 M
    kernel-uek-firmware             noarch 2.6.32-300.32.3.el6uek ol6_latest 3.0 M
    kernel-uek-headers              x86_64 2.6.32-300.32.3.el6uek ol6_latest 714 k
    ksh x86_64 20100621-16.el6 ol6_latest 684 k
    libaio-devel x86_64 0.3.107-10.el6 ol6_latest 13 k
    libstdc++-devel x86_64 4.4.6-4.el6 ol6_latest 1.5 M
    mpfr x86_64 2.4.1-6.el6 ol6_latest 156 k
    ppl x86_64 0.10.2-11.el6 ol6_latest 1.3 M
    Transaction Summary
    ================================================================================
    Install 17 Package(s)
    Total download size: 49 M
    Installed size: 147 M
    Is this ok [y/N]: y
    Downloading Packages:
    (1/17): cloog-ppl-0.15.7-1.2.el6.x86_64.rpm | 93 kB 00:00
    (2/17): compat-libcap1-1.10-1.x86_64.rpm | 17 kB 00:00
    (3/17): compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm | 183 kB 00:00
    (4/17): cpp-4.4.6-4.el6.x86_64.rpm | 3.7 MB 00:03
    (5/17): gcc-4.4.6-4.el6.x86_64.rpm | 10 MB 00:11
    (6/17): gcc-c++-4.4.6-4.el6.x86_64.rpm | 4.7 MB 00:05
    (7/17): glibc-devel-2.12-1.80.el6_3.5.x86_64.rpm | 970 kB 00:01
    (8/17): glibc-headers-2.12-1.80.el6_3.5.x86_64.rpm | 600 kB 00:00
    (9/17): kernel-uek-2.6.32-300.32.3.el6uek.x86_64.rpm | 21 MB 00:21
    (10/17): kernel-uek-firmware-2.6.32-300.32.3.el6uek.noar | 3.0 MB 00:03
    (11/17): kernel-uek-headers-2.6.32-300.32.3.el6uek.x86_6 | 714 kB 00:00
    (12/17): ksh-20100621-16.el6.x86_64.rpm | 684 kB 00:00
    (13/17): libaio-devel-0.3.107-10.el6.x86_64.rpm | 13 kB 00:00
    (14/17): libstdc++-devel-4.4.6-4.el6.x86_64.rpm | 1.5 MB 00:01
    (15/17): mpfr-2.4.1-6.el6.x86_64.rpm | 156 kB 00:00
    (16/17): oracle-rdbms-server-11gR2-preinstall-1.0-6.el6. | 15 kB 00:00
    (17/17): ppl-0.10.2-11.el6.x86_64.rpm | 1.3 MB 00:01
    Total 901 kB/s | 49 MB 00:55
    warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
    Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    Importing GPG key 0xEC551F03:
    Userid: "Oracle OSS group (Open Source Software group) <[email protected]>"
    From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    Is this ok [y/N]: y
    Running rpm_check_debug
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
    Installing : mpfr-2.4.1-6.el6.x86_64 1/17
    Installing : libstdc++-devel-4.4.6-4.el6.x86_64 2/17
    Installing : cpp-4.4.6-4.el6.x86_64 3/17
    Installing : ppl-0.10.2-11.el6.x86_64 4/17
    Installing : cloog-ppl-0.15.7-1.2.el6.x86_64 5/17
    Installing : kernel-uek-headers-2.6.32-300.32.3.el6uek.x86_64 6/17
    Installing : glibc-headers-2.12-1.80.el6_3.5.x86_64 7/17
    Installing : glibc-devel-2.12-1.80.el6_3.5.x86_64 8/17
    Installing : gcc-4.4.6-4.el6.x86_64 9/17
    Installing : gcc-c++-4.4.6-4.el6.x86_64 10/17
    Installing : compat-libstdc++-33-3.2.3-69.el6.x86_64 11/17
    Installing : libaio-devel-0.3.107-10.el6.x86_64 12/17
    Installing : kernel-uek-firmware-2.6.32-300.32.3.el6uek.noarch 13/17
    Installing : kernel-uek-2.6.32-300.32.3.el6uek.x86_64 14/17
    Installing : ksh-20100621-16.el6.x86_64 15/17
    Installing : compat-libcap1-1.10-1.x86_64 16/17
    Installing : oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64 17/17
    Verifying : glibc-devel-2.12-1.80.el6_3.5.x86_64 1/17
    Verifying : compat-libcap1-1.10-1.x86_64 2/17
    Verifying : ksh-20100621-16.el6.x86_64 3/17
    Verifying : glibc-headers-2.12-1.80.el6_3.5.x86_64 4/17
    Verifying : gcc-4.4.6-4.el6.x86_64 5/17
    Verifying : kernel-uek-firmware-2.6.32-300.32.3.el6uek.noarch 6/17
    Verifying : libaio-devel-0.3.107-10.el6.x86_64 7/17
    Verifying : oracle-rdbms-server-11gR2-preinstall-1.0-6.el6.x86_64 8/17
    Verifying : gcc-c++-4.4.6-4.el6.x86_64 9/17
    Verifying : cloog-ppl-0.15.7-1.2.el6.x86_64 10/17
    Verifying : libstdc++-devel-4.4.6-4.el6.x86_64 11/17
    Verifying : compat-libstdc++-33-3.2.3-69.el6.x86_64 12/17
    Verifying : kernel-uek-headers-2.6.32-300.32.3.el6uek.x86_64 13/17
    Verifying : mpfr-2.4.1-6.el6.x86_64 14/17
    Verifying : cpp-4.4.6-4.el6.x86_64 15/17
    Verifying : ppl-0.10.2-11.el6.x86_64 16/17
    Verifying  : kernel-uek-2.6.32-300.32.3.el6uek.x86_64                   17/17
    Installed:
    oracle-rdbms-server-11gR2-preinstall.x86_64 0:1.0-6.el6
    Dependency Installed:
    cloog-ppl.x86_64 0:0.15.7-1.2.el6
    compat-libcap1.x86_64 0:1.10-1
    compat-libstdc++-33.x86_64 0:3.2.3-69.el6
    cpp.x86_64 0:4.4.6-4.el6
    gcc.x86_64 0:4.4.6-4.el6
    gcc-c++.x86_64 0:4.4.6-4.el6
    glibc-devel.x86_64 0:2.12-1.80.el6_3.5
    glibc-headers.x86_64 0:2.12-1.80.el6_3.5
    kernel-uek.x86_64 0:2.6.32-300.32.3.el6uek
    kernel-uek-firmware.noarch 0:2.6.32-300.32.3.el6uek
    kernel-uek-headers.x86_64 0:2.6.32-300.32.3.el6uek
    ksh.x86_64 0:20100621-16.el6
    libaio-devel.x86_64 0:0.3.107-10.el6
    libstdc++-devel.x86_64 0:4.4.6-4.el6
    mpfr.x86_64 0:2.4.1-6.el6
    ppl.x86_64 0:0.10.2-11.el6
    Complete!
    *[root@Steve-Test-rh6 yum.repos.d]# cat /etc/redhat-release*
    Red Hat Enterprise Linux Server release 6.3 (Santiago)
    *[root@Steve-Test-rh6 yum.repos.d]# lsb_release -d*
    Description:    Red Hat Enterprise Linux Server release 6.3 (Santiago)
    *[root@Steve-Test-rh6 yum.repos.d]#*
    *[root@Steve-Test-rh6 yum.repos.d]# uname -r*
    *2.6.32-279.11.1.el6.x86_64*
    [root@Steve-Test-rh6 yum.repos.d]#
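    For what it's worth, the manual edit to public-yum-ol6.repo described above (setting enabled=0 in the ol6_UEK_latest stanza) could also be scripted; here is a sketch with Python's configparser against a stand-in repo file (the real file has more stanzas, but the edit is the same):

```python
import configparser
import io

# Minimal stand-in for /etc/yum.repos.d/public-yum-ol6.repo.
repo_text = """[ol6_latest]
name=Oracle Linux 6 Latest
enabled=1

[ol6_UEK_latest]
name=Latest Unbreakable Enterprise Kernel
enabled=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(repo_text)
cfg["ol6_UEK_latest"]["enabled"] = "0"   # same effect as the manual edit

buf = io.StringIO()
cfg.write(buf)
out = buf.getvalue()
assert "enabled = 0" in out.split("[ol6_UEK_latest]")[1]   # UEK repo disabled
```

Note that configparser normalizes `enabled=1` to `enabled = 1` on write; yum accepts both forms.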

  • Oracle RDBMS Screen is missing.

    Dear Experts,
    Our problem is that while installing ERP 6.0, SAPinst asks for the Oracle Client instead of the Oracle RDBMS.
    The Oracle RDBMS screen is missing.
    So it does not create the database or the subsequent folder under /oracle/stage/102_64.
    We have downloaded the file RDBMS_SAP_64.zip and unzipped it as well, but to no avail; the documentation says "SAPinst extracts the Oracle RDBMS software to the staging area", yet in our case SAPinst asks for the Oracle Client.
    OS=HP-UX 11.31 PA-RISC
    Database=Oracle 10.2
    ERP6.0
    Please help to resolve the Issue.
    Warm Regards,
    Ajit
    +91 9818999536

    Hello Ajit,
    sorry, I misunderstood you before.
    > Then Sapinst automatically extracted the oracle RDBMS installation files to the folder /oracle/stage/102_64/database.From there we executed ./RUNINSTLLER and successfully installed.
    Yes, that is true, and it is also the way it is described in the installation guide. AFAIK you can set a flag in the SAPinst dialog for whether you want to extract the files or not. Also, the dialog for the RDBMS installation DVDs only comes up during the database instance installation. Maybe you selected the wrong installation path in SAPinst?
    > When it prompts for Database Installation we need to extract Oracle RDBMS to the folder /oracle/stage/102_64/ and download the file RDBMS_SAP_64.zip attached to SAP Note 819830. Am I right?
    Yes, you can also do that manually. You also need the zip file from SAP Note 819830.
    This is described in SAP Note 972263 (Inst. NW 7.0 (2004s) SR2 / Business Suite 2005 SR2 - UNIX/Oracle).
    Updating SAP-Specific Files in the Oracle stage area
    Use: SAPinst extracts the Oracle RDBMS software to the staging area, usually /oracle/stage/102_64/database. The "SAP" folder located in /oracle/stage/102_64/database contains SAP-specific scripts as well as the response files. Before starting the Oracle software installation, you need to update this SAP folder so that the newest versions of the scripts and response files are used.
    Procedure
          1. Rename the original "SAP" folder by performing one of the following:
    mv /oracle/stage/102_64/database/SAP
          /oracle/stage/102_64/database/SAP_ORIG
    mv /oracle/stage/102_64/database/Disk1/SAP
          /oracle/stage/102_64/database/Disk1/SAP_ORIG
          2. Download the file RDBMS_SAP_32.zip (for 32-bit platforms) or RDBMS_SAP_64.zip (for 64-bit platforms) attached to SAP Note 819830 and copy it to a temporary location such as /tmp
          3. Extract the zip file by performing one of the following:
    cd /oracle/stage/102_64/database
          unzip /tmp/RDBMS_SAP.zip
    cd /oracle/stage/102_64/database/Disk1
          unzip /tmp/RDBMS_SAP.zip
    You should now see the directory "SAP" extracted with the updated
    version of SAP-specific files.
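    The three manual steps above can be sketched as a small script. This is a hypothetical helper, not an SAP-provided tool; the paths and zip name follow the note's examples and may differ per system.

```python
import os
import shutil
import zipfile

def refresh_sap_folder(database_dir, zip_path):
    """Replace the SAP folder in the Oracle staging area with the
    contents of the RDBMS_SAP zip file from SAP Note 819830."""
    sap_dir = os.path.join(database_dir, "SAP")
    backup_dir = os.path.join(database_dir, "SAP_ORIG")
    # Step 1: keep the original folder as SAP_ORIG
    if os.path.isdir(sap_dir):
        shutil.move(sap_dir, backup_dir)
    # Steps 2-3: extract the downloaded zip into the database directory,
    # which recreates an updated SAP folder
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(database_dir)
    return os.path.isdir(sap_dir)
```

    For example, refresh_sap_folder("/oracle/stage/102_64/database", "/tmp/RDBMS_SAP.zip") performs the rename and the unzip in one go.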
    Regards
    Stefan

  • Oracle consumes 100% memory on Solaris 10

    Hi,
    Our database (Oracle 10g R2) is running on Solaris. When I use the Unix command prstat -a, it shows 100% memory utilized. Below are the details.
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    12934 oracle 2573M 2559M sleep 59 0 0:00:00 0.1% oracle/1
    5914 sirsi 4912K 4664K sleep 59 0 0:00:29 0.0% prstat/1
    12937 oracle 4896K 4592K cpu3 49 0 0:00:00 0.0% prstat/1
    833 oracle 2572M 2558M sleep 59 0 0:01:05 0.0% oracle/1
    114 root 7464K 6632K sleep 59 0 0:01:20 0.0% picld/12
    829 oracle 2573M 2559M sleep 59 0 0:01:04 0.0% oracle/1
    823 oracle 2574M 2560M sleep 59 0 0:00:46 0.0% oracle/11
    811 oracle 2573M 2559M sleep 59 0 0:00:43 0.0% oracle/1
    146 root 2288K 1312K sleep 59 0 0:00:22 0.0% in.mpathd/1
    831 oracle 2576M 2562M sleep 59 0 0:00:24 0.0% oracle/1
    639 root 3664K 2392K sleep 59 0 0:00:00 0.0% snmpXdmid/2
    700 nobody 7520K 3752K sleep 59 0 0:00:00 0.0% httpd/1
    701 nobody 7520K 3752K sleep 59 0 0:00:00 0.0% httpd/1
    637 root 3080K 2048K sleep 59 0 0:00:00 0.0% dmispd/1
    472 root 5232K 2320K sleep 59 0 0:00:00 0.0% dtlogin/1
    720 root 2912K 2400K sleep 59 0 0:00:01 0.0% vold/5
    629 root 2376K 1664K sleep 59 0 0:00:00 0.0% snmpdx/1
    702 nobody 7520K 3736K sleep 59 0 0:00:00 0.0% httpd/1
    378 root 3928K 1784K sleep 59 0 0:00:00 0.0% sshd/1
    699 nobody 7520K 3704K sleep 59 0 0:00:00 0.0% httpd/1
    697 root 9384K 6520K sleep 59 0 0:00:01 0.0% snmpd/1
    695 root 7360K 5376K sleep 59 0 0:00:04 0.0% httpd/1
    375 root 12M 8088K sleep 59 0 0:00:01 0.0% fmd/15
    354 root 3728K 2040K sleep 59 0 0:00:00 0.0% syslogd/13
    415 root 2016K 1440K sleep 59 0 0:00:00 0.0% smcboot/1
    416 root 2008K 1016K sleep 59 0 0:00:00 0.0% smcboot/1
    338 root 4736K 1296K sleep 59 0 0:00:00 0.0% automountd/2
    340 root 5080K 2384K sleep 59 0 0:00:00 0.0% automountd/3
    263 daemon 2384K 1760K sleep 60 -20 0:00:00 0.0% lockd/2
    256 root 1280K 936K sleep 59 0 0:00:00 0.0% utmpd/1
    395 root 7592K 2560K sleep 59 0 0:00:02 0.0% sendmail/1
    273 root 2232K 1496K sleep 59 0 0:00:00 0.0% ttymon/1
    254 root 2072K 1224K sleep 59 0 0:00:00 0.0% sf880drd/1
    417 root 2008K 1016K sleep 59 0 0:00:00 0.0% smcboot/1
    272 root 5152K 4016K sleep 59 0 0:00:02 0.0% inetd/4
    206 root 1232K 536K sleep 59 0 0:00:00 0.0% efdaemon/1
    394 smmsp 7568K 1904K sleep 59 0 0:00:00 0.0% sendmail/1
    128 root 2904K 2056K sleep 59 0 0:00:00 0.0% devfsadm/6
    241 daemon 2640K 1528K sleep 59 0 0:00:00 0.0% rpcbind/1
    245 daemon 2672K 1992K sleep 59 0 0:00:00 0.0% statd/1
    251 root 2000K 1248K sleep 59 0 0:00:00 0.0% sac/1
    123 root 3992K 3008K sleep 59 0 0:00:07 0.0% nscd/26
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    24 oracle 48G 48G 100% 0:04:48 0.1%
    10 sirsi 1101M 35M 0.1% 0:00:32 0.0%
    37 root 148M 97M 0.2% 0:02:18 0.0%
    10 nobody 73M 36M 0.1% 0:00:00 0.0%
    1 smmsp 7568K 1904K 0.0% 0:00:00 0.0%
    4 daemon 12M 7920K 0.0% 0:00:00 0.0%
    Total: 86 processes, 260 lwps, load averages: 0.02, 0.02, 0.02
    Can anyone suggest why Oracle appears to consume 100% of memory, and how we can resolve this?
    Regards,
    Sabdar Syed.

    Many Unix tools add the SGA size to the memory of each dedicated server process, because under Unix each dedicated server attaches the SGA shared memory segment to its own process address space; these Unix tools are therefore not very reliable for Oracle.
    To check Oracle's memory usage, it is generally recommended to use the V$ views such as V$SGASTAT and V$PGASTAT instead.
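    The double counting is easy to see with a back-of-the-envelope sketch. The process count and sizes below are illustrative, loosely modeled on the prstat output above, not measured values.

```python
def naive_prstat_total(private_rss_mb, sga_mb):
    """What tools like prstat effectively report: each dedicated server
    process shows its private memory PLUS the whole attached SGA."""
    return sum(rss + sga_mb for rss in private_rss_mb)

def actual_footprint(private_rss_mb, sga_mb):
    """Real usage: the SGA shared memory segment exists only once."""
    return sum(private_rss_mb) + sga_mb

# Hypothetical numbers: 24 server processes with ~13 MB private memory
# each, all attached to a 2.5 GB SGA. Per-process accounting reports
# roughly 60 GB "used" while the machine really uses under 3 GB.
procs = [13] * 24
sga = 2560
```

    This is why summing the RSS column for the oracle user can exceed the physical RAM in the machine without anything being wrong.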

  • Separate User for Oracle RDBMS and EM Agent...

    Good Day All,
    Has anyone here deployed the EM Agent (10.2 or 11.1) as its own user? For example, leave the Oracle RDBMS binaries owned by "oracle" but install the EM Agent as "oagent."
    I inquired with Oracle Support and received the answer "it is possible and done in some environments", but I want to know how many folks have done it, how large a target base you are monitoring, and what target types you have tried. We are using a mixed RHEL & OEL environment.
    Our goal is to separate the installation and support of the EM Agents from the RDBMS by team. I think we would want a shared Linux group... there might be some issues with the inventory being written to by separate owners, but we should be able to get around that.
    Thoughts?
    TIA!
    Regards,
    Rich

    Thanks for the response!
    Besides the central inventory, do you know of any other issues we might have with this configuration? Have you implemented this to monitor targets such as Linux hosts, listeners, RDBMS, WebLogic, iAS, PeopleSoft, Siebel, etc.?
    Regards,
    Rich

  • Oracle TimesTen In-Memory Database Risk Matrix

    Hi,
    From the following website I can see two vulnerabilities listed against TimesTen: CVE-2010-0873 and CVE-2010-0910.
    http://www.oracle.com/technetwork/topics/security/cpujul2010-155308.html
    ================================================================
    Oracle TimesTen In-Memory Database Risk Matrix
    CVE-2010-0873: Data Server component, TCP protocol, no package or privilege required, remotely exploitable without authentication. CVSS 2.0 base score 10.0 (Access Vector: Network, Access Complexity: Low, Authentication: None; Confidentiality/Integrity/Availability impact: Complete/Complete/Complete). Last affected patch set: 7.0.6.0. See Note 1.
    CVE-2010-0910: Data Server component, TCP protocol, no package or privilege required, remotely exploitable without authentication. CVSS 2.0 base score 5.0 (Access Vector: Network, Access Complexity: Low, Authentication: None; Confidentiality/Integrity/Availability impact: None/None/Partial+). Last affected patch sets: 7.0.6.0, 11.2.1.4.1. See Note 1.
    ===========================================================================
    Please let me know if I need to take any action on my current TimesTen deployment.
    I'm using TimesTen Releases 11.2.1.8.4 and 7.0.5.16.0 at our customer sites.
    Your comments would be much appreciated.
    Regards
    Pratheej

    Hi Pratheej,
    These vulnerabilities were fixed in 11.2.1.6.1 and 7.0.6.2.0. As you are on 11.2.1.8.4 you are okay for 11.2.1, but the 7.0.5.16.0 release does contain the vulnerability. If you are concerned, you should upgrade those sites to 7.0.6.2.0 or later (check My Oracle Support for the latest applicable 7.0 release).
    Chris
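    A hedged sketch of the version check implied here: it assumes plain numeric dotted release strings and is only meaningful within the same release line (compare 11.2.1 installs against the 11.2.1 fix, 7.0 installs against the 7.0 fix).

```python
def parse_version(release):
    """Turn a dotted release string like '11.2.1.8.4' into a tuple
    of ints so releases compare component by component."""
    return tuple(int(part) for part in release.split("."))

def contains_fix(installed, fixed_in):
    """True if the installed release is at or beyond the release in
    which the vulnerability was fixed (same release line assumed)."""
    return parse_version(installed) >= parse_version(fixed_in)
```

    With the releases from this thread: contains_fix("11.2.1.8.4", "11.2.1.6.1") is True, while contains_fix("7.0.5.16.0", "7.0.6.2.0") is False, matching Chris's assessment.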

  • Oracle TimesTen In-Memory Database VS Oracle In-Memory Database Cache

    Hi,
    What is the difference between Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache?
    On a 32-bit Windows OS I am not able to insert more than 500K rows with 150 columns (combinations of CHAR, BINARY_DOUBLE, BINARY_FLOAT, TT_BIGINT, REAL, DECIMAL, NUMERIC, etc.):
    [TimesTen][TimesTen 11.2.2.2.0 ODBC Driver][TimesTen]TT0802: Database permanent space exhausted -- file "blk.c", lineno 3450, procedure "sbBlkAlloc"
    I have set PermSize to 700 MB and TempSize to 100 MB.
    What are the maximum values of PermSize, TempSize, and LogBufMB on 32-bit Windows?
    What are the maximum values of PermSize, TempSize, and LogBufMB on 64-bit Windows?
    What is the maximum TimesTen configuration for PermSize and TempSize that I can set on 32-bit?
    Thanks!

    They are the same product, but they are licensed differently and the license limits what functionality you can use.
    TimesTen In-Memory Database is a product in its own right; it allows you to use TimesTen as a standalone database and also allows replication.
    IMDB Cache is an Oracle Database Enterprise Edition option (i.e. it can only be licensed as an option on an Oracle DB EE license). It includes all the functionality of TimesTen In-Memory Database but adds cache functionality (cache groups, cache grid, etc.).
    32-bit operating systems are in general a poor platform for an in-memory database of any significant size (they are very limited in memory addressing capability), and 32-bit Windows is the worst example. The hard-coded limit for total datastore size on a 32-bit O/S is 2 GB, but in reality you probably can't achieve that. On Windows the largest you can get is about 1.1 GB, and most often less. If you need more than about 0.5 GB on Windows then you really need 64-bit Windows and 64-bit TimesTen. There is no hard-coded upper limit on database size in 64-bit TimesTen; the limit is the amount of free physical memory (not virtual memory) in the machine. I have easily created a 12 GB database on a Win64 machine with 16 GB RAM, and on 64-bit Unix machines we have live databases of over 1 TB.
    Chris
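    As a rough planning aid, the ceilings mentioned above can be encoded like this. The numbers paraphrase the reply; the 20 MB fixed overhead and the fits() helper are assumptions for illustration, and real achievable sizes vary per machine.

```python
# Approximate total-datastore ceilings from the discussion above; these
# are not hard guarantees (64-bit is bounded by free physical RAM
# rather than a fixed cap, so it is recorded as None here).
PLATFORM_CAP_MB = {
    "win32": 1100,    # ~1.1 GB at best on 32-bit Windows
    "unix32": 2048,   # 2 GB hard-coded limit on 32-bit O/S
    "win64": None,    # no hard-coded limit
}

def fits(platform, perm_mb, temp_mb, log_buf_mb, overhead_mb=20):
    """Rough check whether a PermSize/TempSize/LogBufMB combination
    can fit inside the platform's total datastore ceiling."""
    cap = PLATFORM_CAP_MB[platform]
    if cap is None:
        return True  # limited by free physical memory instead
    return perm_mb + temp_mb + log_buf_mb + overhead_mb <= cap
```

    With the poster's settings and an assumed 64 MB LogBufMB, fits("win32", 700, 100, 64) comes out True, so the TT0802 error points at the data volume exceeding the configured PermSize rather than at the platform ceiling.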

  • @/vobs/oracle/rdbms/admin/catproc.sql  error message

    After setting up a 9i DB manually, I ran this script and all went well apart from a few errors. I am wondering whether these errors are ignorable...
    @/vobs/oracle/rdbms/admin/catproc.sql
    Grant succeeded.
    drop package body sys.diana
    ERROR at line 1:
    ORA-04043: object DIANA does not exist
    drop package sys.diana
    ERROR at line 1:
    ORA-04043: object DIANA does not exist
    Package created.
    Package body created.
    drop package body sys.diutil
    ERROR at line 1:
    ORA-04043: object DIUTIL does not exist
    drop package sys.diutil
    ERROR at line 1:
    ORA-04043: object DIUTIL does not exist
    Package created.
    ERROR at line 1:
    ORA-04043: object PSTUBT does not exist
    Procedure created.
    Grant succeeded.
    drop procedure sys.pstub
    ERROR at line 1:
    ERROR at line 1:
    ORA-04043: object SUBPTXT2 does not exist
    Procedure created.
    drop procedure sys.subptxt
    ERROR at line 1:
    ORA-04043: object SUBPTXT does not exist
    ERROR at line 1:
    ORA-04043: object DBMS_XPLAN_TYPE_TABLE does not exist
    drop type dbms_xplan_type
    ERROR at line 1:
    ORA-04043: object DBMS_XPLAN_TYPE does not exist
    Type created.
    ERROR at line 1:
    ORA-00942: table or view does not exist
    DROP TABLE ODCI_WARNINGS$
    ERROR at line 1:
    ORA-00942: table or view does not exist
    Type created.
    Table truncated.
    drop sequence dbms_lock_id
    ERROR at line 1:
    ORA-02289: sequence does not exist
    Sequence created.
    drop table SYSTEM.AQ$_Internet_Agent_Privs
    ERROR at line 1:
    ORA-00942: table or view does not exist
    drop table SYSTEM.AQ$_Internet_Agents
    ERROR at line 1:
    ORA-00942: table or view does not exist
    Table created.
    DROP SYNONYM def$_tran
    ERROR at line 1:
    ORA-01434: private synonym to be dropped does not exist
    DROP SYNONYM def$_call
    ERROR at line 1:
    ORA-01434: private synonym to be dropped does not exist
    DROP SYNONYM def$_defaultdest
    ERROR at line 1:
    DROP TYPE SYS.ExplainMVMessage FORCE
    ERROR at line 1:
    ORA-04043: object EXPLAINMVMESSAGE does not exist
    Type created.
    drop view sys.transport_set_violations
    ERROR at line 1:
    ORA-00942: table or view does not exist
    PL/SQL procedure successfully completed.
    drop table sys.transts_error$
    ERROR at line 1:
    ORA-00942: table or view does not exist
    drop operator XMLSequence
    ERROR at line 1:
    ORA-29807: specified operator does not exist
    drop function XMLSequenceFromXMLType
    ERROR at line 1:
    ORA-04043: object XMLSEQUENCEFROMXMLTYPE does not exist
    drop function XMLSequenceFromRefCursor
    ERROR at line 1:
    drop function XMLSequenceFromRefCursor2
    ERROR at line 1:
    ORA-04043: object XMLSEQUENCEFROMREFCURSOR2 does not exist
    drop type XMLSeq_Imp_t
    ERROR at line 1:
    ORA-04043: object XMLSEQ_IMP_T does not exist
    drop type XMLSeqCur_Imp_t
    ERROR at line 1:
    ORA-04043: object KU$_IND_PART_T does not exist
    drop type ku$_ind_part_list_t force
    ERROR at line 1:
    ORA-04043: object KU$_IND_PART_LIST_T does not exist
    drop type ku$_piot_part_t force
    ERROR at line 1:
    Grant succeeded.
    Synonym created.
    Grant succeeded.
    Package altered.
    Package altered.
    PL/SQL procedure successfully completed.

    These errors are ignorable; Oracle is just trying to drop the objects before recreating them. If this is the first time you have run catproc.sql, the errors are expected.
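    One way to sift a catproc.sql spool for anything that is not one of these expected drop-before-create errors. The code list below is taken from the log above; it is a sketch, not an official list of ignorable errors.

```python
# ORA- errors that catproc.sql is expected to raise on a first run:
# they come from DROP statements issued before the objects exist.
IGNORABLE = {"ORA-04043", "ORA-00942", "ORA-02289", "ORA-01434", "ORA-29807"}

def unexpected_errors(log_lines):
    """Return only the ORA- errors that are NOT the harmless
    drop-before-create errors listed above."""
    errors = []
    for line in log_lines:
        if line.startswith("ORA-"):
            code = line.split(":")[0]
            if code not in IGNORABLE:
                errors.append(line)
    return errors
```

    Feeding the spool file through this filter leaves only errors that actually need investigation.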

  • (V7.2) Q&A on the ORACLE RDBMS

    Product: ORACLE SERVER
    Date written: 1998-01-20
    (V7.2) Q&A on the ORACLE RDBMS
    =============================
    1. Q) I want to give several users the same privileges on a particular program without repeating the GRANT command. How can I do this?
    A) The easiest way is to create a role and assign the group of new users to it.
    The example below grants read access on the emp table owned by scott.
    SQL> CONNECT system/manager
    Connected.
    SQL> CREATE ROLE empread;
    Role created.
    SQL> CONNECT scott/tiger
    Connected.
    SQL> GRANT SELECT ON emp TO empread;
    Grant succeeded.
    SQL> CONNECT system/manager
    Connected.
    SQL> GRANT empread TO aa;
    Grant succeeded.
    The example above shows a DBA with system privileges creating the role. It is also possible to create and manage the role as the scott user:
    SQL> CONNECT scott/tiger
    Connected.
    SQL> CREATE ROLE empread;
    Role created.
    SQL> GRANT SELECT ON emp TO empread;
    Grant succeeded.
    SQL> GRANT empread TO aa;
    Grant succeeded.
    2. Q) When I try to insert a record into a table, I get the ORA-1653 error "unable to extend table table_name in tablespace tablespace_name". What is the problem?
    A) This error occurs when there is not enough space in the tablespace to allocate a new extent.
    To resolve it, add a new datafile to the tablespace:
    SQL> ALTER TABLESPACE tablespace_name ADD DATAFILE datafile SIZE size;
    When running this statement, specify the full physical path of the datafile.
    Alternatively, run 'ALTER DATABASE DATAFILE name AUTOEXTEND ON' so that space is allocated dynamically.
    3. Q) As transactions in the database increased, I created a new rollback segment with the 'CREATE ROLLBACK SEGMENT' command, but now every DML statement runs into problems. What is wrong?
    A) Check whether the newly created rollback segment is ONLINE:
    SQL> CONNECT system/manager
    Connected.
    SQL> SELECT SEGMENT_NAME, STATUS FROM DBA_ROLLBACK_SEGS;
    SEGMENT_NAME STATUS
    SYSTEM ONLINE
    R01 ONLINE
    R02 ONLINE
    R03 ONLINE
    R04 OFFLINE
    To bring the OFFLINE segment ONLINE, use:
    SQL> ALTER ROLLBACK SEGMENT r04 ONLINE;
    Rollback segment altered.
    If you want r04 to remain ONLINE, open the init.ora file and add r04 to the ROLLBACK_SEGMENTS parameter:
    > ROLLBACK_SEGMENTS = (r01, r02, r03, r04)
    4. Q) I have created new users. How do I keep their objects and temporary segments in fixed tablespaces?
    A) Use the 'ALTER USER' statement:
    SQL> ALTER USER user_name DEFAULT TABLESPACE users
    TEMPORARY TABLESPACE temp;
    To see the currently assigned tablespaces, query the DBA_USERS view:
    SQL> SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
    FROM DBA_USERS;
    [USERNAME] [DEFAULT_TABLESPACE] [TEMPORARY_TABLESPACE]
    SYS SYSTEM TEMP
    SYSTEM TOOLS TEMP
    WWW_DBA SYSTEM SYSTEM
    SCOTT USERS TEMP
    5. Q) I created a new user and assigned default and temporary tablespaces, but the user cannot log on to the database. What is the problem?
    A) You probably got an ORA-1045 error, which means the user lacks the CREATE SESSION privilege. Grant it to the user with the GRANT command:
    SQL> GRANT CONNECT TO newuser;
    Grant succeeded.
    6. Q) I ran an update against the Oracle database and it is hanging. What is the problem?
    A) You probably tried to update a record that another transaction has locked. That transaction must be ended with COMMIT or ROLLBACK before the new one can proceed.
    This situation typically occurs when multiple sessions are opened from different windows using the same Oracle user, or when explicit locks are taken with commands such as LOCK or SELECT FOR UPDATE.
    7. Q) In the alert log I see the message 'Thread 1 cannot allocate new log sequence number' several times. How do I fix this?
    A) The database is probably waiting for a redo log group to become available.
    Add one or more redo log groups with the 'ALTER DATABASE ADD LOGFILE' command.
    8. Q) Fifty users are going to access the database, and we are not using the Multi-Threaded Server option. What do we need to set?
    A) Set the PROCESSES parameter in the init.ora file appropriately.
    This parameter specifies the maximum number of operating system user processes that can connect to the database simultaneously.
    When calculating the number of processes, remember to add the background processes as well.
    9. Q) Access to a table with a parallelism degree of 8 fails. What is the problem?
    A) The PARALLEL_MAX_SERVERS parameter in the init.ora file must be set appropriately.
    10. Q) After writing new PL/SQL programs, I want to increase the shared pool size before running them. How?
    A) Increase the SHARED_POOL_SIZE parameter in the init.ora file.
    This parameter specifies the size of the shared pool in bytes.
    11. Q) While adding a new datafile I hit the MAXDATAFILES limit. How can I change this parameter?
    A) MAXDATAFILES is not an init.ora parameter.
    All MAX parameters are fixed when the database is created. To see how they were set at creation time, run:
    SVRMGR> alter database backup controlfile to trace;
    The resulting trace is a SQL script containing, among others, these database statements:
    > CREATE CONTROLFILE REUSE DATABASE "733" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 100
    LOGFILE
    GROUP 1 '/home/orahome/data/733/redo01.log' SIZE 500K,
    GROUP 2 '/home/orahome/data/733/redo02.log' SIZE 500K,
    GROUP 3 '/home/orahome/data/733/redo03.log' SIZE 500K
    DATAFILE
    '/home/orahome/data/733/system01.dbf' SIZE 500K,
    '/home/orahome/data/733/rbs01.dbf' SIZE 500K,
    '/home/orahome/data/733/tools01.dbf' SIZE 500K,
    '/home/orahome/data/733/users01.dbf' SIZE 500K,
    '/home/orahome/data/733/test1.dbf' SIZE 500K,
    '/home/orahome/data/733/temp.dbf' SIZE 500K;
    This file shows the MAX parameter values. To change them, you must either re-create the database or re-create the control file; of the two, use the second method. The CREATE CONTROLFILE command creates a new control file with newly specified MAX parameter values.
    12. Q) To run the database in archivelog mode, I shut it down, mounted it, and set it to archivelog mode. But when I connected, the database hung. Why did this happen, and how do I resolve it?
    A) Running the database in archivelog mode requires two steps:
    SVRMGR> alter database archivelog;
    SVRMGR> log archive start;
    You probably did the first step but not the second, which is why the database hung: Oracle is not archiving automatically and is waiting for you to archive manually. Until archiving is done, no new redo log file can be created.
    Because this second step would otherwise be needed every time the database is opened, set the LOG_ARCHIVE_START parameter in the init.ora file to true to enable automatic archiving.
    13. Q) What should I watch out for when using the 'ALTER DATABASE CREATE DATAFILE' command?
    A) After creating the new datafile, it must be reflected in the control file.
    If you are using a backup control file, this means the control file must be backed up again after the new datafile is added.
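    The role pattern from Q1 scales to any number of users. A small generator sketch using the identifiers from the example (empread, emp, aa); this is an illustration of the pattern, not an Oracle-supplied utility.

```python
def role_grant_script(role, table, users):
    """Generate the GRANT sequence from Q1: create a role, grant
    SELECT on the table to it once, then grant the role to each user."""
    stmts = [f"CREATE ROLE {role};", f"GRANT SELECT ON {table} TO {role};"]
    stmts += [f"GRANT {role} TO {user};" for user in users]
    return stmts
```

    Adding a new reader later is then a single GRANT of the role, instead of repeating the object grants per user.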


  • No oracle rdbms file found during PI 7.1 EHP1 installation

    Dear all,
    I am currently installing SAP EHP1 for SAP NetWeaver Process Integration 7.1.
    I have filled out all the preselection screens.
    I have started the installation (entered the key generated from SAP Solution Manager).
    When I reach step 8 - Install database server software, I get this message box:
    "no oracle rdbms file found".
    Then, whether I click "OK" or "Cancel", I get the following message box:
    "SAPinst now stops the installation" ....
    At this point I normally have to start the Oracle installation.
    I go to /oracle/stage/112_64/database/SAP to start ./RUNINSTALLER_CHECK.
    /oracle/stage/112_64 exists, but /database/SAP does not exist!
    Any idea is very welcome
    Best regards
    SAP NetWeaverAdmin
    Edited by: SAP NetWeaverAdmin on Nov 30, 2011 12:29 PM
    Edited by: SAP NetWeaverAdmin on Nov 30, 2011 12:31 PM

    Hello dear SAP NW admin,
    I have the same problem while installing systems with the latest Software Provisioning Manager 1.0 SP3.
    Have you solved your problem by just downloading the latest Installation Master, or have you also updated the RDBMS DVD?
    Kind regards
    Roland

  • Where can I find the Oracle JDBC driver for Oracle RDBMS 8.1.7? Thanks.

    Where can I find the Oracle JDBC driver for Oracle RDBMS 8.1.7? Thanks.

    http://otn.oracle.com/software/tech/java/sqlj_jdbc/content.html
