Data warehouse with Oracle or distributed computing

Hi All,
When I talked with people at some big companies about DW (ETL especially), all of them said they prefer distributed computing with Hadoop or Hive much more than Oracle.
When they have huge data volumes to process per day (say n TB), Oracle or a relational database can't work very well.
I have just one DW project experience, which was implemented purely with PL/SQL and shell and works well, at least from my point of view.
What's your opinion on this?
Thank you very much,
Leon

Hi Leon,
have a look at this page (it contains links to two publications with results comparing Hadoop against two RDBMSs):
http://database.cs.brown.edu/projects/mapreduce-vs-dbms/
It seems Hadoop currently doesn't stand much of a chance against an RDBMS (in the DWH area)...
In my opinion, Hadoop/Hive is a technology and not a solution; to solve a problem with Hadoop/Hive you will need to do a lot of work.
Regards,
Oleg

Similar Messages

  • Install Data Warehouse on SUSE Linux 9.3 with Oracle Rel 2 (9.2.0.4)

    Dear All
    One of my customers needs a data warehouse on SUSE Linux, and up to now I have been a core DBA working on production and development servers. So I want to know the necessary things to keep in mind before configuring a data warehouse for the customer. Can you please suggest the things I need to take care of?
    Any URLs or PDFs?
    Thanks in Advance
    Ravi

    But when I don't give the oracle user all rights, it isn't possible to proceed with the installation
    But if you give those rights, it's a security hole. From your description I guess you have environment settings similar to:
    ORACLE_BASE=/
    ORACLE_HOME=/<directory_name>
    Why not install into a deeper directory such as /opt, or a directory of your own? For example:
    ORACLE_BASE=/myoracledir
    ORACLE_HOME=$ORACLE_BASE/<directory_name>
    Then chown -R oracle:dba /myoracledir.
    Then oracle will be the owner of just the /myoracledir directory and all its subdirectories.
    I just could look at the error details, but they didn't describe the error anyway.
    That's not quite true. You can find the error log in /tmp/OraInstallYYYY-MM-DD_HH_MI_SS.
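    A minimal sketch of what that layout looks like end to end (the directory names below are just examples, not a mandated standard):
    mkdir -p /u01/app/oracle/product/9.2.0
    chown -R oracle:dba /u01/app/oracle
    # then, in the oracle user's environment:
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/9.2.0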

  • TS4020 I live in a house with multiple iCloud users. When they try to turn on "Find my computer" they get the message that they will have to disable my "Find my computer" setting in order to enable theirs. How can they all be enabled at the same time?

    Try this support document for information on how to contact Apple about Apple ID account security: Apple ID: Contacting Apple for help with Apple ID account security

  • I was told by Comcast that we had a computer in the house with a malware virus; they even said that they were going to terminate our service if we did not get it fixed. Now this week we hear that there is a Trojan malware virus. How do we get rid of it?

    Hello,
    Flashback - Detect and remove the uprising Mac OS X Trojan...
    http://www.mac-and-i.net/2012/04/flashback-detect-and-remove-uprising.html
    In order to avoid detection, the installer will first look for the presence of some antivirus tools and other utilities that might be present on a power user's system, which according to F-Secure include the following:
    /Library/Little Snitch
    /Developer/Applications/Xcode.app/Contents/MacOS/Xcode
    /Applications/VirusBarrier X6.app
    /Applications/iAntiVirus/iAntiVirus.app
    /Applications/avast!.app
    /Applications/ClamXav.app
    /Applications/HTTPScoop.app
    /Applications/Packet Peeper.app
    If these tools are found, then the malware deletes itself in an attempt to prevent detection by those who have the means and capability to do so. Many malware programs use this behavior, as was seen in others such as the Tsunami malware bot.
    http://reviews.cnet.com/8301-13727_7-57410096-263/how-to-remove-the-flashback-malware-from-os-x/
    http://x704.net/bbs/viewtopic.php?f=8&t=5844&p=70660#p70660
    Check now whether your Mac is infected by Backdoor.Flashback.39!
    http://public.dev.drweb.com/april/

  • Install Oracle 9i client on a computer with Oracle 10g Client

    Hello all!!
    I'm using a computer with the Oracle 10g client, and I access two databases. One server has Oracle 10g server, and the other server has Oracle 9i server.
    I want to install the 9i client so that I have it on my computer as well. I have read that I just have to install it in a different ORACLE_HOME, but I have a doubt.
    If I install the 9i client, will I be able to use the utilities (exp, expdp, sqlldr) of both versions, or which version will be used on the command line for the utilities?
    Thanks a lot!!!

    To start with, Home Selector is a "9i thing" (and older); in 10g there's the OUI -> Installed Products... -> Environment tab instead.
    About running the 9i dbca with other Oracle products installed: if you use launch.exe to start dbca via the start menu shortcut, it uses ora92\assistants\dbca as the working directory ("start in"). So when it isn't looking for "numbered" DLLs like oracore9.dll, a later/different version will be loaded if it comes earlier in the path, since loading goes through the jre directories, then ora92\assistants\dbca, then the directories from the path variable. From there it may start to link in libraries from different homes (on my laptop, "launch.exe dbca" uses files from 2 additional home paths!).
    But if you instead run dbca.bat from ora92\bin, it looks in the right directories first and there's no DLL mix-up. (But only when using the correct cwd!)
    Plain and simple: stay very far away from multiple-general-home installs. It is possible to install all right, in different homes and all very nicely, but there will be pain when you decide to use the stuff (perhaps with sql*plus as the only exception, but I've run into bugs with that too).
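    For example, the safe way to start the 9i dbca looks like this (the path is an example for a typical ora92 home; adjust to yours):
    rem run dbca.bat from its own home so the right DLLs are found first
    cd /d C:\oracle\ora92\bin
    dbca.bat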
    Message was edited by:
    orafad

  • Issue with Oracle Distributed Document Capture in table update

    Hi All,
    I installed ODDC and configured it with Oracle 11gR2 for document commit.
    I have a table with 4 fields: id, c_number, content, mime_type. I am storing the image in content, which is a BLOB.
    When I import and send a document from the WebCapture screen, the document sends successfully, but the data is not committed to the database table.
    The pak files are generated in "/Document Capture/Webpages/ClientAcces".
    How do I commit these files to the database?
    Am I missing something in the configuration? Please suggest.
    thanks
    nr
    Edited by: pnr on Jul 30, 2012 5:50 AM
    Edited by: pnr on 30 Jul, 2012 7:54 AM

    It's likely that the Oracle Distributed Document Capture Service (ecNetService) hasn't been started; it is responsible for the pak file processing, and it doesn't start automatically after install.
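    If that's the case, starting it should get the pak files picked up. A minimal check from a Windows console (assuming ecNetService is the registered service name, per the parenthesis above):
    net start ecNetService
    You can also set the service to start automatically in services.msc so it survives reboots.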
    Regards,
    Boris
    Edited by: tombo on 2012.08.01 04:49

  • What I need to install on a new computer - has a program that works with Oracle

    hi
    what do I need to install on a new computer that has a program that works with Oracle 11.2?
    The database is on the server, and the new computer connects to the server.
    Can I get a link to download this?
    thanks in advance

    thanks for the help!
    but there are so many downloads... what should I download?
    I can't find any 11g client to download.
    Do I need to download the full Oracle 11g for this?
    Can you tell me exactly what to download?
    thanks in advance

  • How can I connect to a remote server which has Oracle installed

    My computer is at home, and it has the Oracle 9i client installed; the server, which is at my company, also has Oracle 9i installed. Now I want to use sql*plus to connect to the server's database over the internet. I have created a listener service name on my home computer, but the test doesn't succeed and sql*plus can't connect to the database. Who can tell me how to do it? Thank you!

    Technical questions need to be addressed to one of the technical forums -- Products | Database | Database - General in this case.
    Unless you are VPN-ing in, or the DBAs at your company have configured Oracle Connection Manager to allow you to connect through the firewall to the database, I doubt this will work. If they have configured Connection Manager, you'll have to ask them to provide you with connection information. 99.9% of the time, you do not want people to access a database directly over the internet.
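    If they do set something up for you, the tnsnames.ora entry on your home machine would look roughly like this (the host, port and service name are placeholders they would have to supply):
    COMPANY_DB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = cman.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )
    after which sqlplus your_user@COMPANY_DB would be the test.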
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Regarding OBIEE Integration With Oracle EBS

    Hi All,
    I have installed BIAPPS (7.9.6.1) (components installed: two 11g DBs (one OLTP, one OLAP) on RHEL 4, Oracle Application Server 10g, Informatica PowerCenter, DAC and OBIEE). After that I need to integrate OBIEE with Oracle EBS. I have installed 11.5.10.2, upgraded the database to 10gR2 and applied patch ATG 6 (prerequisites of the integration).
    Steps I have done for the integration:
    I installed the Financials module in BIAPPS and made changes to the prebuilt rpd, i.e. OracleBIAnalyticsApps.rpd, using the Administration tool. I created a data source for EBS and set it in the connection pool, and similarly in the data warehouse connection pool. When I test from the Administration tool (Init Block > EBS Security Context > Edit Data Source) and click on test, I get the error NQ_SESSION.ICX_SESSION_COOKIE has no value definition.
    I copied the rpd to the Linux box where my OBIEE server resides and changed the rpd name in NQSConfig.INI. I added ORACLE_HOME, TNS_ADMIN etc. in the user.sh file, and added the same DSN names I had created on Windows to the odbc.ini file. But NQServer.log says
    [nQSError: 43059] Init block 'LAST_SYND_DS_YTD_QTD': Dynamic refresh of repository scope variables has failed.
    [nQSError: 17001] Oracle Error code: 12154, message: ORA-12154: TNS:could not resolve the connect identifier specified
    at OCI call OCIServerAttach.
    [nQSError: 17014] Could not connect to Oracle database.
    Please let me know any solution for this.
    Thanks,
    Manikandan

    Hi Manikandan,
    Please check the TNS entries for the connection pool that you assigned for LAST_SYND_DS_YTD_QTD.
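    A quick sanity check from the OBIEE server box (the alias below is a placeholder; use whatever connect identifier the connection pool's data source points to):
    tnsping BIAPPS_DW
    sqlplus dw_user@BIAPPS_DW
    If tnsping already fails, the ORA-12154 is coming from a missing or misspelled alias, or from OBIEE seeing a different TNS_ADMIN in user.sh than the one you edited.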
    thanks,
    saichand

  • -- Establishing contact with Oracle 8i & JDeveloper Teams

    Hi,
    We are a tool vendor (Quintessence Systems). We have developed a generalized, fully automated tool which generates 100% pure Java classes from stored PL/SQL objects (packages, procedures, functions etc.).
    The tool itself, written entirely in Java, parses and tokenizes PL/SQL objects, then rapidly generates 100% pure Java. This provides existing Oracle customers with the ability to:
    - Continue developing in PL/SQL for as long as necessary while transparently gaining the benefits of a Java deployment of their business logic
    - Migrate their PL/SQL automatically to Java if required
    - Create EJBs from stored procedures in conjunction with JDeveloper and gain the benefits of distributed component-based computing
    - Automatically deploy PL/SQL stored procedures in Java in an Oracle Application Server
    We would be very interested in developing close contacts within your group (at both technical and product management levels), as I believe this technology could become an important Internet, distributed computing and e-commerce enabler for your customers.
    Please advise me who you think would be useful people for us to contact by email, at both a technical and a product management level.
    I look forward to your reply. Please email me directly at
    [email protected]
    Thanks.
    Elton Barendse
    CEO
    Quintessence Systems

    Hi Gloria,
    If you mean, "Will it work?" the answer is yes. Oracle9i JDeveloper should work with either the 8i or 9i versions of the database.
    Something in your question (perhaps it was that you mentioned you had a "free" copy of JDeveloper) makes me unsure whether this is what you were asking, however. If your question is "Is it legal?" then it depends on what you want to do with it. You're welcome to play around with JDeveloper and the database all you want; explore its features and evaluate it. However, if you want to deploy an application developed with JDeveloper in a commercial or other production setting, you do need to buy a license.
    Hope this helps,
    Avrom

  • Data Warehousing Database Optimization Parameters

    Hi,
    Does anyone have any standard Oracle optimization parameter values for a small data warehousing environment?
    Any suggestions are welcome.
    thanks

    As with any tuning problem, there is no "one size fits all" approach. The standard tuning methodology applies here as it does anywhere:
    - Figure out how quickly something needs to run
    - If it isn't running quickly enough, figure out what is taking so much time
    - Once you know what is taking so much time, figure out how to reduce the time required. That may involve a global configuration change, it may involve tuning SQL, etc.
    In addition, specifying at least the Oracle version would be critical -- there's a world of difference between an 8.1.7 database and an 11.1 database. If you are managing SGA & PGA separately, data warehouses generally allocate a larger fraction of RAM to the PGA than their OLTP cousins. They generally make greater use of parallelism, and they more commonly use compression, bitmap indexes, and partitioning.
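    For instance, if you manage SGA and PGA separately, tilting memory toward the PGA is just (the sizes below are purely illustrative, not recommendations):
    ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE=BOTH;
    ALTER SYSTEM SET sga_target = 6G SCOPE=BOTH;
    The right split depends entirely on the measurements from the steps above.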
    Justin

  • My iPad has to be really really close to the router for it to have internet. How can I make it so I can go around the house with it?

    My iPad has to be really, really close to the router to have internet. How can I make it so I can go around the house with it? My iPad has been like this for a couple of months, and it has really irritated me. My friend came over and had to download iTunes for something on the computer. He needed my iPad; he tried it, and it wasn't really near the router.

    I'd begin by reviewing the following:
    http://support.apple.com/kb/TS1398

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load an Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, then performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$..$I table in memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. I am not sure whether it is a bug or not.
    What I found as a workaround is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure that reads the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB and loads it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this $S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the TOKEN_INFO column of the $I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem with the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
      (ID NUMBER(9,0) NOT NULL ENABLE,
       XML_DATA XMLTYPE)
      XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size; it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the relative difference is not as extreme, but it still went from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the $I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- must be RAW, not VARCHAR2: dbms_lob.read on a BLOB fills a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
      s := 'select token_info from DR$i_test$I';
      open c2 for s;
      loop
        fetch c2 into b;
        exit when c2%notfound;
        siz := 10;
        off := 1;
        cntr := 0;
        if dbms_lob.getlength(b) > 0 then
          begin
            -- read 10 bytes out of every 4K chunk: enough to touch each block
            -- so that it gets pulled into the keep buffer pool
            loop
              dbms_lob.read(b, siz, off, buff);
              cntr := cntr + 1;
              off := off + 4096;
            end loop;
          exception when no_data_found then
            -- dbms_lob.read raises NO_DATA_FOUND once the offset runs past the end of the LOB
            if cntr > 0 then
              dbms_output.put_line('4K chunks fetched: '||cntr);
            end if;
          end;
        end if;
      end loop;
      close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently, so I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application ?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do: MongoDB explicitly says that the indexes must fit in memory, and Elasticsearch runs on JVMs whose heaps are likewise in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is executed continuously.
    I think that the algorithm Oracle uses to decide which blocks to keep in the cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the In-Memory option, with which you can pin tables or columns in memory, with compression etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
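    For example, something like this should pin the $I table in the column store (a sketch only: it assumes the In-Memory option is licensed and inmemory_size is set, and I have not verified how the BLOB TOKEN_INFO column behaves there):
    ALTER TABLE DR$I_TEST$I INMEMORY MEMCOMPRESS FOR QUERY LOW;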
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as "not a bug", but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    And for that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because the trick of R. Ford no longer works.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
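    In SQL that amounts to something like this (the sizes and the datafile name are placeholders):
    alter system set db_keep_cache_size = 0 scope=both;
    alter system set db_16k_cache_size = 4G scope=both;
    create tablespace TEXT_16K datafile '/u01/oradata/text16k_01.dbf' size 10G blocksize 16K;
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TEXT_16K');
    and then recreate the index with that storage preference so the $I table actually lands in the 16K tablespace.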
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA!!! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 is what avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in MetaLink note 1645634.1, but there for a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find one on MetaLink.
    Other points of attention with text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - this, combined with the fact that on a RAC you may see no activity at all on your box, can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works; I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Problem with Oracle 11g (32-bit) installation on Windows 7 Ultimate Edition

    Hello all,
    I have a problem with the Oracle 11g (32-bit) installation on Windows 7 Ultimate Edition (32-bit).
    I had successfully installed it immediately after the OS installation. But today I decided to deinstall it and go for the 32-bit Oracle 10g version.
    Everything went normally during installation, but the services are not present in services.msc, and dbca and netca throw exceptions.
    Now I have tried to deinstall it and go for 11g again, but it's the same story.
    Can anybody give me a solution for this?
    -Regards
    Rajesh Menon

    Saqib Alam wrote:
    I recently installed Oracle 11g R1 on Windows 7 Ultimate; I installed it and it works perfectly.
    Your problem is that you installed the latest version and are now trying to install an old version.
    You need to uninstall 10g and delete Oracle from the services; if the problem persists, you should install a fresh Windows 7.
    Regards
    Saqib

    No need to install a fresh OS. That's like tearing your house down just because you wired a lamp wrong and blew a circuit breaker.
    There are MetaLink notes on how to eradicate an Oracle install from Windows, but it boils down to this:
    Stop all Oracle services
    In the registry:
    - Delete all Oracle services from HKLM\SYSTEM\CurrentControlSet\Services
    - Delete the entire Oracle folder from HKLM\Software
    reboot
    Delete the ORACLE_HOME directory and any other Oracle-related directories/files. Offhand, it seems like there is also an Oracle directory under Program Files.
    reboot
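    From an elevated command prompt, the service and registry part boils down to something like this (the service and key names vary by install, so confirm them in services.msc/regedit first):
    sc delete OracleServiceORCL
    reg delete "HKLM\SOFTWARE\ORACLE" /f
    rem repeat sc delete for every Oracle* service shown under
    rem HKLM\SYSTEM\CurrentControlSet\Services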

  • Steps to UTF-8 Encoding with Oracle 8i and Weblogic 6.1SP1

    What are the steps to UTF-8 encoding with Oracle 8i and WebLogic 6.1SP1?
    I have:
    - an Oracle 8.1.5 database created with character set=UTF8 and national character set=UTF8
    - WebLogic 6.1SP1 without any encoding mechanism set (though I did play with
    <jsp-param><param-name>encoding</param-name>
    <param-value>UTF-8</param-value>
    </jsp-param>
    in weblogic.xml for a while, though it seemed not to make a difference)
    - JSP pages set to content='text/html; charset=UTF-8'
    - JSP form POSTs set to enctype="UTF-8"
    I can copy and paste Chinese kanji from a UTF-8 encoded web page into form text boxes, but when I post the data it comes back as different kanji. Once it is posted, the kanji stays the same on repeated posts. The same kanji text also looks different when viewed in a form text box than when viewed as straight text on the page.
    Is there anything else? Or am I already encoding characters twice?
    Please help!
    Mel Christie

    Hi Experts,
    Please correct me if I am asking the question in the wrong way.
    I have ArcGIS with an Oracle 10gR2 database on a production server.
    My task is to connect the AutoCAD software (on a client computer connected over the LAN) to ArcGIS in order to access the toposheets available in the SDE user.
    When I try to connect I get this error: The specified credentials are not valid or the provider is not able to establish a connection.
    I checked the path to the production server by pinging, and the user/passcode too, but that didn't help.
    Please help me with this, it's very urgent.
    Thanks.
    Edited by: user13355644 on Jul 3, 2010 3:53 AM
    Edited by: user13355644 on Jul 22, 2011 2:55 AM

Maybe you are looking for

  • Error While defining the content server

    Dear Folks, We are trying to define the content server in our server. It is required for the integration of SAP with Documentum. We are trying to define it in the below path: Cross-Application Components - Document Management - General Data - Settings for storage

  • How can I get rid of album artwork in lists?

    I cannot view my song catalog by album or genre in iTunes 11 without artwork - images of albums, or, more often, a blank square with a treble clef. This makes scrolling through lists very difficult. Is there a way to shift to a simple text list?

  • Double rewards for a single answer :p

    Hi SDN, having a look at my list of rewarded points, the following thread: DYNPRO_SEND_IN_BACKGROUND is listed with 6 and 10 points for my only answer to the thread. Although I shouldn't actually complain about this, it shouldn't be possible nor

  • Crystal XI Reports and Visual Foxpro Tables

    Post Author: Pamela Vincent CA Forum: Data Connectivity and SQL I have installed XI and tried to add an ODBC connection for a database package written in Visual FoxPro 9.0. In online help there appears to be a FoxPro option in the connection info diag

  • Logo Overlaps with Data when viewed in HTML mode

    Hi, My report looks weird when I view the report in HTML mode, i.e., my report logo overlaps with the report row headers, but when I view the same report in Java Applet mode the report looks good, and also when I export the report it looks good. I tried del