Using Unix tools with Oracle on Windows

I currently have an Oracle database installed on Windows, but I would like to use Unix scripting tools and commands when working with my database.
Can anyone advise a good way to configure my system so that I can connect to my database from a Unix-like environment?
Thanks in advance!

If installing a Unix-like system is not feasible (or is not an option), use this:
http://cygwin.com/
http://www.cygwin.com/mirrors.html
This is not exactly like running Unix tools against the Oracle DB, but it gives you access to a Unix-type shell for almost all of your Unix shell-scripting needs.
Message was edited by: Kamal Kishore
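
For example, once Cygwin is installed, the Windows sqlplus.exe can be driven from a bash script just as on Unix. A minimal sketch, assuming sqlplus is on your PATH and using scott/tiger@ORCL as a placeholder connect string:

#!/bin/bash
# Query the Windows-hosted Oracle database from Cygwin's bash shell.
# scott/tiger@ORCL is a placeholder: substitute your own user/password@alias.
sqlplus -s scott/tiger@ORCL <<'EOF'
set pagesize 0 feedback off
select sysdate from dual;
exit
EOF

From there the output can be piped through grep, awk, sed, and the rest of the usual Unix toolchain.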

Similar Messages

  • Error while patching SAP NetWeaver with Oracle on Windows

    Hi!
    I am about to patch my SAP system (SAP NetWeaver 04S) with Oracle on Windows.
    The following error occurs:
    Import phase 'CHECK_REQUIREMENTS' (05.02.2008, 16:00:39)
    Error during executing the tp command 'tp CONNECT B11   ...'
    tp return code: '0232' , tp message: 'connect failed' , tp output:
      This is tp version 340.16.38 (release 640, unicode enabled)
      ERROR: Connect to B11 failed (20080205160039, probably wrong environment).
      TRACE-INFO: 1: [dev trc ,00000] Tue Feb 05 16:00:39 2008 29426 0.029426
      TRACE-INFO: 2: [dev trc ,00000] load shared library (dboraslib.dll), hdl 0 38 0.029464
      TRACE-INFO: 3: [dev trc ,00000] using "E:\usr\sap\B11\SYS\exe\run\dboraslib.dll" 35 0.029499
      TRACE-INFO: 4: [dbsloci. ,00000] *** ERROR => Cannot connect: dbs/ora/tnsname in Profile missing
      TRACE-INFO: 5:   738 0.030237
      tp returncode summary:
      TOOLS: Highest return code of single steps was: 0
      ERRORS: Highest tp internal error was: 0232
    standard output from tp and from tools called by tp:
    Can someone give me some recommendations on how to proceed?
    Is something wrong with the Oracle environment?
    Thank you very much!
    regards
    Thom

    Hi!
    Thank you!
    The problem is that my <sapsid> user (B11adm) cannot log in to the OS, but I can log in as another <sid> user (E11adm) or as administrator.
    If I execute the command r3trans -d as the other <sid> user (E11adm), I get:
    2EETW169 no connect possible: "DBMS = ORACLE
    --- ORACLE_SID = 'B11'"
    r3trans finished (0012).
    The result of the command set:
    ALLUSERSPROFILE=C:\Documents and Settings\All Users
    APPDATA=C:\Documents and Settings\e11adm\Application Data
    CLIENTNAME=DZEM9001
    ClusterLog=C:\WINDOWS\Cluster\cluster.log
    CommonProgramFiles=C:\Program Files\Common Files
    COMPUTERNAME=OLDA8005
    ComSpec=C:\WINDOWS\system32\cmd.exe
    dbms_type=ORA
    HOMEDRIVE=C:
    HOMEPATH=\Documents and Settings\e11adm
    JAVA_HOME=C:\j2sdk1.4.2_06
    LOGONSERVER=\\OLDA8001
    NUMBER_OF_PROCESSORS=2
    ORACLE_HOME=E:\oracle\B11920
    ORACLE_SID=B11
    OS=Windows_NT
    Path=C:\j2sdk1.4.2_06\bin;E:\usr\sap\B11\sys\exe\run;E:\oracle\B11920\jre\1.4.2\bin\client;E:\oracle\B11920\jre\1.4.2\bin;E:\oracle\B11920\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Files\Oracle\jre\1.1.8\bin;E:\oracle\BWX920\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Microsoft SQL Server\80\Tools\Binn\;E:\usr\sap\B11\SYS\exe\uc\NtI386
    PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH
    PROCESSOR_ARCHITECTURE=x86
    PROCESSOR_IDENTIFIER=x86 Family 6 Model 8 Stepping 6, GenuineIntel
    PROCESSOR_LEVEL=6
    PROCESSOR_REVISION=0806
    ProgramFiles=C:\Program Files
    PROMPT=$P$G
    RFCEXEC_SEC_FILE_PATH=E:\usr\sap\B11\SYS\exe\uc\NtI386
    SESSIONNAME=RDP-Tcp#22
    SystemDrive=C:
    SystemRoot=C:\WINDOWS
    TEMP=C:\DOCUME~1\e11adm\LOCALS~1\Temp\1
    TMP=C:\DOCUME~1\e11adm\LOCALS~1\Temp\1
    USERDNSDOMAIN=PORTAL.LOCAL
    USERDOMAIN=PORTALGROUP
    USERNAME=e11adm
    USERPROFILE=C:\Documents and Settings\e11adm
    windir=C:\WINDOWS
    The result of tnsping B11:
    TNS Ping Utility for 32-bit Windows: Version 9.2.0.5.0 - Production on 07-FEB-2008 10:34:15
    Copyright (c) 1997 Oracle Corporation.  All rights reserved.
    Used parameter files:
    E:\oracle\B11920\network\admin\sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (SDU = 32768) (ADDRESS_LIST = (ADDRESS = (COMMUNITY = SAP.WORLD) (PROTOCOL = TCP) (HOST = olda8005) (PORT = 1527))) (CONNECT_DATA = (SID = B11) (GLOBAL_NAME = B11.WORLD)))
    OK (50 msec)
    Is it possible to activate the right sidadm user (B11adm)?
    Thank you!
    regards
    Thom
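
    As a general pre-flight check for this kind of "probably wrong environment" error, once B11adm can log in again it is worth verifying the Oracle environment before retrying tp. A sketch from a Unix-style shell (Cygwin/MKS) on the host; the variable names follow the trace above, and treating dbs_ora_tnsname as the environment counterpart of the missing dbs/ora/tnsname profile parameter is an assumption to confirm for your release:
      # Show the Oracle environment the SAP tools will pick up
      echo "ORACLE_SID=$ORACLE_SID"
      echo "ORACLE_HOME=$ORACLE_HOME"
      echo "dbs_ora_tnsname=$dbs_ora_tnsname"   # assumption: check your instance profile
      # The alias must resolve and the listener must answer
      tnsping "$ORACLE_SID"
      # r3trans exits 0 and writes trans.log when the DB connect works
      r3trans -d && echo "DB connect OK"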

  • How can I use Pro Tools with a FireWire interface and a FireWire external HD

    If the MacBook Pro has only one FireWire jack and I want to use Pro Tools with an M-Audio 1814 FireWire interface and an external FireWire HD, how do I get around having just that single jack?
    Pro Tools says that they DO NOT recommend using USB 2 hard drives as recording drives.
    If I plug my external HD into the back of the 1814 it is way too slow to use; plus, I don't know why, but Pro Tools tells me that my external HD is "not a valid audio volume". It is a Mac OS Extended (Journaled) external FireWire HD (G-TECH G-DRIVE).
    Stuck...
    Message was edited by: Mezcla

    You could consider getting a FireWire hub, or get an ExpressCard/34 adapter with a FireWire port.
    I personally use a Belkin ExpressCard adapter that gives me an extra FireWire port as well as two extra USB 2.0 ports.
    Good luck.

  • Anybody used PlateSpin or another V2V migration tool with Oracle VM?

    We're actually trying to migrate a few XenServer guests over to Oracle VM, but would like a more comprehensive package. One that keeps coming up is PlateSpin, although I'm by no means glued to it as a solution. Ideally, this would be a reasonably flexible tool with p2v, v2v, and v2p. Our environment is currently Xen only (Citrix and Oracle).

    886374 wrote:
    In playing with OVM 3.0 so far, I see that local storage (as the accelerator cards would show up as) is not really appropriate for production systems. It appears I can create a repository on these physical disks and then present them to the VM guest, then add these cards as my Oracle ASM disks for my "fast" storage, and do the same with the Fibre Channel storage for my "slower" disks. Because of the local storage used for part of the VM guest, we would not be able to live migrate or have high availability, I believe. Is this correct?
    Correct.
    If we have the two servers as mentioned above, could I take the database VM down and move it to the other server (i.e. cold migration)? The local storage (i.e. the accelerator cards) could physically be moved from one physical host to the other, but since this is virtualized disk, I am not sure if this would be allowed or how it could be recognized on the other Oracle VM Server. This might be a "dream", but it is something we would like to do if an emergency arose or if more maintenance needed to be done on Server 1. Any comments?
    You would have to shut down the VM, migrate it to an unassigned folder, edit the VM, remove the local storage from Server 1, migrate to Server 2 and add the local storage from Server 2. Whether or not this means moving your hardware is up to you.
    Does anyone have other solutions to run solid state disks (approximately 2TB worth) with Oracle VM, running a single guest that runs the database? We have been unable to find iSCSI or Fibre Channel enclosures that can handle solid state drives in the 1 to 2 TB range of capacity. So perhaps there are other enclosures out there that can act as shared storage and operate with SSDs. Thank you for any comments and suggestions.
    I know the Sun storage appliances use hybrid storage, i.e. a mix of SSD/spinning disks, but I don't know the details. Our Exadata machines have 5TB of Flash storage per cell, so I know we have such options.

  • Using OLE DB with C++ on Windows

    I am trying to get this sample code (see below) to run.
    Has anyone been successful in running this code (OleMR)?
    I get an E_FAIL error from the pIDBInitialize->Initialize() statement in the OLEHandler.cpp file.
    Thank you,
    Fred
    Sample code came from Oracle.
    http://otn.oracle.com/sample_code/tech/windows/ole_db/oledb8/content.html
    Oracle Technology Network > Sample Code > Windows > OLE DB > Oledb8 >
    VC++ OLEDB Samples
    Returning Multiple recordsets and passing NLS data to a stored procedure using OLEDB Interfaces with VC++ Download (ZIP,100 KB)
    This sample shows how to return multiple recordsets from an Oracle stored procedure and pass NVARCHAR2 parameters to the stored procedure using OLEDB Interfaces with VC++.
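
    E_FAIL from IDBInitialize->Initialize() is a generic failure, so before digging into the C++ side it is worth confirming that the client can reach the database at all. A quick sanity check from a shell (ORCL and scott/tiger are placeholders for the alias and credentials in your connection string):
      # Does the TNS alias resolve and does the listener answer?
      tnsping ORCL
      # Do the credentials from the OLE DB connection string actually work?
      echo "select 1 from dual;" | sqlplus -s scott/tiger@ORCL
    If both succeed, the problem is in the provider setup rather than connectivity.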


  • Is it possible to re-use Unix tools?

    Hi, I have done some programming in C on Linux; lately I have had some ideas for Mac apps, and now I have some questions...
    I suppose I'll have to learn Obj-C and Cocoa, as that's the standard toolset for programming on the Mac? Now, what about the Unix tools it came with; can I reuse them to some degree? I believe some GUI tools that OS X comes with reuse stuff from Darwin... like how Disk Utility uses diskutil...
    So I am wondering, if I want to start programming, what should I learn first? As I said, I am comfortable with C and, coming from a Linux background, I know a fair bit about Unix, shell scripts, cron jobs, etc... but I would like to know the way to make GUI apps on OS X that reuse functionality from the Darwin Unix tools. Does Cocoa provide this functionality?
    And I do have set aside some money to invest in a Cocoa book or something.
    Thanks.
    Message was edited by: Sunnnz
    Message was edited by: Sunnnz

    Sunnnz,
    One option is to use standard C calls to execute other command line tools, such as system(), fork()/exec(), popen()/pclose().
    But if your intent is to build full-fledged GUI applications, then you will probably need to learn some of the other APIs and frameworks. Cocoa gives you NSTask, which wraps much of the complexity of executing other tools into an Objective-C class.
    AppleScript (using Script Editor) provides a "do shell script" command that can be used to execute shell scripts or command line tools from within your AppleScripts. However, plain vanilla AppleScript doesn't give you very much versatility for creating a GUI interface for your application.
    But, you can also use AppleScript Studio (within Xcode). AS Studio merges AppleScript with Cocoa interface components and allows you to easily create fairly complex GUI applications that wrap calls to command line tools (using "do shell script").
    One drawback of using "do shell script" within AppleScript or AS Studio is that you can't create an interactive session with the tool you launch. In other words, "do shell script" expects the tool it runs to run to completion before the "do shell script" call returns control to your AppleScript. In many cases this is not a problem since you can pass the tool or shell script all the command line options it needs and then get the result of the execution back into an AppleScript variable. But if your app truly needs to interact with the other process while it's running then you'll have to use something like Cocoa's NSTask or the lower level C functions mentioned above.
    Steve
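
    On the shell side, that non-interactive model simply means a script should take everything it needs as arguments and report through stdout and its exit status. A hypothetical sketch (backup.sh and its arguments are made up for illustration; run it with bash):
      # backup.sh -- the kind of tool a "do shell script" or NSTask wrapper calls.
      # All input arrives as arguments; the result goes to stdout, so the caller
      # collects it when the process finishes (no interaction required).
      src="$1"; dest="$2"
      if [ -z "$src" ] || [ -z "$dest" ]; then
          echo "usage: backup.sh <source> <dest>" >&2
          exit 1
      fi
      cp -R "$src" "$dest" && echo "copied $src to $dest"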

  • Problems using SQL*Loader with Oracle SQL Developer

    I have been using TOAD and was able to import large files (millions of rows of data) in various formats into a table in an Oracle database. My company recently decided not to renew any more TOAD licenses and to go with Oracle SQL Developer. The Oracle database is on a corporate server, and I access it via the Oracle client on my machine. Oracle SQL Developer and TOAD are local on my desktop and connect through TNSnames on the Windows XP platform. I have no issues using SQL*Loader via the import wizard in TOAD to import the data in these large files into an Oracle table and produce a log file. Loading the same files via SQL*Loader in SQL Developer freezes up my machine, and I cannot get it to produce a log file. Please help!

    I am using SQL Developer version 3.0.04. Yes, I have tried it with a smaller file, with no success. What is odd is that the log file is not even created. What is created is a .bat file, a control file, and a .sh file, but no log file. The steps that I take:
    1. Right-click on the table I want to import to, or go to Actions
    2. Import Data
    3. Find file to import
    4. Data Preview - all fields entered according to file
    5. Import Method - SQL*Loader utility
    6. Column Definitions - mapped
    7. Options - directory of files set
    8. Finish
    With the above steps I was not able to import 255 rows of data. No log file was produced, so I don't know why it is failing.
    thanks.
    Edited by: user3261987 on Apr 16, 2012 1:23 PM
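
    As a workaround while the GUI issue is being chased down, the control file SQL Developer generates can be run by hand; invoking sqlldr directly always produces a log. A sketch, with scott/tiger@ORCL and table_name.* as placeholders for the connect string and the files the wizard wrote out:
      # Run the generated control file manually and force a log file
      sqlldr userid=scott/tiger@ORCL control=table_name.ctl \
             log=table_name.log bad=table_name.bad
      # table_name.log will state exactly why any rows were rejected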

  • Unix flavor with Oracle

    Hi,
    currently our databases are on AIX, but we plan to purchase a new Unix server from a vendor other than IBM. Could you please tell me which Unix flavor is more compatible with Oracle, Sun Solaris or HP-UX? We also need a list of SAN configurations. Thanks a lot in advance.

    Hello,
    In our firm we use HP-UX Itanium, so I can tell you that we experienced several bugs with this platform.
    Not Oracle bugs, but hardware and OS bugs.
    About the hardware bug: I was not able to patch the database (from 10.2.0.1 to 10.2.0.4); I got an ORA-07445.
    After calling Oracle support they quickly answered that I had to change the CPU.
    And they were right. One CPU was bad. The problem was solved after changing it.
    The exact error was the following:
    ORA-07445: exception encountered: core dump [pxnmove()+33] [SIGSEGV] [Address not mapped to object] [0x97FFFFFFBF3D2EF0] [] []
    You can find more detail in Note ID 398526.1 on Metalink.
    About the OS bug: it was about the C compiler of HP-UX Itanium.
    This issue was so well known that from 10.2.0.3 Oracle delivers pre-compiled modules.
    You'll find more detail in Note 386727.1 on Metalink.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Nov 21, 2009 12:03 PM
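
    Incidentally, on any platform a quick scan of the alert log shows whether you are hitting the same exception. A sketch assuming the default 10g background_dump_dest layout (adjust the paths for your install):
      # Look for ORA-07445 entries in the alert log (default 10g location)
      grep -n "ORA-07445" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log
      # Find the trace files that name the failing function, e.g. pxnmove
      grep -l "pxnmove" $ORACLE_BASE/admin/$ORACLE_SID/bdump/*.trc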

  • Using perspective tool with text in AI CS5

    Hi,
    I am new to AI CS5 and was wondering if someone could please tell me how to use the Perspective tool with text? Also, does the text have to be expanded in order to use the Perspective tool, or is that not recommended?
    Thank you,
    Diane

    Hi Diane,
    Here's a video showing how to do it: http://vector.tutsplus.com/tutorials/tools-tips/quick-tip-put-text-into-perspective-using-illustrator-cs5/
    For more, please refer to the Illustrator CS5 help docs on Perspective drawing.

  • Statement closed when using callable statements with Oracle XE

    hi all, I've got this problem with Oracle Express Edition 10g. I am also using OC4J v10.1.2.0.2. When working with a normal Oracle database it was working fine (I think the code was the same; it's been some time since I last tried, but you can see the code is very simple).
    So i just create a callable statement like this:
    CallableStatement cs = con.prepareCall(sentencia.toString(), ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE, ResultSet.HOLD_CURSORS_OVER_COMMIT);
    and then when trying to access the statement to register an out parameter like this
    cs.registerOutParameter(1, parámetros[0].getTipo());
    it gives this error:
    java.sql.SQLException: Statement was closed
    It's puzzling me because, as I said before, I think the same code was working OK with a normal Oracle database.
    Any idea what it could be?
    cheers

    Ah, okay, sorry, I've re-read your post.
    I believe you need to create a Clob object that encapsulates your XML file.
    I've never done this, but I would imagine it involves creating a class that implements the Clob interface and passing an instantiation of this class to the CallableStatement.
    Let me know how you get on.

  • Using Java Dictionary With Oracle

    Can someone tell me how I can configure my Java Dictionary with Oracle? Currently, if I create a table from my IDE and then deploy it, it deploys to the default SAP DB. How do I create tables directly in Oracle by just specifying the deploy option?
    And further on, I want my Web Dynpro application to talk to an Oracle DB. How do I achieve this?
    Please Help!!

    Hi,
    Does this mean that the additional features of the Java Dictionary (buffering etc.) can only be implemented if we ONLY connect to the SAP DB? -- YES
    If I have to utilize the services of the Java Dictionary in my application, can I only use the SAP DB? -- YES
    You deploy a dictionary project to create a table in the DB.
    When it is a DB created and used by the Web AS, it is possible to create tables in that DB with a dictionary project, using the standard connection parameters specified in Visual Admin.
    The Java Dictionary's purpose is to provide a DB abstraction: a standard interface for creating tables, whatever the DB used by the Web AS may be.
    Regards
    Bharathwaj
    Edited by: Bharathwaj R - Hope this makes it clear !

  • Using AutomatedTesting tools with Livecycle

    Hi All,
    I wanted to know about the possibility of using automated testing tools with LiveCycle ES workflows.
    Does LiveCycle support integration with RFT or any other testing tools? Any pointers will be highly appreciated.
    Thanks ,
    Leena

    I did find out that Cinema Tools doesn't support 2 perf at all, which brings up a whole new question of workflow. Hmm...

  • Using OWB mappings with Oracle CDC/Streams and LCRs

    Hi,
    Has anyone worked with Oracle Streams and OWB? We're looking to leverage Streams to update our data warehouse, applying changes from the transactional/source DB. At some point we remember hearing that OWB could leverage Streams, perhaps even using the Logical Change Records (LCRs) from Streams as input to mappings?
    Any thoughts much appreciated.
    Thanks,
    Jim Carter

    Hi Jim,
    We've built a fairly complex solution based on streams. We wanted to break up the various components into separate entities so that any network failure or individual component failure wouldn't cause issues for the other components. So, here goes:
    1) The OLTP source database is streaming LCRs to our data warehouse, where we keep an operational copy of production, updated daily from those streams. This allows various operational reports to be run or rerun in a given day with the end-of-yesterday picture without impacting performance on the source system.
    2) Our apply process on the data mart side actually updates TWO copies of the data. It does a default apply to our operational copy of production, and each of those tables has triggers that put a second copy of the data into daily partitioned tables. So, yesterday's partition has only the data that was actually changed yesterday. After the default apply, we walk the Oracle dependency tree to fill in all of the supporting information so that yesterday's partition includes all the data needed to run our ETL queries for that day.
    Example: Suppose yesterday an address for a customer was updated. Streams only knows about the change to the address record, so the automated process would only put that address record into the daily partition. The dependency walk fills in the associated customer, date of birth, etc. into that partition so that it holds all of the data related to that address record, without having to query against the complete tables. By the same token, a change to some other customer info will backfill the address record for this customer too.
    Now, our ETL queries run against views created over these partitioned tables so that they are only looking at the data for that day (the view s_address joins from our control tables to the partitioned address table so that we are only seeing one day's address records). This means that the ETL runs against the minimal subset of data required to update dimensions and create facts. It also means that, for example, if there is a problem with the ETL we can suspend running it while we fix the problem, and the streaming process will just go on filling partitions until we are ready to re-launch ETL and catch up, one day at a time. We also back up the data mart after each load so that, if we discover an error in ETL logic and need to rebuild, we can restore the data mart to a given day and then reprocess the daily partitions in order very simply.
    We have added control fields in those partitioned tables that show which record was inserted/updated/deleted in production and which was added by the dependency walk so, if necessary, our ETL can determine which data elements were the ones that changed. As we do daily updates to the data mart as our finest grain, this process may update a given record in a given partition multiple times, so that the status of the record at the end of the day in that daily partition shows its final version for the day. So, for example, if you add an address record and then update it on the same day, the partition for that day will show the final updated version, and the control field will show it to be a newly inserted record for the day.
    This satisfies our business requirements. Yours may be different.
    We have a set of control tables which manage what partition is being loaded from streams, and which have been loaded via ETL to the datamart. The only limitation is that, of course, the ETL load can only go as far as the last partition completely loaded and closed from streams. And we manage the sizing of this staging system by pruning partitions.
    Now, this process IS complex, and requires a fair chunk of storage, but it provides us with the local daily static copy of the OLTP system for running operational reports against without impacting production, and a guaranteed minimal subset of the OLTP system for speedy ETL runs.
    As for referencing LCRs themselves, we did not go that route due to the dependency issues (one single LCR will almost never include all of the dependent data from which to update a dimension record or build a fact record, so we would have had to constantly link each one with the full data set to get all of that other info).
    Anyway - just thought our approach might give you some ideas as you work out your own approach.
    Cheers,
    Mike

  • Using multipool Connections with Oracle 8i OPS Parallel Server

    Does anyone have any experience using multipool JDBC connections with Oracle 8i OPS? During an Oracle node failure, we are seeing all of the connections fail. Any help you could give me here would be greatly appreciated!

    hal lavender wrote:
    Hi,
    I am trying to achieve Load Balancing & Failover of Database requests to two of the nodes in 8i OPS.
    Both the nodes are located in the same data center.
    Here comes the config of one of the connection pools.
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db1Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.160:1421:dbinst01" />
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db2Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.161:1421:dbinst01" />
    <JDBCMultiPool AlgorithmType="Load-Balancing" Name="pooledConnection598011"
    PoolList="db1Connection598011,db2Connection598011" Targets="ngusCluster12,ngusCluster34" />
    Please let me know if you need further information.
    Hal

    Hi Hal. All that seems fine, as it should be. Tell me how you enact a failure such that you'd expect one pool to still be good when the other is bad.
    thanks,
    Joe

  • Using PL/PDF with Oracle XE

    Hi, I'm looking for a paper-reporting solution to use with my HTML DB apps. PL/PDF sounds like it will do the job, but has anyone successfully integrated it with Oracle XE?

    PL/PDF uses Java:
    - page compression (plpdf_comp package): we issue a modified package for XE (without compression), see http://plpdf.com/dokumentumok/plpdf_comp_xe.sql.txt, with some restrictions
    - images (Oracle interMedia): this is optional; PL/PDF can work without interMedia
    You can try PL/PDF with XE.
    LL
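
    Installing that modified package is just a matter of running the downloaded script in the schema that owns PL/PDF from SQL*Plus. A sketch, with plpdf_user/password as a placeholder login and assuming the file from the URL above was saved without its .txt suffix:
      # Install the XE variant of the compression package in your schema
      sqlplus plpdf_user/password@XE @plpdf_comp_xe.sql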
