Oracle 10, Solaris and ZFS

Hello,
I'm planning to run Oracle 10 under Solaris 10 on a ZFS filesystem. Is Oracle 10 compatible with ZFS? The Solaris ARC process uses most of the available memory (RAM) for caching; as other processes demand more memory, the ARC releases it. Is such dynamic memory allocation compatible with Oracle, or does Oracle need fixed memory allocations?
Thanks,
- Karl-Josef

In principle all should be fine. ZFS obeys standard filesystem semantics, and Oracle will access it through the normal filesystem APIs. I'm not sure Oracle needs to officially state that it is compatible with ZFS; I would have thought it was the other way around: ZFS needs to behave as a fully compatible file system, so that any application will work on it.
ZFS has many neat design features in it. But be aware: it is a copy-on-write file system! It never updates an existing block on disk. Instead it writes out a new block in a new location with the updated data in it, and also writes out new parent metadata blocks that point to this block, and so on up the tree. This has some benefits for snapshotting a file system and for quick, reliable recovery in the event of a system crash. However, one update to one data block can cause a cascaded series of writes of many blocks to the disk.
This can have a major impact if you put your redo logs on ZFS. You need to consider this, and if possible do some comparison tests between ZFS and UFS with logging and direct I/O. Redo log writes on COMMIT are synchronous and must go all the way to the disk device itself. This could cause ZFS to have to do many physical disk writes, just for writing one redo log block.
Oracle needs its SGA memory up front, permanently allocated. Solaris should handle this properly, and release as much filesystem cache memory as needed when the Oracle shared memory is allocated. If it doesn't then Sun have messed up big time. But I cannot imagine this, so I am sure your Oracle SGA will be created fine.
I like the design of ZFS a lot. It has similarities with Oracle's ASM: a built-in volume manager that abstracts underlying raw disks into a pool of directly useful storage. ASM abstracts to pools for database storage objects; ZFS abstracts to pools for filesystems. Much better than simple volume managers that abstract raw disks into just logical disks, where you still end up with disks and the management issues that go with them. I'm still undecided as to whether it makes sense to put an OLTP database with a high transaction rate on it, given the extra writes incurred by ZFS.
I also assume you are going to use an 8 KB database block size to match the filesystem record size? You don't want small database writes leading to bigger ZFS writes, or vice versa.
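As a concrete starting point, the dataset layout could look roughly like this. The pool and dataset names (tank/oradata, tank/oralogs) are made up for illustration, and the recordsize values are suggestions to benchmark, not a definitive recommendation:

```shell
# Hypothetical pool/dataset names -- adjust to your environment.
# Match the dataset recordsize to the 8 KB database block size
# so one Oracle block write maps onto one ZFS record:
zfs create -o recordsize=8k -o mountpoint=/oradata tank/oradata

# Keep redo logs on their own dataset so they can be tuned (and
# compared against UFS with logging and direct I/O) independently:
zfs create -o recordsize=128k -o mountpoint=/oralogs tank/oralogs

# Verify the settings:
zfs get recordsize tank/oradata tank/oralogs
```

These commands need root and a real pool, so treat them as a sketch of the idea rather than a paste-ready recipe.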
John

Similar Messages

  • Oracle on Solaris and JAVA app on Windows clients

    Hi, I'm currently developing a Java app that accesses data from an Oracle database; these apps all run on Windows clients. I'm wondering about the best way to access the database. Is JDBC the only way to do this? Thanks in advance.

    Yes, you can download a JDBC driver from Oracle. I think this site might do it:
    http://otn.oracle.com/software/tech/java/sqlj_jdbc/content.html
    I'm assuming that you're using the JDBC-ODBC bridge driver now to get into Access. You'll just have to change the name of the JDBC driver class and the database URL to connect to Oracle.
    Once you have it, you'll have to make sure you put it in the right place for your app. If it's a Web-based app, it'll depend on the container you're running.
    - MOD

  • Migration from Netware 6.x  NSS to Solaris 10 ZFS

    Hi,
    I am looking at Solaris and ZFS as being the possible future of our file store for users.
    We have around 3TB (40million files) of data to transfer to ZFS from Netware 6.5 NSS volumes.
    What is the best way to do this?
    I have tried running utilities like richcopy (son of robocopy) , teracopy, fastcopy from a Windows client mapped to the Netware server via NCP and the Solaris server via Samba.
    In all tests the copy very quickly failed, rendering the Solaris server unusable. I imagine this has to do with the utilities expecting a destination NTFS filesystem, and ZFS combined with Samba does not fit the bill.
    I have tried running the old rsync client from Netware, but this does not seem to talk to Solaris rsyncd.
    As well as NCP, Netware has the ability to export its NSS volumes as a CIFS share.
    I have tried mounting a CIFS share of the Netware volume on Solaris... but as far as I am aware Solaris does not support mount -t smbfs, as this is Linux only. You can mount smb:// from the GUI (Nautilus), but this does not help a great deal. I was hoping to run maybe Midnight Commander, but I presume I would need a valid SMB share to the Netware volume from the command line?
    I really want to avoid the idea of staging on say NTFS first, then from NTFS to ZFS. A two part copy would take forever. It needs to be direct.
    BTW..I am not bothered about ACL's or quota. These can be backed up from Netware and reapplied with ZFS/chown/chmod commands.
    A wild creative thought did occur to me, as follows -
    OpenSolaris, unlike Solaris, has the in-kernel CIFS support, and hence SMB mounts from the command line (I presume), but I am not happy running OpenSolaris in production. So maybe I could mount the Netware NSS volume as a CIFS share on OpenSolaris (as a staging server), copy all the data to a ZFS pool locally, and then do a zfs send/receive to Solaris 10...
    Maybe not...
    I suppose there is FTP, if I can get it to work on Netware.
    I really need a utility with full error checking, and that can be left unattended.
    Any ideas?

    By unusable I mean that the ZFS Samba drive mapped to the Windows workstation died and was inaccessible.
    Logging onto the Solaris box after this from the console was almost impossible; there was a massive delay. When I did log in there appeared to be no network at all. There were no errors in the smbd log file, so I need to look at other logs to find out what is going on. Looking at the ZFS filesystem, some files had copied over before it died.
    After rebooting the Solaris box I then tried dragging and dropping the same files to the ZFS filesystem with the native Windows Explorer interface on the Windows client. This worked, in that the Solaris box did not die and the files were copying happily (until I manually stopped it). As we all know, Windows Explorer is not a safe, unattended way to copy large numbers of files.
    This tells me that the copy utilities on Windows are the problem, not native Windows copy/paste.

  • Parameters of NFS in Solaris 10 and Oracle Linux 6 with ZFS Storage 7420 in cluster without database

    Hello,
    I have a ZFS 7420 in a cluster, plus Solaris 10 and Oracle Linux 6 hosts without a database, and I need to mount NFS shares on these OSes, but I do not know which parameters are best for this.
    Which are the best parameters to mount an NFS share on Solaris 10 or Oracle Linux 6?
    Thanks
    Best regards.

    Hi Pascal,
    My question arises because when we mount NFS shares on some servers, for example Exadata Database Machine or SuperCluster, we need to mount those shares with specific parameters for best performance, for example:
    Exadata
    192.168.36.200:/export/dbname/backup1 /zfssa/dbname/backup1 nfs rw,bg,hard,nointr,rsize=131072,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0
    Super Cluster
    sscsn1-stor:/export/ssc-shares/share1      -       /export/share1     nfs     -       yes     rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    Now,
    My network is 10GBE
    What happens with normal servers running only the OS (Solaris and Linux)?
    Which parameters should I use for best performance?
    Or are specific parameters not necessary?
    Thanks.
    Best regards.
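For ordinary clients the same hard/tcp/NFSv3 style of options is a reasonable starting point, with rsize/wsize at 128 KB and then measured and adjusted on your 10GbE network. The server and share names below are invented for illustration:

```
# Solaris 10 /etc/vfstab entry (one line; hypothetical names):
zfssa:/export/share1  -  /mnt/share1  nfs  -  yes  rw,bg,hard,rsize=131072,wsize=131072,proto=tcp,vers=3

# Oracle Linux 6 /etc/fstab entry:
zfssa:/export/share1  /mnt/share1  nfs  rw,bg,hard,nointr,rsize=131072,wsize=131072,tcp,nfsvers=3,timeo=600  0 0
```

These mirror the Exadata/SuperCluster examples quoted above minus the database-specific tuning; they are a sketch to benchmark from, not a guaranteed optimum.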

  • Where can I find the latest research on Solaris 10, zfs and SANs?

    I know Arul and Christian Bilien have done a lot of writing about storage technologies as they relate to Oracle. Where are the latest findings? Obviously there are some exotic configurations that can be implemented to optimize performance, but is there a set of "best practices" that generally works for "most people"? Is there common advice for folks using Solaris 10 and zfs on SAN hardware (i.e., EMC)? Does double-striping have to be configured with meticulous care, or does it work "pretty well" with just some rough guesses?
    Thanks much!

    Hello,
    I have a couple of links that I have used:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
    These are not exactly new, so you may have encountered them already.
    List of ZFS blogs follows:
    http://www.opensolaris.org/os/community/zfs/blogs/
    Again, there does not seem to be huge activity on the blogs featured there.
    jason.
    http://jarneil.wordpress.com

  • Will Solaris 10 ZFS support Oracle 9i?

    Hi friends,
    We are currently using Solaris 8 with Oracle 9i. Now we want to upgrade to Solaris 10 with ZFS (zettabyte file system) and Oracle 9i. Will this combination work?
    Thanks & Regards


  • Installation and creation of first database in Oracle on Solaris 5.10

    hi all,
    I have dug around and found that /etc/system is no longer used for these settings in Solaris 10; the prctl command is now invoked to set up the kernel parameters.
    I have solaris 10 - oracle 10g - Intel Processor.
    Can anyone point me in the right direction, or provide a link to the doc I need, to set up the correct kernel parameters in my version of Solaris and get dbca to create an initial database? I keep getting the "Oracle not available" and shared-memory error messages on initial dbca creation of the database, and I know it is because the kernel parameters are not set right. I have followed the instructions found under the documentation here, but it gives me argument errors when issuing:
    prctl -n project.max-sem-nsems -v 512 -r -i project default
    All other parameters work fine, and when I check them they show up as correct. The only doc I could find was for SPARC. I didn't know if this makes a difference in setting the parameters, but I assume it does.
    Thanks.

    Try this link
    http://www.dbspecialists.com/presentations/oracle10gsolaris.html
    Regards
    Raman

  • Oracle 9i Database and Solaris 10 Zones

    Can an existing Oracle 9i database be moved into a new zone? The database resides on its own filesystem. The server is running Solaris 10, the zones are not set up yet, but Oracle is installed and the 2 databases are up and running.
    Basically there are 2 existing oracle 9i databases, and I want to setup 2 zones, where none other than the default global exist right now, and have each database in a zone.
    Thanks in advance.

    You need to do the following -
    Configure loopback mount points from the global zone into the local zone through zonecfg (one for Oracle binary, other for Oracle data). I am assuming that you want to share the same Oracle binary location between all the zones. The Oracle database mounts must be separate & make sure that you put them in the respective zone's config only.
    Create an oracle user with dba group in both the zones. It's best if the user IDs & group IDs across all the zones & global zone match.
    Stop both the database instances in the global zone.
    zlogin to a zone, su as oracle and startup the instances.
    Hope that works!
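The loopback-mount step could look roughly like this from the global zone. The zone name and paths are placeholders; substitute your own Oracle home and datafile locations:

```shell
# Hypothetical zone name and paths. Shared Oracle binaries go in
# read-only, this zone's own datafiles read-write:
zonecfg -z dbzone1 <<'EOF'
add fs
set dir=/u01/app/oracle
set special=/u01/app/oracle
set type=lofs
add options [ro,nodevices]
end
add fs
set dir=/u02/oradata/db1
set special=/u02/oradata/db1
set type=lofs
add options [rw,nodevices]
end
EOF

# Reboot (or boot) the zone so the new mounts appear inside it:
zoneadm -z dbzone1 reboot
```

This is a sketch of the zonecfg mechanics only; repeat with the second database's paths for the second zone.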

  • Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?

    Hi all,
    I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC clusterware
    for the clustering software and raw devices for shared storage and Veritas VxFs 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses the /dev/rdsk/cWtXdYsZ naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

    You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
    Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a positive benefit over Clusterware because SC provides an automatically managed, consistent name space. Clusterware on its own forces you to manage either the symbolic links (or worse mknods) to create a consistent namespace!
    So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager, you would use /dev/md/<setname>/rdsk/dXXX, and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
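Pointing ASM at those devices then comes down to the discovery string. A minimal sketch, with the slice number a placeholder for whichever slice you present to ASM:

```
# In the ASM instance's init.ora/spfile (s6 is a hypothetical slice):
*.asm_diskstring='/dev/did/rdsk/d*s6'
```

The devices matched by the string must be owned by the oracle user and readable/writable by it on every node, or ASM discovery will silently skip them.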
    Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
    Tim
    ---

  • Differences between Oracle JDBC Thin and Thick Drivers

    If any body is looking for this information...
    ============================================================
    I have a question concerning the Oracle JDBC thin vs. thick drivers
    and how they might affect operations from an application perspective.
    We're in a Solaris 8/Oracle 8.1.7.2 environment. We have several
    applications on several servers connecting to the Oracle database.
    For redundancy, we're looking into setting up TAF (transparent
    application failover). Currently, some of our apps use the Oracle
    JDBC thin drivers to talk to the database, with a connection
    string like this:
    jdbc:oracle:thin:@host:port:ORACLE_SID
    In a disaster recovery mode, where we would switch the database
    from one server to another, the host name in the above string
    would become invalid. That means we have to shut down our application
    servers and restart them with an updated string.
    Using the Oracle OCI (thick) driver, though, allows us to connect
    to a Net8 service instead of a specific server:
    jdbc:oracle:oci8:@NET8_SERVICE_NAME
    Coupled with the FAILOVER=ON option configured in Net8, it is
    then possible to direct a connection from the first server to
    the failover database on another server. This is exactly what
    we would like to do.
    My question is, from an application perspective, how is the Oracle
    thick driver different from the thin driver? If everything
    else is "equal" (i.e. the thick driver is compatible with the
    app servers) would there be something within the thick/OCI
    driver that could limit functionality vs. the thin driver?
    My understanding, which obviously is sketchy, is that the thick
    driver is a superset of the thin driver. If this is the case,
    and for example if all database connections were handled through
    a configuration file with the above OCI connection string, then
    theoretically the thick driver should work.
    ============================================================
    In Oracle's case, they provide a thin driver that is a 100% Java driver for client-side use without the need for an Oracle installation (maybe that's why we need to input the server name and port number of the database server). It is platform independent, and has good performance and some of the features.
    The OCI driver, on the other hand, is not pure Java: it requires an Oracle client installation and is platform dependent, but it is faster and has the complete list of features.
    ========================================================
    I hope this is what you expect.
    JDBC OCI client-side driver: This is a JDBC Type 2 driver that uses Java native methods to call entrypoints in an underlying C library. That C library, called OCI (Oracle Call Interface), interacts with an Oracle database. The JDBC OCI driver requires an Oracle (7.3.4 or above) client installation (including SQL*Net v2.3 or above) and all other dependent files. The use of native methods makes the JDBC OCI driver platform specific. Oracle supports Solaris, Windows, and many other platforms. This means that the Oracle JDBC OCI driver is not appropriate for Java applets, because it depends on a C library to be preinstalled.
    JDBC Thin client-side driver: This is a JDBC Type 4 driver that uses Java to connect directly to Oracle. It emulates Oracle's SQL*Net Net8 and TTC adapters using its own TCP/IP based Java socket implementation. The JDBC Thin driver does not require Oracle client software to be installed, but does require the server to be configured with a TCP/IP listener. Because it is written entirely in Java, this driver is platform-independent. The JDBC Thin driver can be downloaded into any browser as part of a Java application. (Note that if running in a client browser, that browser must allow the applet to open a Java socket connection back to the server.)
    JDBC Thin server-side driver: This is another JDBC Type 4 driver that uses Java to connect directly to Oracle. This driver is used internally by the JServer within the Oracle server. This driver offers the same functionality as the client-side JDBC Thin driver (above), but runs inside an Oracle database and is used to access remote databases. Because it is written entirely in Java, this driver is platform-independent. There is no difference in your code between using the Thin driver from a client application or from inside a server.
    ======================================================
    How does one connect with the JDBC Thin Driver?
    The JDBC thin driver provides the only way to access Oracle from the Web (applets). It is smaller and faster than the OCI drivers, and doesn't require pre-installed Oracle client software.
    import java.sql.*;
    class dbAccess {
      public static void main (String args []) throws SQLException {
        DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
        Connection conn = DriverManager.getConnection
          ("jdbc:oracle:thin:@qit-uq-cbiw:1526:orcl", "scott", "tiger");
          // @machineName:port:SID, userid, password
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION");
        while (rset.next())
          System.out.println (rset.getString(1)); // Print col 1
        stmt.close();
        conn.close();
      }
    }
    How does one connect with the JDBC OCI Driver?
    One must have Net8 (SQL*Net) installed and working before attempting to use one of the OCI drivers.
    import java.sql.*;
    class dbAccess {
      public static void main (String args []) throws SQLException {
        try {
          Class.forName ("oracle.jdbc.driver.OracleDriver");
        } catch (ClassNotFoundException e) {
          e.printStackTrace();
          return;
        }
        Connection conn = DriverManager.getConnection
          ("jdbc:oracle:oci8:@qit-uq-cbiw_orcl", "scott", "tiger");
          // or oci7 @TNSNames_Entry, userid, password
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION");
        while (rset.next())
          System.out.println (rset.getString(1)); // Print col 1
        stmt.close();
        conn.close();
      }
    }
    =================================================================

    Wow, not sure what your question was, but there sure was a lot of information there...
    There really is only one case where failover occurs, and it would not normally be in a disaster recovery situation, where you define disaster recovery as the obliteration of your current server farm, network and conceivably the operational support staff. That would require a rebuild of your servers, network etc. and isn't something done with software.
    Failover is normally used for high availability: it takes over in case of hardware server failure, or when your support staff wants to do maintenance on the primary server.
    Using the thin or thick driver should have ZERO effect on a failover. Transparent failover will give the secondary server the same IP as the primary, therefore the hostname will still point to the appropriate server. If you are doing this wrong, then you will have to point all your applications to a new IP address. This is something you should tell your management is UNACCEPTABLE in a failover situation, since it is almost sure to fail to fail over.
    You point out that you are providing the TNSNAME, rather than the HOSTNAME, when using the thick driver. That's true within your application, but that name is resolved to either a HOSTNAME or an IP ADDRESS before it is sent to the appropriate Oracle server/instance. It is resolved using either a Names server (same idea as a DNS server, but for Oracle), or by looking at a TNSNAMES file. Since TNSNAMES files proliferate like rabbits within an organization, you don't want a failover that makes you find and switch all the entries, so you must come up with a failover that does not require it.
    Don't know if this will help, but this shows the communication points.
    THIN DRIVER
    client --> dns --> server/port --> SID
    THICK DRIVER
    client --> names server --> dns --> server/port --> SID
    client --> tnsnames     --> dns --> server/port --> SID
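For reference, a tnsnames.ora entry of the kind being discussed might look like this. The host and service names are invented, and the exact FAILOVER_MODE settings depend on your TAF requirements:

```
NET8_SERVICE_NAME =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = mydb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )
```

With an address list like this, the name resolution happens once per connection attempt, so neither the thin nor the thick driver's application code has to know which host is currently primary.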

  • 10.7 to R11i On Solaris and Custom Interfaces

    Hi,
    We are planning to migrate from 10.7 to R11i on Solaris machine. We have developed custom interfaces with Oracle AP, AR and INV modules. We have GL, AP, AR, Cash Management, Inventory and purchasing modules implemented. We would really appreciate your help on following:
    1. How correct are sizing guidelines specified in R11i Installation manual?
    2. Should we have database and application servers on same machine? If not what type of application server is recommended i.e. NT or Solaris. Any performance issues in either approach?
    3. If we go for single node installations do we have to generate FMX files for custom forms under Solaris?
    4. Is it recommended to have APPL_TOP on one disk or we can split on multiple disks.
    5. Which browser is recommended i.e. Netscape Ver x.x or Internet Explorer x.x?
    6. What sorts of problems were encountered for custom interfaces?
    7. Our development server supports 10.7, but we are sure it can't handle the R11i upgrade. Is it recommended to carry out the upgrade on the production machine, or should we have a separate machine for this project?
    8. Any other upgrade issues relevant to the R11i upgrade on Solaris?
    We would appreciate it if someone could share their upgrade problems and experiences.
    Thanks


  • Solaris 10 / ZFS Parameter for Continuous Write Loads

    I have been spending some time working on tuning continuous write load tuning on a Solaris 10/ZFS based system. One of the issues I observed is a drop in throughput every several seconds. Removing the fsync() calls from JE (you would never want to do this normally) smoothed out the dips in throughput which pointed to IO issues.
    My colleague Sam pointed me at this discuss-zfs entry. And indeed, adding
    set zfs:zfs_write_limit_override = 0x2000000
    to /etc/system (with the fsync() calls put back into JE) does in fact seem to smooth things out (0x2000000 bytes = 32 MiB). Of course, you'll want to adjust that parameter based on the size of on-board disk cache you have.
    One way to get a rough indication of whether this is a potential problem is to use iostat and see if there are IO spikes.
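A minimal way to watch for those spikes on Solaris (the flags differ slightly on Linux, where `iostat -xz 5` is the rough equivalent):

```shell
# Per-device throughput and service times every 5 seconds; look for
# periodic bursts in kw/s and wsvc_t that line up with the throughput dips:
iostat -xnz 5
```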
    Charles Lamb

    # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
    fmthard: New volume table of contents now in place.

    Most likely the rpool detection on a boot disk is less than optimal. I'd start over: do a dd if=/dev/zero of=/dev/rdsk/c1t2d0s2 count=100, then do a format -e and add partition data that matches c1t0d0s2 (I know fmthard is supposed to do this, but zfs boot is a strange beast that straddles the old fdisk and new EFI labeling, and fmthard may not have been updated correctly). Then try the zpool attach. If this succeeds, make sure you do an installgrub on the second disk so it will boot.

  • Where can I get Oracle for solaris x86

    Besides click the ContactOTN in web page to submit the question
    about this to them, I follow the link
    http://platforms.oracle.com/sun/index_sun.htm and click "Ask Us"
    link to send message to them too.
    Am I wrong ? Can you give me the correct way to ask the question
    KK
    jeamaro (guest) wrote:
    : KK (guest) wrote:
    : : I read through the news group and find some mention there is
    : : a version Oracle for solaris x86. Then I go to
    : : http://platforms.oracle.com/sun/index_sun.htm too.
    : : Can anyone tell me the straight method to get it ?
    : : I have already sent several mail to ask about that.
    : : There is nothing return.
    : : KK
    : There is a version of Oracle for Solaris x86. Although it has
    : not been available thru OTN. To which email ID have you been
    : sending your questions?
    : Regards


  • Max File size in UFS and ZFS

    Hi,
    Can anyone share what the maximum file size is that can be created on Solaris 10 UFS and ZFS?
    What is the maximum file size when compressing with tar and gzip?
    Regards
    Siva

    From 'man ufs':
    A sparse file can have a logical size of one terabyte. However, the actual amount of data that can be stored in a file is approximately one percent less than one terabyte because of file system overhead.
    As for ZFS, well, it's a 128-bit filesystem, and the maximum size of a single file is 2^64 bytes, which is 16 exbibytes (i.e. 16,384 pebibytes).
    http://www.sun.com/software/solaris/ds/zfs.jsp
    .7/M.
    Edited by abrante on Feb 28, 2011 7:31 AM: fixed layout and 2^64

  • ORACLE GRID OMS and agent service issue..

    Dear all,
    I am a Solaris admin; please help me. I am facing a problem discovering an Oracle database through the Oracle agent.
    We installed Oracle DB 10g R2 on a standalone Solaris server and have installed OMS; the hostname is hostname01.
    We have installed the OMS client on a Windows server in a domain, host.domain.com.
    Is it possible to install and configure OMS and the client in a non-domain (no DNS) environment?
    We are able to ping by IP address from Windows to Solaris and vice versa, but nslookup fails because the Sun server is not in that domain; nslookup only searches the DNS server, so it won't resolve the hostname to an IP address.
    What is the procedure to configure OMS and the client in a non-DNS environment?

    update the hosts file on all (client, OMS server, db server) sides to be able to ping by name from any server.
    eg.
    solaris /etc/hosts
    10.121.121.3 <servername>.<domain> <alias>
    retry your ping test by name.
