Element-dependent mapping with Oracle FKs

We are experiencing some problems using the element-dependent extension; the problem looks to be that Kodo tries to remove the parent object before the children.
Does this extension work with foreign keys?
Thanks in advance.

Kodo flushes persistent objects in the order that they join the JDO
transaction. So if the parent object joins the JDO transaction before the
child objects, or the child objects only join the transaction when the
dependent extensions are evaluated, the parent object will be deleted
first.
We recommend using deferred foreign keys whenever possible, or turning
foreign key constraints off. Kodo 3.0, however, will include the ability
to order SQL to meet all foreign key constraints.
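
For reference, deferring the foreign key is a one-time DDL change on the child table. A minimal sketch, assuming a hypothetical PARENT/CHILD schema and placeholder connection details (none of these names are from the original thread); with the constraint deferred, Oracle checks it at commit rather than per statement, so Kodo's delete order within the transaction no longer matters:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DeferredFkExample {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
            Statement stmt = con.createStatement();
            // DEFERRABLE INITIALLY DEFERRED: the constraint is checked at
            // commit time, so a parent row may be deleted before its child
            // rows within the same transaction.
            stmt.executeUpdate(
                "ALTER TABLE child ADD CONSTRAINT fk_child_parent " +
                "FOREIGN KEY (parent_id) REFERENCES parent (id) " +
                "DEFERRABLE INITIALLY DEFERRED");
            stmt.close();
            con.close();
        }
    }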

Similar Messages

  • How to connect google maps with oracle

Hello people,
I read an article saying there is a way to connect Google Maps with Oracle.
Can anyone give us a lesson or more information on how to do that?
Regards.

I guess using Google should be the best way to find the solution for this:
    http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=How+to+connect+google+maps+with+oracle
    HTH
    Aman....

  • Key Mapping with Oracle Reports

    I am currently trying to run report requests using the Oracle Report Server however the problem that I have is that we need to send database username/password information over the network. From a security perspective this is not acceptable.
    One way we were going to get around this is to "wrap" the Oracle Report servlet with another servlet, so the client browser sends the wrapper servlet information such as report location and report name, and the
    wrapper servlet adds in additional information such as the username and
    password, eliminating the need for the client application to know this information.
    The problem is that the URL returned to the client when the report is generated contains the username/password, which again is not
    acceptable. As I have seen mention that Oracle Reports can be run over the Internet,
I assume that there is a mechanism for hiding this information. Do you know how to achieve this?
    I am keen to just hard code the user name, password, and database connection into a key mapping file to resolve this security issue. However we are having trouble getting this method to work with the Oracle Report Servlet. We are using WebLogic Server to serve these servlets.

    Hello,
    Here is an extract from Reports documentation:
    Oracle9iAS Reports Services Publishing Reports to the Web
    Release 2 (9.0.2)
    Part Number A92102-01
    3.3.2 Reloading the Key Map File
    Use the RELOAD_KEYMAP parameter to specify whether the key map file
    (cgicmd.dat) should be reloaded each time the servlet receives a request.
    For example:
    RELOAD_KEYMAP=yes
    This is useful if you frequently make changes to the map file and want the
    process of loading your changes to be automatic. Runtime performance will be
    affected according to how long it takes to reload the file.
    Typically, this parameter is set to no in a production environment and yes in
    a testing environment.
    Regards
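
    To make the key mapping concrete: a cgicmd.dat entry bundles the sensitive arguments under a key, and the browser URL then carries only the key name. The entry below is a sketch with placeholder server, report, and credentials, not a tested configuration:

        myreport: server=repserver report=sales.rdf userid=scott/tiger@orcl destype=cache desformat=PDF

    The client would then request something like http://yourhost/reports/rwservlet?myreport, so the username/password never leave the key map file on the server.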

  • Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?

    Hi all,
I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC Clusterware
for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses /dev/rdsk/CwTxDySz naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

    You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
    Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a positive benefit over Clusterware because SC provides an automatically managed, consistent name space. Clusterware on its own forces you to manage either the symbolic links (or worse mknods) to create a consistent namespace!
So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager, you would use /dev/md/<setname>/rdsk/dXXX and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
    Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
    Tim
    ---
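
    To make the ASM side concrete, a hedged sketch assuming the Solaris Cluster DID names Tim describes (the slice and disk numbers are placeholders for your own LUN layout):

        # in the ASM instance parameter file
        *.asm_diskstring = '/dev/did/rdsk/d*s6'

        -- then, from the ASM instance
        CREATE DISKGROUP data NORMAL REDUNDANCY
          DISK '/dev/did/rdsk/d3s6', '/dev/did/rdsk/d4s6';

    The same pattern applies to the /dev/md/<setname>/rdsk/... and /dev/vx/rdsk/... paths if you are on SVM or CVM/VxVM instead.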

  • CMP Bean's Field Mapping with Oracle Unicode Datatypes

    Hi,
I have a CMP bean which maps to an RDBMS table, and the table has some Unicode datatypes such as NVARCHAR and NCHAR.
Now I was wondering how the OC4J/Oracle EJB container handles queries with Unicode datatypes.
What do I have to do in order to properly develop and deploy a CMP bean which has fields mapped onto the database Unicode fields?
    Regards
    atif

Based on the sun-cmp-mapping descriptor file entry
<schema>Rol</schema>
a file called Rol.schema is expected to be packaged with the ejb.jar. Did you run capture-schema after you created your table?
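
    If it helps, capture-schema is run against the live database to (re)generate that file. A sketch with placeholder connection details, assuming the standard options of the Sun ONE Application Server version of the tool:

        capture-schema -dburl jdbc:oracle:thin:@dbhost:1521:ORCL \
            -username scott -password tiger \
            -driver oracle.jdbc.driver.OracleDriver \
            -schemaname SCOTT -out Rol.schema

    The resulting Rol.schema then has to be packaged alongside the descriptor in the ejb.jar.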

  • Print Map with Oracle Map

I am using Oracle MapViewer 10.1.3.3. In the Oracle Maps demo, there is a sample for map printing that uses style classes (noscreen and noprint) on HTML divs. However, my application is using Oracle ADF Faces, which doesn't accept a "class" attribute in the tag. Does anyone know how I could print the map image only with ADF Faces?
    Thanks!

    FOP : [http://www.oracle.com/technology/products/database/application_express/html/configure_printing.html]
    Cocoon: [http://carlback.blogspot.com/2007/03/apex-cocoon-pdf-and-more.html]
    Jasperreports: [http://blog.dunull.org/?page_id=70]
    Thank you,
    Tony Miller
    Webster, TX
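
    One more avenue that may be worth checking (my assumption, not confirmed by the posters above): ADF Faces components expose a styleClass attribute that is rendered as the HTML class attribute, so the demo's noprint/noscreen classes can still be attached to a container, e.g.:

        <af:panelGroup styleClass="noprint">
          <!-- toolbar and other controls to hide in the print view -->
        </af:panelGroup>

    Combined with the demo's @media print CSS rules, this should hide everything except the map image when printing.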

  • Problems with Drive Mapping using Oracle Drive

It seems as though we are having some trouble with Oracle Drive. The install was a piece of cake, as was the configuration. The problem arises when, at times, we connect with the proper credentials and Oracle Drive logs the user in, but then the appropriate drive does not map. Nothing shows up under "My Computer". The window will launch for the user to use Oracle Drive, but they get error messages including "Cannot support long file names." (AKA, anything longer than 8 characters).
Has anyone else come across these issues? Any ideas how to resolve them? The issues we get with the long file names seem to occur only when the drive doesn't properly map.
    Any thoughts would be helpful.
    Also, I have heard rumors of a Production version of Oracle Drive. Anyone have some news on that?

    We are very pleased to announce the Production Release 10.1.2.0 of Oracle Drive.
    You can download the software from here:
    http://www.oracle.com/technology/software/products/cs/index.html
    Note:
    - This version is production for English only. Other languages are Beta.
    - Oracle Drive 10.1.2.1 for ALL languages is expected soon.
    Find more information on Installation and Configuration as well as a Viewlet that shows Oracle Drive in Action here:
    http://www.oracle.com/technology/products/ias/portal/content_management_10gr2.html
    regards,
    Christian

  • GetObject(int, Map) not working with oracle JDBC

    I'm using Oracle and I'd like to get Date fields as Timestamps since an Oracle date column includes time. I'm trying to use getObject(int, Map) to map the types to Java objects, but it's not working. This is my code:
     public static final HashMap oracleMap = new HashMap();
     static {
          try {
               // Map Oracle column type names to the desired Java classes.
               oracleMap.put("DATE", Class.forName("java.sql.Timestamp"));
               oracleMap.put("NUMBER", Class.forName("java.math.BigDecimal"));
               oracleMap.put("VARCHAR2", Class.forName("java.lang.String"));
               oracleMap.put("CLOB", Class.forName("java.sql.Clob"));
               oracleMap.put("LONG", Class.forName("java.lang.String"));
          } catch (Exception e) {
               IllegalStateException ise = new IllegalStateException("Oracle type mapping failed.");
               ise.initCause(e);
               throw ise;
          }
     }
And:
               BASE.println("rs.getClass().getName(): " + rs.getClass().getName());
               BASE.println("rs.getMetaData().getColumnTypeName(i): " + rs.getMetaData().getColumnTypeName(i));
               // Use the custom type map only for Oracle result sets.
               if (rs.getClass().getName().startsWith("oracle")) valObj = rs.getObject(i, DOMTools.oracleMap);
               else valObj = rs.getObject(i);
               BASE.println("valObj.getClass().getName(): " + valObj.getClass().getName());
Here's a snippet of my output that illustrates the code not working:
    rs.getClass().getName(): oracle.jdbc.driver.OracleResultSetImpl
    rs.getMetaData().getColumnTypeName(i): DATE
    valObj.getClass().getName(): java.sql.Date
    Anyone know if this is a driver issue? Anyone had luck doing this with Oracle?
    Thanks.

Well, I'd like it to be java.sql.Timestamp instead of java.sql.Date. My actual type is an Oracle "DATE". Are you saying the Map key for this would be "TIMESTAMP" and that by default it maps to java.sql.Date? Doesn't seem like that makes sense.

I am saying that "TIMESTAMP" is not an Oracle value but is instead a JDBC value. Thus it is up to the driver, not you, to determine what Oracle types map to the JDBC type.
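
    A practical workaround, given that java.sql type maps are only applied to SQL user-defined types (STRUCT/DISTINCT), not to built-in types like DATE: detect DATE columns from the metadata and ask for a Timestamp explicitly. A minimal sketch (a hypothetical helper, not code from the thread):

        // Returns Oracle DATE columns as java.sql.Timestamp so the
        // time-of-day portion is preserved; everything else is unchanged.
        private static Object readValue(java.sql.ResultSet rs, int i) throws java.sql.SQLException {
            if ("DATE".equals(rs.getMetaData().getColumnTypeName(i))) {
                return rs.getTimestamp(i);
            }
            return rs.getObject(i);
        }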

  • MTL Table name mapped with IC_ITEM_MST_B in oracle apps r12

    Hi Experts,
Can anyone tell me the MTL table name mapped to the IC_ITEM_MST_B table in Oracle Apps, with all the columns?
    thanks,

    Response to: I don't see this option "Periodic Sequences in Format" under "Payment Instruction Format" table. I can see only Payment File Information.
    You are maybe missing this (from Implementation Guide):
    "Note: If no payment system is selected or entered for the Payment
    System field in the Payment System subtab of the Update Payment
    Process Profile page, then the Periodic Sequences in Format region is
    not displayed."
    The payment system must be selected at the time you create the profile. It does not seem to allow adding afterwards.
Edited by: user11974306 on Jan 25, 2013 1:49 PM

  • Memory leaks in C++ (OCCI) for OTT-generated C++ mapping objects with Oracle

Our application is in C++ and interfaces with an Oracle 10g database using OCCI. The mapping objects are created using OTT; the type of these mapping objects is PersistentObject. Our development machine (Solaris 8) and database machine are different, so we have installed "instantclient-basiclite-solaris6432-10.2.0.3-20070101.zip" to use OTT and OCCI on the development machine. We are running Purify on our application, and Purify is reporting memory leaks on these OTT-generated POObjects (the type used is transient objects), despite the fact that we are deleting these objects appropriately.

Since OTT-generated code uses STL data structures (vector, list, etc.), and STL implementations are not standard across platforms, you can ignore these warnings.

  • Mapping LDAP Role in Building Your First Process with Oracle BPM 11g

I'm working on "Building Your First Process with Oracle BPM 11g". I'm at the end of the step that assigns a user for the requester. The problem is in the identity lookup: "Realm" is empty for Remote_WLServer.
Servers are up and running. The demo user community has been loaded - I can see the list of users and groups in the administration server under myrealm. We haven't done much since the SOA Suite 11g installation; I'm probably the first one who has used this. I wonder if we have a missing setup. Can you tell me what's missing? Appreciate your help in advance.

I get this error message when I click the gear icon:
"Server exception is : Connection refused from server"
Here is the result of testing the Remote_WLServer connection. Does this cause the issue?
    Testing JSR-160 Runtime ... failed.
    Cannot establish connection.
    Testing JSR-160 DomainRuntime ... skipped.
    Testing JSR-88 ... skipped.
    Testing JSR-88-LOCAL ... skipped.
    Testing JNDI ... skipped.
    Testing JSR-160 Edit ... skipped.
    Testing HTTP ... success.
    Testing Server MBeans Model ... skipped.
    Testing HTTP Authentication ... success.
    2 of 9 tests successful.
I have installed JDeveloper 9i, 10g, and 11g on my laptop. SOA is installed on Linux.

  • Problems setting up Weblogic Server 9.2 with Oracle AQ

We are in the process of upgrading from WLS81 to WLS92 and I'm currently trying to set up the environment. We have applications communicating with 3 different JMS servers: Sonic, WMQ, and Oracle AQ. For both Sonic and WMQ the connection seems to work fine. We get an active application, and the beans connecting to queues on those servers report as 'connected'.
For Oracle AQ I must be doing something wrong, but I can't for the life of me figure out what it is.
Our setup is as follows:
We have a domain-scoped startup class that binds the AQ queues and a custom QueueConnectionFactory to the WL default context, giving them a name like aqadapter-AQ_ARE_PING.
We have defined a System Resource, within which we have defined AQ as a Foreign Server. Within this foreign server we have each queue mapped to the queue names bound through the startup class. Likewise we have a QCF mapped to the QCF bound through the startup class.
The application contains message-driven beans which are supposed to be listening to the AQ queues. In weblogic-ejb-jar.xml each target queue is mapped within each bean to the same name mapped within the Foreign Server element.
All this results in MDBs that report as 're-connecting' and 'initializing', and the following message in Server1.stdout for each MDB:
<Mar 28, 2007 4:27:58 PM CEST> <Warning> <EJB> <BEA-010061> <The Message-Driven EJB: ARE_Ping is unable to connect to the JMS destination: AQ_ARE_PING. The Error was:
javax.jms.InvalidDestinationException: JMS-125: Invalid Queue specified>
Any ideas what I am doing wrong? It seems to me that the settings are as similar to the way they are set up on WLS81 as we could get them.
Has anyone reading this done this before, setting up WLS92 or WLS90 to interact with Oracle AQ?
Regards,
Frode Laukus
Edited by laukus at 03/28/2007 7:53 AM

Hi Frode
Have you managed to find a solution to this issue?
We are trying to do something very similar and encountering all sorts of issues.
Are you using the DIPSStartup classes to register your queues & qcf with the WL JNDI? I haven't managed to get these classes to work with the AQJmsSession in the latest release of the aqapi13.jar files.
Hopefully we can sort out a workable solution between us :)
Andy
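
    For comparison, the binding step such a startup class performs usually boils down to something like the sketch below (placeholder host, SID, and JNDI names; this is not Frode's actual code):

        import javax.jms.QueueConnectionFactory;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import oracle.jms.AQjmsFactory;

        public class AqStartup {
            public void bind() throws Exception {
                // Build an AQ JMS connection factory against the Oracle instance.
                QueueConnectionFactory qcf =
                    AQjmsFactory.getQueueConnectionFactory("dbhost", "ORCL", 1521, "thin");
                // Bind it into the WebLogic default JNDI context so the
                // Foreign Server definition can reference it by name.
                Context ctx = new InitialContext();
                ctx.rebind("aqadapter-QCF", qcf);
            }
        }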

  • Domain Value Mapping with Text file

Hi,
I have done Domain Value Mapping with XML file to XML file and it is working fine.
But in the case of text file to text file it is not fully working. That is, if the city name is the
first field, the domain value mapping works fine.
    INPUT:
    Erode, Mahes, 22
    Coimbatore, Veera, 22
    OUTPUT:
    ED, Mahes, 22
    CBE, Veera, 22
But if I change the city name to the second column it is not working; the problem is that
it does not work for any column after the first.
    INPUT:
    Mahes, Erode, 22
    Veera, Coimbatore, 22
    OUTPUT:
    Mahes,, 22
    Veera,, 22
    The input Text files are delimited by "comma" and optionally enclosed by "space".
Has anyone attained DVM with text files successfully?
    Please help me.

    Thank you for the reply Abhi
Text to text means that instead of giving the input file as an XML file, I am giving the input as a simple .txt file, and I also want the output in .txt format.
It is only for the text file that it is not working; for XML file to XML file it works fine.
As for your other query, about whether I am providing the correct parameters for "lookup-dvm",
here is the XSL file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <?oracle-xsl-mapper
    <!-- SPECIFICATION OF MAP SOURCES AND TARGETS, DO NOT MODIFY. -->
    <mapSources>
    <source type="WSDL">
    <schema location="TextInput1.wsdl"/>
    <rootElement name="Root-Element" namespace="http://TargetNamespace.com/TextInput1"/>
    </source>
    </mapSources>
    <mapTargets>
    <target type="WSDL">
    <schema location="TextOutput1.wsdl"/>
    <rootElement name="Root-Element" namespace="http://TargetNamespace.com/TextInput1"/>
    </target>
    </mapTargets>
    <!-- GENERATED BY ORACLE XSL MAPPER 10.1.3.3.0(build 070615.0525) AT [TUE JUL 15 15:31:55 IST 2008]. -->
    ?>
    <xsl:stylesheet version="1.0"
    xmlns:bpws="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
    xmlns:plt="http://schemas.xmlsoap.org/ws/2003/05/partner-link/"
    xmlns:pc="http://xmlns.oracle.com/pcbpel/"
    xmlns:ehdr="http://www.oracle.com/XSL/Transform/java/oracle.tip.esb.server.headers.ESBHeaderFunctions"
    xmlns:ns0="http://www.w3.org/2001/XMLSchema"
    xmlns:jca="http://xmlns.oracle.com/pcbpel/wsdl/jca/"
    xmlns:hwf="http://xmlns.oracle.com/bpel/workflow/xpath"
    xmlns:imp1="http://TargetNamespace.com/TextInput1"
    xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/file/TextInput1/"
    xmlns:xp20="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.Xpath20"
    xmlns:xref="http://www.oracle.com/XSL/Transform/java/oracle.tip.xref.xpath.XRefXPathFunctions"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ora="http://schemas.oracle.com/xpath/extension"
    xmlns:ids="http://xmlns.oracle.com/bpel/services/IdentityService/xpath"
    xmlns:orcl="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"
    xmlns:hdr="http://xmlns.oracle.com/pcbpel/adapter/file/"
    xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/file/TextOutput1/"
    exclude-result-prefixes="xsl plt pc ns0 jca imp1 tns hdr ns1 bpws ehdr hwf xp20 xref ora ids orcl">
<xsl:template match="/">
  <imp1:Root-Element>
    <xsl:for-each select="/imp1:Root-Element/imp1:Leaf-Element">
      <imp1:Leaf-Element>
        <imp1:C1>
          <xsl:value-of select="imp1:C1"/>
        </imp1:C1>
        <imp1:C2>
          <xsl:value-of select='orcl:lookup-dvm("Citinames","Long",imp1:C2,"Short","")'/>
        </imp1:C2>
        <imp1:C3>
          <xsl:value-of select="imp1:C3"/>
        </imp1:C3>
      </imp1:Leaf-Element>
    </xsl:for-each>
  </imp1:Root-Element>
</xsl:template>
</xsl:stylesheet>
Here I am checking the DVM function for the second column, and it is not working; I have given the inputs and outputs in my first message.
I have another question: since, as you said, both import and export of the DVM are always in XML format, I wonder whether lookup-dvm works for XML files only and not for text files.
But in my case, with text files it works for the first column but not for the subsequent columns.
    Thanks.
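
    One cause worth ruling out (a guess on my part, not confirmed in this thread): with comma-delimited native-format files, every field after the first can come through with a leading space (" Erode" rather than "Erode"), which would make the DVM lookup miss for every column except the first. Trimming the value before the lookup would test that theory:

        <imp1:C2>
          <xsl:value-of select='orcl:lookup-dvm("Citinames","Long",normalize-space(imp1:C2),"Short","")'/>
        </imp1:C2>

    normalize-space() is standard XPath 1.0, so it is safe to use inside the mapper-generated stylesheet.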

  • Using OWB mappings with Oracle CDC/Streams and LCRs

    Hi,
Has anyone worked with Oracle Streams and OWB? We're looking to leverage Streams to update our data warehouse by applying changes from the transactional/source DB. At some point we seem to remember hearing that OWB could leverage Streams, perhaps even using the Logical Change Records (LCRs) from Streams as input to mappings?
    Any thoughts much appreciated.
    Thanks,
    Jim Carter

    Hi Jim,
    We've built a fairly complex solution based on streams. We wanted to break up the various components into separate entities so that any network failure or individual component failure wouldn't cause issues for the other components. So, here goes:
    1) The OLTP source database is streaming LCR's to our Datawarehouse where we keep an operational copy of production, updated daily from those streams. This allows for various operational reports to be run/rerun in a given day with the end-of-yesterday picture without impacting the performance on the source system.
2) Our apply process on the datamart side actually updates TWO copies of data. It does a default apply to our operational copy of production, and each of those tables has triggers that put a second copy of the data into daily partitioned tables. So, yesterday's partition has only the data that was actually changed yesterday. After the default apply, we walk the Oracle dependency tree to fill in all of the supporting information so that yesterday's partition includes all the data needed to run our ETL queries for that day.
Example: Suppose yesterday an address for a customer was updated. Streams only knows about the change to the address record, so the automated process would only put that address record into the daily partition. The dependency walk fills in the associated customer, date of birth, etc. data into that partition so that the partition holds all of the related data to that address record for updates without having to query against the complete tables. By the same token, a change to some other customer info will backfill in the address record for this customer too.
Now, our ETL queries run against views created against these partitioned tables so that they are only looking at the data for that day (the view s_address joins from our control tables to the partitioned address table so that we are only seeing one day's address records). This means that the ETL is running against the minimal subset of data required to update dimensions and create facts. It also means that, for example, if there is a problem with the ETL we can suspend running ETL while we fix the problem, and the streaming process will just go on filling partitions until we are ready to re-launch ETL and catch up - one day at a time. We also back up the data mart after each load so that, if we discover an error in ETL logic and need to rebuild, we can restore the datamart to a given day and then reprocess the daily partitions in order very simply.
We have added control fields in those partitioned tables that show which record was inserted/updated/deleted in production, and which was added by the dependency walk so, if necessary, our ETL can determine which data elements were the ones that changed. As we do daily updates to the data mart as our finest grain, this process may update a given record in a given partition multiple times so that the status of the record at the end of the day in that daily partition shows the final version of that record for the day. So, for example, if you add an address record and then update it on the same day, the partition for that day will show the final updated version of the record, and the control field will show this to be a new inserted record for the day.
    This satisfies our business requirements. Yours may be different.
    We have a set of control tables which manage what partition is being loaded from streams, and which have been loaded via ETL to the datamart. The only limitation is that, of course, the ETL load can only go as far as the last partition completely loaded and closed from streams. And we manage the sizing of this staging system by pruning partitions.
    Now, this process IS complex, and requires a fair chunk of storage, but it provides us with the local daily static copy of the OLTP system for running operational reports against without impacting production, and a guaranteed minimal subset of the OLTP system for speedy ETL runs.
As for referencing LCRs themselves, we did not go that route due to the dependency issues (a single LCR will almost never include all of the dependent data from which to update a dimension record or build a fact record, so we would have had to constantly link each one with the full data set to get all of that other info).
    Anyway - just thought our approach might give you some ideas as you work out your own approach.
    Cheers,
    Mike
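
    To make the view layer Mike describes slightly more concrete, the day-filtered view might look something like this (all table, column, and status names here are invented for illustration; Mike did not post his actual DDL):

        -- s_address exposes only the daily partition currently released for ETL
        CREATE OR REPLACE VIEW s_address AS
        SELECT a.*
        FROM   daily_address a
        JOIN   etl_control  c ON a.load_date = c.load_date
        WHERE  c.status = 'READY_FOR_ETL';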

  • Server Hardware for Oracle Database 11g Release 2 with Oracle Spatial

    Hi ,
We're about to set up Oracle Database 11g Release 2 Enterprise Edition with Oracle Spatial.
Can you provide me the possible server hardware CPU/memory specs, number of CPUs, type of OS, and the model, for Spatial where a million users hit the database via a web service?
The vendor suggested SSD instead of HDD; any performance hike from this?
The budget seems to be okay, but I think Exadata will be too dear.
Your insights are much appreciated. Anything relating to setting up a server with Spatial is greatly appreciated.
P/S: I've been a programmer and don't know much about server hardware specs.

    It depends.
    Seriously - before anyone can offer anything but generalities here you need to really define exactly what you expect the database to deliver.
    In general however, I will throw out these questions and ideas...
    For instance - you say a million users "hit" the database (via a web service). Is that WFS, WMS, KML or ???
    Over - how long? A year? A month? A minute? A million hits over a year spread out evenly is only about 2 per minute.
    And what is a hit? A single random record? Or about 10,000 records in the same spatial area... or? And for each record - are you returning simple point data - or hugely complex polygons with thousands of vertexes each?
    How does this software work? Is it custom - or is it well known? Does it really query the database for every map (I assume it is a map service) feature - or does it read an area once and cache it in the middle tier? Or does it do something really smart and use cached tiles for static data - overlayed with vectors for dynamic data?
    And although you say Oracle Spatial - are you really using Spatial - or just Locator functions (less processing in general)? And if spatial - are you doing raster in the db - or 3D analysis - or other special functionality?
    SSD vs. HDD. - If you can buy a server with more RAM than the data set size (pretty easy these days) - and you do mostly reads (almost always the case for a map app) - buy a small cheap array of RAID'd HDD's - internal to the server is fine. Once the data is read into RAM - the HDD's do basically nothing.
    Server CPU and memory. Amount - see above. CPU Speed (use performance benchmarks - not GHz numbers) AND memory speed (often overlooked) - buy as fast as possible. Why? You pay for licenses by the core (2 cores per license for x64). And HW is MUCH MUCH MUCH cheaper than Oracle licenses. Plan to upgrade HW every year if necessary to avoid buying more licenses (sounds crazy - but it is much cheaper).
This may seem like a lot - but these questions are just the tip of the iceberg. I have been in charge of spec'ing, building, and programming spatial systems now for about 20 years, so I have a pretty good idea of how to do enterprise-scale computing on a ramen-noodle budget.
    Smart clean software is your friend - don't ask the HW to do anything unless absolutely necessary (cache results and reuse as much as possible), and you can get crazy performance from minimal hardware.
    Bryan
