Question to WOZI - about connection from Dev.2.1 to PO8

After reading the discussion about all kinds of installations, I was able to install Developer 2.1 (D2K) and Personal Oracle 8.0 into the same ORACLE_HOME.
Other combinations didn't succeed.
Both work independently, but I can't connect them.
I did exactly as you said, but it is not working.
I can't even create the database alias using TCP/IP - only Bequeath (Local Database) works.
Help me, please!
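For reference, a TCP/IP database alias in tnsnames.ora for a Personal Oracle 8 instance typically looks like the sketch below; the host, port, and SID here are placeholders rather than values from your setup, so substitute whatever your listener actually uses:

    PO8_TCP =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = your_host)(PORT = 1521))
        (CONNECT_DATA = (SID = ORCL))
      )

Note that a TCP/IP alias only works if the listener on the Personal Oracle side is configured and started; Bequeath connections bypass the listener entirely, which is why they can succeed while TCP/IP fails.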

This definitely occurs in 1.5.5. In my particular case - and this is really strange - if I use Task Manager to shut down SQL Developer because it is taking forever to produce a Data view (via the + expand sign next to an admittedly big table, then clicking the Data tab), SQL Developer freezes from then on. Even if I start a new instance of SQL Developer and ask for a Data view again, it freezes - I've waited as much as half an hour, whereas in prior days I'd get a response within, say, one minute.
I've even uninstalled and reinstalled. Same deal. This is the strangest part by far: how can SQL Developer remember that a long-running query was once cancelled even after an uninstall/reinstall? I could not find anything remotely related to this in the Registry after the uninstall either.
[By the way, if instead I run SELECT * FROM {table_name}, I get an instant response!]

Similar Messages

  • Questions about connection from JDeveloper to BAM 11g TP4

    1. Is it possible to connect from JDeveloper 10.1.3.x to BAM 11g TP4? If yes, is the domain name obligatory for the connection?
    2. Is it possible to connect from JDeveloper 11g TP4 (on a remote machine) to BAM 11g TP4?
    Thank you for your answers.
    Edited by: Olga on May 8, 2009 10:46 AM

    Try the WebCenter forum:
    WebCenter Portal

  • I have an iPad and an iPhone but no AirPrint printer, only a Wi-Fi printer. My question is: if I connect the Wi-Fi printer to an AirPort Express, can I print from the iPad and iPhone?

    I have an iPad and an iPhone but no AirPrint printer at home, only a Wi-Fi printer. My question is: if I connect the Wi-Fi printer to an AirPort Express, can I print from the iPad and iPhone? Please help me if anyone knows about this.

    Unfortunately, connecting a non-AirPrint-enabled printer to an AirPort base station's USB port will not make it AirPrint-ready.
    You will need to use another solution, like Printopia, to be able to print to this printer from an iOS device.

  • 2 questions about connection to hyper-v in windows 8

    hi,
    I can connect to my virtual machine from Hyper-V Manager, but when I try the other option
    --- "Connect to Hyper-V virtual machine" - the logon wizard tells me that I don't have the right to run this job and that I should contact the authorization policy administrator.
    1 --- My question is: is the "Connect to Hyper-V virtual machine" option specifically meant for connecting to a virtual server, or does it have some other function? I opened the authorization policy on my Windows 8 to see if I could edit some policies, but I didn't see any way to create one; it only asks for an archive.
    2 --- Is connecting to a VM through Remote Desktop only meant for access from outside, or can you use it to connect to the VM internally too? Very short answers will be enough.
    thanks
    johan
    h.david

    thanks
    but what about the "Connect to Hyper-V virtual machine" option in the Start menu (one of the two Hyper-V management programs, the other being Hyper-V Manager)? Why can I not connect through this option?
    Its logon window always gives me an error saying that I don't have the right to run this connect option and that I have to contact my authorization policy administrator.
    Is this (connect) option outside Hyper-V Manager intended for the server edition, or do I have to change some policies on my computer to be able to log on to a virtual machine from outside Hyper-V Manager with this second option?
    johan
    h.david

  • Transport Questions and Questionnaires from Dev to QAS or to PRD

    Question
    How do we transport questions and questionnaires from DEV to QAS or to PRD? Do you type them all in again, or is there a process? What is the best process others use?
    Answer
    As you must be aware, questionnaires are updated from the Recruiter start page in the front end, which cannot be saved in a change request.
    At least I'm not aware of any other way to move the questionnaire details across clients.
    I know that it is quite a bit of a task to repeat the activity of maintaining questions and responses in questionnaires across clients, particularly when you have to maintain big questionnaires (with lots of questions and multiple responses). But as it is, process templates and questionnaires are considered a recruiter's day-to-day administration task in SAP, which has to be done from the front end even in the live environment.
    Does anybody have a solution?
    Thanks.
    Saquib
    http://aspirehr.com/HCM_solutions/hcm_solutions.asp

    Hi Vishal,
    As you must be aware, questionnaires are updated from the Recruiter start page in the front end, which cannot be saved in a change request.
    At least I'm not aware of any other way to move the questionnaire details across clients.
    I know that it is quite a bit of a task to repeat the activity of maintaining questions and responses in questionnaires across clients, particularly when you have to maintain big questionnaires (with lots of questions and multiple responses). But as it is, process templates and questionnaires are considered a recruiter's day-to-day administration task in SAP, which has to be done from the front end even in the live environment.
    So if you are part of the implementation consulting team, it is your responsibility to train the core team to do this activity in the live environment. Hope this information helps.
    Best regards
    G Raj

  • Notification about errors during transport from DEV to QAS server.

    Hi Guys.
    Is there any way of alerting the user about errors that arise when moving a transport from the DEV client to the QAS client?
    For instance, a program may work fine in DEV, but errors can arise during transport if not all the necessary objects are included when moving to the QAS server.
    Some kind of auto-email facility notifying the user...

    Hi,
    there might be some standard functionality in TMS, but if you can't find anything you can still use the BAdI CTS_IMPORT_FEEDBACK. The only method of this BAdI is called right after the import of each transport.
    Cheers

  • Question about recording from VCR to Qosmio

    I want to record my old personal videos from VCR tapes to DVD using my Qosmio F10. I plug the VCR analogue output via a coax cable into the Qosmio TV aerial input and set up the Qosmio to recognize the VCR input as just another TV channel. It records onto the hard drive using MCE, but the quality is poor compared to the original video tape. The colors and sound are OK, but the pictures lack detail and clarity.
    Is there a setting on the Qosmio that I can use to improve the quality, or is it down to the type of input I am using? My old VCR only has coax and SCART outputs, and I can't get the SCART output into the Qosmio.
    Any help would be appreciated.

    > Hi Les,
    >
    > I believe the poor image quality is due to the use of a coax connection from your VCR to your notebook. I use the SCART output on my VCR to connect to a data capture card (USB) on my SA30, but I believe your Qosmio already has Composite inputs available.
    >
    > I use a SCART-to-Composite adapter (available from most PC retailers or electrical shops) and find that this gives me excellent image and sound (albeit not at DVD levels of quality).
    >
    > I have never managed to get an S-Video connection from my VCR to my SA30 to work other than in black & white.
    >
    > HTH
    Hi Nicky,
    Thanks for the response. By Composite, do you mean the RGB output from the VCR?
    The Qosmio has 3 inputs:
    1. conventional TV coax
    2. S-Video
    3. iLink - FireWire
    There is no straight RGB input, though.
    Can I use the iLink with a SCART output from a VCR? The iLink works fine with my camcorder.
    tia
    Les

  • Best Practice for Migrating code from Dev to a fresh Test ODI instance

    Dear All,
    This is Priya.
    We are using ODI 11.1.1.6 version.
    In my ODI project, we have separate installations for Dev, Test and Prod, i.e. the master repositories are not common between the three. Now my code is ready in Dev. The Test environment has just been installed with ODI, and the Master and Work repositories have been created. That's it.
    Now, I need to understand the simplest and best way to export the code from Dev and migrate it to the Test environment. Can someone describe it as a step-by-step procedure in 5-6 lines?
    Some questions on current state.
    1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    2. I usually see in the export file a repository ID of 999 and fail to understand what it is exactly. None of my master or work repositories is named with that ID.
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    Thanks,
    Priya
    Edited by: 948115 on Jul 23, 2012 6:19 AM

    948115 wrote:
    Dear All,
    This is Priya.
    We are using ODI 11.1.1.6 version.
    In my ODI project, we have separate installations for Dev, Test and Prod, i.e. the master repositories are not common between the three. Now my code is ready in Dev. The Test environment has just been installed with ODI, and the Master and Work repositories have been created. That's it.
    Now, I need to understand the simplest and best way to export the code from Dev and migrate it to the Test environment. Can someone describe it as a step-by-step procedure in 5-6 lines?
    If this is the first time you are moving to QA, it is better to export/import the complete work repository. If it is not the first time, then create scenarios of the specific packages and export/import those to QA. With scenarios you need not bother about models/datastores. Keep in mind that the logical schema names in QA should be the same as those used in your DEV.
    Some questions on current state.
    1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    They should be different.
    2. I usually see in the export file a repository ID of 999 and fail to understand what it is exactly. None of my master or work repositories is named with that ID.
    It is required to ensure object uniqueness across several work repositories. For more understanding you can refer to
    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/export_import.htm
    http://odiexperts.com/odi-internal-id/
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    If you are exporting the topology then you will get the logical connection and context details. If you are not exporting the topology then you need to manually create the context and the other physical/logical connections.
    Thanks,
    Priya
    Edited by: 948115 on Jul 23, 2012 6:19 AM

  • Was this database migrated from dev or prod?

    We have several environments (e.g., development, staging, test, training, production). When migrating a production database to a new version of SQL Server, we also migrate the related environments. Sometimes the developers want a refresh from production in development. Sometimes they want the old development database. A question came up about the source of a development database - was it from dev or production? If I'm lucky, the information I want is returned by the following query.
    SELECT rh.[restore_date], rh.[destination_database_name], bs.server_name, bs.database_name
    FROM [msdb].[dbo].[restorehistory] rh
    LEFT JOIN [msdb].[dbo].[backupset] bs ON rh.backup_set_id = bs.backup_set_id
    WHERE bs.server_name <> @@SERVERNAME OR rh.destination_database_name <> bs.database_name
    If I'm not lucky, the history is no longer present. This is often the case because we don't keep a lot of history. Is there any other metadata that will help me identify where a database came from? Should I ensure that the most recent restore record for an existing db is not cleaned out, no matter how old it is? Thanks.
    Randy in Marin

    I was hoping that the source would have been hidden in a system table some place - reliable and no extra work/politics. I think the answer is looking like, "The information is gone - it can't be done." Yes, a new business process would certainly provide the information. However, changing organizational behavior is not for the faint of heart.
    I think it would be easier to update the history cleanup. This would mean that I can't use msdb.dbo.sp_delete_backuphistory as it currently exists. It has very simple logic. I might create a version of my own that keeps the records I want (see the sketch below).
    https://connect.microsoft.com/SQLServer/feedbackdetail/view/967074/sp-delete-backuphistory-option-to-exclude-special-old-records
    Not a bad idea. They could put a "keep" flag on the row and let you set it.
    Or you can copy out the rows you like and maintain your own table with your own logic.
    It would certainly be nice if every backup kept a record of its own source and wrote that to some read-only location in the restored database, but this does not seem to be the case. Maybe I should file a Connect suggestion on that - only my perception and experience with Connect is that it is a waste of time; most suggestions are misunderstood and rejected out of hand.
    But what I was suggesting about the business process is that the content is already there - any dev source database has developer IDs in it that never occur in production, and production databases have only production IDs, which you might see are newer than anything currently in dev. So no change in business process would be required, just some knowledge of what's already there and a couple of queries to count some differentiating factors.
    Josh
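    Along those lines, here is a minimal sketch of the "copy out the rows you like" approach, assuming a local archive table in msdb; the table name dbo.restorehistory_archive and the columns kept are illustrative, not from this thread:
    -- Create the archive table once, preserving restore history plus the source server/database
    -- from backupset (illustrative names; schedule this before the regular history cleanup).
    USE msdb;
    IF OBJECT_ID('dbo.restorehistory_archive') IS NULL
        SELECT TOP (0) rh.*, bs.server_name AS source_server, bs.database_name AS source_database
        INTO dbo.restorehistory_archive
        FROM dbo.restorehistory AS rh
        LEFT JOIN dbo.backupset AS bs ON rh.backup_set_id = bs.backup_set_id;
    -- Copy any restore records not yet archived, so the "where did this database come from"
    -- evidence survives even after sp_delete_backuphistory trims msdb.
    INSERT INTO dbo.restorehistory_archive
    SELECT rh.*, bs.server_name, bs.database_name
    FROM dbo.restorehistory AS rh
    LEFT JOIN dbo.backupset AS bs ON rh.backup_set_id = bs.backup_set_id
    WHERE rh.restore_history_id NOT IN (SELECT restore_history_id FROM dbo.restorehistory_archive);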

  • Trusted RFC: "jumping to PRD system from DEV via SOLMAN"

    Hello!
    I have the following question regarding trusted RFCs in the SOLMAN area generally, and especially within ChaRM.
    We have SOLMAN and 3 satellite systems: DEV, QAS and PRD.
    When we generate trusted RFC connections from SOLMAN to these satellite systems, we are concerned about the following scenario:
    A user logs on to the DEV system and goes via trusted RFC to SOLMAN.
    From SOLMAN he chooses the trusted RFC to the PRD system and gets into the PRD system.
    The question:
    How can this identification gap/problem be solved?
    Thank you very much indeed!
    H. Thomasson

    Hi Thomasson,
    You can restrict access to your SOLMAN system from your DEV system.
    Disable the trusted relationship in the RFC that points to SOLMAN from the DEV system.
    Hope this helps.
    Regards,
    Kiran.

  • Trusted RFC: "jumping to PRD system from DEV"

    Hello!
    I have the following question regarding trusted RFCs in the SOLMAN area generally, and especially within ChaRM.
    We have SOLMAN and 3 satellite systems: DEV, QAS and PRD.
    When we generate trusted RFC connections from SOLMAN to these satellite systems, we are concerned about the following scenario:
    A user logs on to the DEV system and goes via trusted RFC to SOLMAN.
    From SOLMAN he chooses the trusted RFC to the PRD system and gets into the PRD system.
    The question:
    How can this identification gap/problem be solved?
    Thank you very much indeed!
    H. Thomasson

    Holger,
    First of all, the user who tries to use the trusted RFC needs to have an active user in the satellite system he/she tries to reach.
    Secondly, this user needs to have the authorization object S_RFCACL assigned to their user ID.
    You can manually change this trusted RFC to 'Logon Screen' if you like.
    Roel

  • Dropping connection from inbound mail

    About 2 days ago our server stopped accepting messages from one of our client's outbound servers. Sometimes they receive a bounce message immediately, other times after several hours or overnight. No changes were made to our server or mail configuration, and we are successfully receiving messages from numerous other senders to the same address without issue.
    Included below is an excerpt of the mail.log file showing an attempted connection to our server followed by an "immediate" loss of connection. I have tried restarting the mail service and rebooting the server (10.5.4). Does this look like an issue at our end or theirs? If ours, any thoughts on cause and cure? Note also that we're behind a Cisco PIX, but no changes have been made there for a considerable period either, and "no fixup protocol smtp 25" is set. We are not running spam filtering.
    Any help or suggestions would be much appreciated!
    Thanks,
    Brian
    Sep 10 10:50:33 myserver postfix/smtpd[24513]: connect from bean.electric.net[72.35.23.29]
    Sep 10 10:50:33 myserver postfix/smtpd[24513]: lost connection after CONNECT from bean.electric.net[72.35.23.29]
    Sep 10 10:50:33 myserver postfix/smtpd[24513]: disconnect from bean.electric.net[72.35.23.29]
    Sep 10 10:50:42 myserver postfix/smtpd[24515]: connect from bean.electric.net[72.35.23.29]
    Sep 10 10:50:42 myserver postfix/smtpd[24515]: lost connection after CONNECT from bean.electric.net[72.35.23.29]
    Sep 10 10:50:42 myserver postfix/smtpd[24515]: disconnect from bean.electric.net[72.35.23.29]
    Sep 10 10:54:02 myserver postfix/anvil[24466]: statistics: max connection rate 4/60s for (smtp:72.35.23.29) at Sep 10 10:50:42
    Sep 10 10:54:02 myserver postfix/anvil[24466]: statistics: max connection count 2 for (smtp:72.35.23.29) at Sep 10 10:50:32
    Sep 10 10:54:02 myserver postfix/anvil[24466]: statistics: max cache size 1 at Sep 10 10:46:27
    Here's a bounceback message as forwarded by the client to a different account...if this helps.
    Subject: Mail delivery failed: returning message to sender
    This message was created automatically by mail delivery software.
    A message that you sent could not be delivered to one or more of its
    recipients. This is a permanent error. The following address(es) failed:
    [email protected]
    retry timeout exceeded
    ------ This is a copy of the message, including all the headers. ------
    ------ The body of the message is 18682 characters long; only the first
    ------ 16384 or so are included here.
    Return-path: <[email protected]>
    Received: from 1Kd7Hz-0008HS-T4 by worden.electric.net with emc1-ok (Exim 4.69)
    (envelope-from <[email protected]>)
    id 1Kd7Hz-0008J3-Un
    for [email protected]; Tue, 09 Sep 2008 10:46:07 -0700
    Received: by emcmailer; Tue, 09 Sep 2008 10:46:07 -0700
    Received: from [66.38.130.1] (helo=cgaowa2.cga-canada.org)
    by worden.electric.net with esmtps (TLSv1:RC4-MD5:128)
    (Exim 4.69)
    (envelope-from <[email protected]>)
    id 1Kd7Hz-0008HS-T4
    for [email protected]; Tue, 09 Sep 2008 10:46:07 -0700
    Received: from CGAEXCH.cga-canada.net ([10.1.10.151]) by
    cgaowa2.cga-canada.net ([10.1.10.155]) with mapi; Tue, 9 Sep 2008 10:46:06
    -0700
    Content-Type: multipart/mixed;
    boundary="000_035F790236EE4A418923913476257A9801D869F7DBCGAEXCHcgacan"
    From: Cleint <[email protected]>
    To: "[email protected]"
    Date: Tue, 9 Sep 2008 10:46:05 -0700
    Subject: FW: New Notices for You
    Thread-Topic: New Notices for You
    Thread-Index: AckSGnOB27CoC/P/TSOWwemQb6Es9wAiVymA
    Message-ID: <[email protected]>
    Accept-Language: en-US
    Content-Language: en-US
    X-MS-Has-Attach:
    X-MS-TNEF-Correlator: <[email protected]>
    acceptlanguage: en-US
    MIME-Version: 1.0
    X-Outbound-IP: 66.38.130.1
    X-Env-From: [email protected]
    X-Virus-Status: Scanned by VirusSMART (c)
    X-Virus-Status: Scanned by VirusSMART (s)
    --000_035F790236EE4A418923913476257A9801D869F7DBCGAEXCHcgacan
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: quoted-printable
    Message was edited by: Brian Friedrich

    Always fun to answer your own question. It turns out that applying "no fixup protocol smtp 25" to the PIX seems to have resolved the issue. Very odd, I must say, because the "fixup" had been active since setting the unit up years ago... Nonetheless, mail from this client is coming through now (including the backlog... oh joy).

  • New server and/or CA certificate for connection from custom authentication

    We are running Access Manager version 72005Q4 in the Sun ONE Web Server 6.1SP5 B06/23/2005 container with java build 1.5.0_07-b03. I run a custom authentication module which checks sessions against our university single sign on system which is CAS (from Yale/Jasig). The checks are essentially https calls. All this has been working well for us for the last couple of years.
    I would like to migrate the certificate used on the university CAS system from a Verisign certificate to a wildcard certificate issued by the IPS CA in Spain -- these are in most browsers but are not in the standard batch of cacerts CAs -- and are free for .edu domains.
    My other java based authentication plugins (Blackboard, custom apps etc) have worked fine once I import the certificate into the cacerts for the java container, but I'm missing something (obvious probably) about importing this certificate so that my amserver custom authentication module can connect to the CAS server once the CAS server is using the new certificate.
    Could anyone provide guidance on where I need to import this server certificate (or preferably the IPS CA) in order to allow the custom authentication module to work properly? I assume this same problem has been solved by people wishing to connect from the amserver to services with self signed certificates. For some reason I'm finding the debugging unexpectedly difficult, I'll outline some of those details below.
    Relevant things I've tried so far:
    Import both the server cert and the IPS CA into the cacerts of the java container identified in the web server server.xml /usr/jdk/entsys-j2se.
    Import the IPS CA into the web server cert8 style db via the web admin server.
    The debugging has surprised me a bit, as I'm not getting an error that is explicitly SSL-related. It almost seems like the URLConnection object ends up using an HttpURLConnection rather than an HttpsURLConnection and never gives me a cert error, but rather a connection refused, since there is no non-SSL service running on CAS. The same code pointed at the server running the Verisign cert works as expected.
    Part of the stack:
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: java.net.ConnectException: Connection refused
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.socketConnect(Native Method)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:516)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:466)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:287)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:311)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:489)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.writeRequests(HttpURLConnection.java:422)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:937)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.util.SecureURL.retrieve(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.client.ServiceTicketValidator.validate(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.fsu.ucs.authentication.providers.CASAMLoginModule.process(CASAMLoginModule.java:86)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at com.sun.identity.authentication.spi.AMLoginModule.wrapProcess(AMLoginModule.java:729)
    The relevant bit of code from SecureURL.retrieve looks as follows:
    URL u = new URL(url);
    if (!u.getProtocol().equals("https"))
        throw new IOException("only 'https' URLs are valid for this method");
    URLConnection uc = u.openConnection();
    uc.setRequestProperty("Connection", "close");
    r = new BufferedReader(new InputStreamReader(uc.getInputStream()));
    String line;
    StringBuffer buf = new StringBuffer();
    while ((line = r.readLine()) != null)
        buf.append(line + "\n");
    return buf.toString();
    } finally { ...
    The fact that this same code in other authentication modules running outside the amserver (in other web containers as well, tomcat and resin for example) running java 1.5 works fine with the new CA, as well as with self signed certs that I've imported into the appropriate cacerts file leads me to believe that I'm either importing the certificate into the wrong store, or that there is some additional step needed for the amserver in the Sun Web container.
    Thank you very much for any insights and help,
    Ethan

    I thought since this has had a fair number of views I would give an update.
    I have been able to confirm that the custom authentication module is using the cert8 db defined in the AMConfig property com.iplanet.am.admin.cli.certdb.dir as documented. I do seem to have a problem using the certificate to make outgoing connections, even though the certificate verifies correctly for use as a server certificate. This is likely a question for a different forum, but just to show what I'm looking at:
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u V
    certutil: certificate is valid
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    certutil: certificate is invalid: Certificate type not approved for application.
    root@jbc1 providers#/usr/sfw/bin/certutil -M -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -t uP,uP,uP
    root@jbc1 providers#/usr/sfw/bin/certutil -V -l -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    FSU Wildcard Certificate : Certificate type not approved for application.
    So it could be that I don't understand how to use certutil to get the permissions I want, or it could be that using the same certificate for both server and client functions is not supported -- though you can see why this would be a common case with wildcard certificates.
    BTW, for those interested, it did seem to be the case that when the certificate failure occurred, the URLConnection then attempted to connect to port 80 in cleartext even though the URL was clearly https. I'm sure this was just an attempt to help out with a malformed URL, but it seemed that the URLConnection implementation in the amserver would have swapped traffic over to cleartext if that port had been open on the server I was making the https connection to; that seems dangerous to me - I would not have wanted it to quietly work that way, exposing sensitive information to the network.
    This was why I was getting back a connection refused instead of a certificate exception. The URLConnection implementation used by the amserver is defined by the java.protocol.handler.pkgs=com.iplanet.services.comm argument passed to the JVM, and I imagine this is done because the amserver pre-dates the inclusion of the sun.net.www.protocol handlers, but I don't know; there may be reasons why the amserver wants its own handler. I only noticed that this is what was going on when I was casting the HttpsURLConnection objects to other types while trying to diagnose the certificate problem. I would be interested in hearing if anyone knows of a reason not to use sun.net.www.protocol with the amserver.
    After switching to the sun.net.www.protocol handler I was able to get my certificate errors rather than the "Connection Refused", which is what led me to the above questions about certutil.
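    For reference, if the remaining problem is only that the amserver's cert8 database does not trust the new CA, one approach sometimes used is to import the issuing CA certificate into that same prefixed database with CA trust flags; the nickname and file name below are placeholders, so adjust them to your installation:
    # Import the issuing CA (placeholder nickname/file) into the amserver cert8 db with SSL CA trust
    /usr/sfw/bin/certutil -A -n "IPS CA" -t "CT,C,C" -i /tmp/ips-ca.pem -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1-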

  • Sql server Configuration option when moving packages from dev env to production env

    hi folks:
      Our SSIS engine is SQL2008R2 and I am in charge of ssis package development and deployment from dev env to production env. 
      This is what I've done using configurations to move packages from dev to production.
      1. On my local machine, I've created a database called SSIS_Config and a table that stores all configuration settings:
    CREATE TABLE [dbo].[SSIS Configurations]
    (
        ConfigurationFilter NVARCHAR(255) NOT NULL,
        ConfiguredValue NVARCHAR(255) NULL,
        PackagePath NVARCHAR(255) NOT NULL,
        ConfiguredValueType NVARCHAR(20) NOT NULL
    )
    2. On the dev SSIS server, there is the same table [dbo].[SSIS Configurations] in the SSIS_Config db, which stores all configurations.
    Once a package has been deployed to the dev environment and runs successfully, I move it from dev to prod.
    3. On the prod SSIS server, there is the same table [dbo].[SSIS Configurations] in the SSIS_Config db, which stores all configurations.
    Once the SSIS package has been deployed using the deployment manifest, it runs without errors, because all production db connections are updated manually in the table [dbo].[SSIS Configurations] (see the sketch after this post).
    Our production environment is unique in that it's completely locked down and the only way to connect is through a remote session.
    At this moment, I am maintaining three [dbo].[SSIS Configurations] tables: on my local machine, on the SSIS dev server and on the SSIS prod server.
    This works fine so far, as I am the sole developer... Soon, we will have more developers joining to develop SSIS packages.
    I am wondering if there is any way to drop the table maintained on my local machine and only use the ones on the dev server and prod server?
    I've tried using the table on the dev server to distribute connection strings, and it works fine in the dev environment. However, when I deployed to the production server, since there is no SQL connection between dev and prod, the configuration information could not be retrieved and therefore the package failed.
    Any ideas on how to move packages from local to dev to prod environments?
    Thanks
    Hui
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe; SSIS 2008; SSAS 2008; SVN --
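    As a rough illustration of that manual step on prod, the environment-specific values can be repointed with a plain UPDATE against the configuration table; the filter name, package path and connection string below are made-up examples, not values from this thread:
    -- Repoint one configuration entry (example names) to the production data source
    UPDATE [SSIS_Config].[dbo].[SSIS Configurations]
    SET ConfiguredValue = 'Data Source=PRODSQL01;Initial Catalog=SalesDW;Provider=SQLNCLI10.1;Integrated Security=SSPI;'
    WHERE ConfigurationFilter = 'SalesDW_Connection'
      AND PackagePath = '\Package.Connections[SalesDW].Properties[ConnectionString]';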

    Thanks, Nearby BI guy. Am I correct that in your SSIS packages there is only one connection manager whose ConnectionString is populated from the package configuration (the .xmlconfig file), and that you then use expressions to populate the connection strings of the other connection managers via the SSIS configuration tables?
    Also, for the one in the production environment, the contents of the .xmlconfig file have to be changed manually in order to match the production environment.
    Is that correct?
    I am thinking about using environment variables to point to the Config database in each local/dev/prod environment, but some environments may have a strict policy on the usage of environment variables.
     thanks
     Hui
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

  • Remote tuxedo domain rejects connection from client only Tuxedo JCA Adapter

    I am trying to use a client-only configured Oracle Tuxedo JCA Adapter 11.1.1.2.1 to connect to a remote Tuxedo 10.3 domain. The connector is deployed to a JDeveloper 10.1.3.4 embedded OC4J container. The connector fails silently when attempting to establish a connection with the remote domain. Locally, the JCA Adapter ntrace logs the following:
    1/20/11:9:41:49 PM:10:TRACE[DMLocalAccessPoint,DMLocalAccessPoint]> (ypjspNQ5QIPKmOyk1DlAgw==)
    1/20/11:9:41:49 PM:10:DBG[DMLocalAccessPoint,DMLocalAccessPoint]_useSSL = false
    1/20/11:9:41:49 PM:10:TRACE[DMLocalAccessPoint,DMLocalAccessPoint]< return(10)
    1/20/11:9:41:49 PM:10:INFO[TuxedoAdapterSupervisor,createLocalAccessPoint]TJA_0233:Info: Default local access point for factory null created, access point id ypjspNQ5QIPKmOyk1DlAgw==.
    1/20/11:9:41:49 PM:10:DBG[TuxedoAdapterSupervisor,createLocalAccessPoint]features = 159
    1/20/11:9:41:49 PM:10:TRACE[TuxedoAdapterSupervisor,startListeners]> ()
    1/20/11:9:41:49 PM:10:TRACE[TuxedoAdapterSupervisor,startListeners]< (20) return
    1/20/11:9:41:49 PM:10:TRACE[DMSession,DMSession]> (__sess_0_0)
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_lap_name:ypjspNQ5QIPKmOyk1DlAgw==
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_rap_name:e1tst_tdtux02
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_pro_name:__default_session_profile__
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _lap: com.oracle.tuxedo.adapter.config.DMLocalAccessPoint@1f6bc1a
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _rap: com.oracle.tuxedo.adapter.config.DMRemoteAccessPoint@1b75e54
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _pro: com.oracle.tuxedo.adapter.config.DMSessionProfile@191f64b
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]sec = NONE
    1/20/11:9:41:49 PM:10:TRACE[DMSession,DMSession]< return(60)
    1/20/11:9:41:49 PM:10:INFO[TuxedoAdapterSupervisor,createDefaultSession]TJA_0193:INFO: Default session created between LocalAccessPoint ypjspNQ5QIPKmOyk1DlAgw== and RemoteAccessPoint e1tst_tdtux02.
    1/20/11:9:41:49 PM:10:TRACE[DMSession,DMSession]> (__sess_0_1)
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_lap_name:ypjspNQ5QIPKmOyk1DlAgw==
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_rap_name:e1tst_tdtux01
    1/20/11:9:41:49 PM:10:DBG[DMSession,myInit]_pro_name:__default_session_profile__
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _lap: com.oracle.tuxedo.adapter.config.DMLocalAccessPoint@1f6bc1a
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _rap: com.oracle.tuxedo.adapter.config.DMRemoteAccessPoint@1c0f654
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]got _pro: com.oracle.tuxedo.adapter.config.DMSessionProfile@191f64b
    1/20/11:9:41:49 PM:10:DBG[DMSession,DMSession]sec = NONE
    1/20/11:9:41:49 PM:10:TRACE[DMSession,DMSession]< return(60)
    1/20/11:9:41:49 PM:10:INFO[TuxedoAdapterSupervisor,createDefaultSession]TJA_0193:INFO: Default session created between LocalAccessPoint ypjspNQ5QIPKmOyk1DlAgw== and RemoteAccessPoint e1tst_tdtux01.
    1/20/11:9:41:49 PM:10:TRACE[TuxedoAdapterSupervisor,registerClientSideResourceAdapter]create default import
    1/20/11:9:41:49 PM:10:TRACE[ServiceManager,registerImportedService]> (*)
    1/20/11:9:41:49 PM:10:INFO[,]factory = null
    1/20/11:9:41:49 PM:10:INFO[,]name = *
    1/20/11:9:41:49 PM:10:INFO[,]iname = *
    1/20/11:9:41:49 PM:10:TRACE[ServiceManager,registerImportedService]register Default Import
    1/20/11:9:41:49 PM:10:TRACE[Route,Route]> (*)
    I can't determine if there are any problems from these log entries, but the remote tuxedo domain logs the following in the ULOG:
    155138.tdtux01!GWTDOMAIN.3495.4.0: LIBGWT_CAT:1073: ERROR: Unable to obtain remote domain id (ypjspNQ5QIPKmOyk1DlAgw==) information from shared memory
    155138.tdtux01!GWTDOMAIN.3495.4.0: LIBGWT_CAT:1509: ERROR: Error occurred during security negotiation - closing connection
    My understanding is that the client-only configuration should connect to a remote Tuxedo domain as an anonymous client instead of as a peer Tuxedo domain, but the remote Tuxedo gateway domain listener is acting as if the client has to be configured in its dmconfig file before it will allow the connection request. Is there a different kind of listener the client-only configuration should connect to instead of the Tuxedo gateway domain listener? How can a remote Tuxedo domain accept a connection from an anonymous client if the client must first be specified in the remote domain's dmconfig file? Is this a Tuxedo 11g-only feature? I'm trying to connect to a Tuxedo 10.3 server.
    The local ra.xml is reproduced here:
    <?xml version="1.0" encoding="UTF-8"?>
    <connector xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/connector_1_5.xsd"
    version="1.5">
    <display-name>Tuxedo JCA Adapter</display-name>
    <vendor-name>Oracle</vendor-name>
    <eis-type>Tuxedo</eis-type>
    <resourceadapter-version>11gR1(11.1.1.2.1)</resourceadapter-version>
    <license>
    <description>Tuxedo SALT license</description>
    <license-required>false</license-required>
    </license>
    <resourceadapter>
    <resourceadapter-class>com.oracle.tuxedo.adapter.TuxedoClientSideResourceAdapter</resourceadapter-class>
    <config-property>
    <config-property-name>debugConfig</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <config-property-name>traceLevel</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>100000</config-property-value>
    </config-property>
    <config-property>
    <config-property-name>xaAffinity</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <config-property-name>remoteAccessPointSpec</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>//tdtux01:9601/domainId=e1tst_tdtux01,//tdtux02:9601/domainId=e1tst_tdtux02</config-property-value>
    </config-property>
    <outbound-resourceadapter>
    <connection-definition>
    <managedconnectionfactory-class>com.oracle.tuxedo.adapter.spi.TuxedoManagedConnectionFactory</managedconnectionfactory-class>
    <connectionfactory-interface>javax.resource.cci.ConnectionFactory</connectionfactory-interface>
    <connectionfactory-impl-class>com.oracle.tuxedo.adapter.cci.TuxedoConnectionFactory</connectionfactory-impl-class>
    <connection-interface>javax.resource.cci.Connection</connection-interface>
    <connection-impl-class>com.oracle.tuxedo.adapter.cci.TuxedoJCAConnection</connection-impl-class>
    </connection-definition>
    <transaction-support>NoTransaction</transaction-support>
    <authentication-mechanism>
    <authentication-mechanism-type>BasicPassword</authentication-mechanism-type>
    <credential-interface>javax.resource.spi.security.PasswordCredential</credential-interface>
    </authentication-mechanism>
    <reauthentication-support>false</reauthentication-support>
    </outbound-resourceadapter>
    </resourceadapter>
    </connector>
    Thanks for any help.
    Steve

    Looks like this is an RTFM question. From:
    [http://download.oracle.com/docs/cd/E18050_01/jca/docs11gr1/users/jca_usersguide.html]
    Is the following:
    Dynamic RemoteAccessPoint (RAP) Insertion
    In order for the default LocalAccessPoint to work, the Oracle Tuxedo GWTDOMAIN gateway must be configured for this simplified /Domain configuration.
    The GWTDOMAIN gateway must be modified to allow Dynamic RemoteAccessPoint (RAP) registration. If DYNAMIC_RAP is set to YES, it will also update the in-memory database with the status of the connection from those dynamically registered RAPs. If the connection from a dynamically registered RAP is lost, then the information about that RAP will be removed from the SHM database.
    GWADM must be modified to process the DM MIB correctly to reflect the connection status of those dynamically registered RAPs. When the connection from a dynamically registered RAP is lost, its entries in the SHM database will also be removed so that the DM MIB query can return the connection status correctly.
    The dynamically registered RAP will be added to the /DOMAIN configuration permanently. Their existence will only be known when the session is established. Their existence will be lost when the connection is lost.
    The DM_CONNECTION Oracle Tuxedo /Domain DMIB call returns all the connected dynamically registered RemoteAccessPoints. All other dynamically registered RemoteAccessPoints that are not connected will not be shown.
    The OPENCONNECTION DMIB request will not be supported to connect to those dynamically registered RAPs.
    The CLOSECONNECTION Oracle Tuxedo /DMIB request closes the connection, removes the session from those dynamically registered RemoteAccessPoints, and returns their connection status as 'UNKNOWN'.
    The PERSISTENT_DISCONNECT type of CONNECTION_POLICY will be honored, which means that when PERSISTENT_DISCONNECT is in effect, all connection requests from any RAP, whether dynamically or non-dynamically registered, will be rejected.
    I must have overlooked this section when reading it. Looks like I've got more configuration to do.
    Thanks,
    Steve
