Assistance with a CONNECT BY query required
Hi all
I'm using Oracle 9.2.0.1, and I'd like to run a CONNECT BY query in the following situation:
A transaction happens on a card, in a store
That store has a parent store
The parent store has a parent store too
... the hierarchy is 4 levels high.
So we have:
GrandgrandParent store
Grandparent store
Parent store
Transaction in store
I want to get a result set that gives:
The name of the grandgrandparent store, the name of the transaction store, (some other details of the transaction)
The tables are arranged:
trans
store_number, card_number, amount
stores
store_number, parent_store_number
The following SQL will give me a stores hierarchy:
SELECT
FROM
stores
WHERE
store_is_active = 'YES' and
LEVEL <= 4
START WITH
store_number = 'ABCD'
CONNECT BY
PRIOR parent_store_number = store_number

But I cannot work out how to make Oracle take the 10 transactions occurring on card X, get the store ID from each of them, and then chase up the tree to the top store.
Can someone help me out? All the SQLs I've tried, such as:
SELECT
FROM
stores, trans
WHERE
stores.store_is_active = 'YES' and
LEVEL <= 4 and
trans.card_number = 'X'
START WITH
stores.store_number = trans.store_number
CONNECT BY
PRIOR parent_store_number = store_number

which logically make sense to me, take forever to run: Oracle launches into performing a full tree build for every store, then a full scan of millions of transactions. On its own, SELECT * FROM trans WHERE card_number = 'X' uses an index and completes in a fraction of a second. Similarly, if I pick any one of the store numbers I get from looking at trans, and use it as a hardcoded value in the START WITH, the hierarchy is built in a fraction of a second.
I have this query written another way, which literally joins the stores table in 4 times (once for each level), and that completes quite quickly. We have other connect by queries that perform well, though these start with a common grandgrandparent store and work down, then go into the transactions table looking for all transactions happening within that retail chain. This report has to work the other way, to see which cards are being used within which chains.
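One direction worth exploring (a sketch only, untested on 9.2, using the table and column names described above): resolve the card's store numbers in a subquery inside START WITH, so Oracle can use the index on trans first and then build only the small trees rooted at those stores, instead of mixing the join into the CONNECT BY:

```sql
-- Sketch: find card X's stores first, then walk up from each one.
-- Table and column names are taken from the description above; untested.
SELECT store_number,
       parent_store_number,
       LEVEL
FROM   stores
WHERE  store_is_active = 'YES'
AND    LEVEL <= 4
START WITH store_number IN (SELECT store_number
                            FROM   trans
                            WHERE  card_number = 'X')
CONNECT BY PRIOR parent_store_number = store_number;
```

The transaction details could then be joined back to the LEVEL = 1 rows (or the whole query wrapped in an in-line view), which keeps the join out of the CONNECT BY itself.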
Thanks in advance for any assistance!
cj
An example of the problem if you leave out the joining clause from your WHERE condition...
SQL> ed
Wrote file afiedt.buf
1 select rpad(' ',(level-1)*2,' ')||ename||' ('||dname||')'
2 from emp, dept
3 where emp.deptno = dept.deptno
4 start with mgr is null
5* connect by mgr = prior empno
SQL> /
RPAD('',(LEVEL-1)*2,'')||ENAME||'('||DNAME||')'
KING (ACCOUNTING)
CLARK (ACCOUNTING)
MILLER (ACCOUNTING)
JONES (RESEARCH)
SCOTT (RESEARCH)
ADAMS (RESEARCH)
FORD (RESEARCH)
SMITH (RESEARCH)
BLAKE (SALES)
ALLEN (SALES)
WARD (SALES)
MARTIN (SALES)
TURNER (SALES)
JAMES (SALES)
14 rows selected.
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 3647870716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 14 | 420 | 4 (0)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING | | | | | |
|* 2 | FILTER | | | | | |
| 3 | COUNT | | | | | |
| 4 | NESTED LOOPS | | 14 | 420 | 4 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | DEPT | 4 | 52 | 3 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| EMP | 4 | 68 | 1 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | DEPT_IDX | 5 | | 0 (0)| 00:00:01 |
|* 8 | HASH JOIN | | | | | |
| 9 | CONNECT BY PUMP | | | | | |
| 10 | COUNT | | | | | |
| 11 | NESTED LOOPS | | 14 | 420 | 4 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | DEPT | 4 | 52 | 3 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID| EMP | 4 | 68 | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DEPT_IDX | 5 | | 0 (0)| 00:00:01 |
| 15 | COUNT | | | | | |
| 16 | NESTED LOOPS | | 14 | 420 | 4 (0)| 00:00:01 |
| 17 | TABLE ACCESS FULL | DEPT | 4 | 52 | 3 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | EMP | 4 | 68 | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | DEPT_IDX | 5 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("MGR" IS NULL)
2 - filter("MGR" IS NULL)
7 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
8 - access("MGR"=NULL)
14 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
19 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
Statistics
1 recursive calls
0 db block gets
37 consistent gets
0 physical reads
0 redo size
819 bytes sent via SQL*Net to client
396 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
14 rows processed
SQL> ed
Wrote file afiedt.buf
1 select rpad(' ',(level-1)*2,' ')||ename||' ('||dname||')'
2 from emp, dept
3 --where emp.deptno = dept.deptno
4 start with mgr is null
5* connect by mgr = prior empno
SQL> /
RPAD('',(LEVEL-1)*2,'')||ENAME||'('||DNAME||')'
KING (ACCOUNTING)
JONES (ACCOUNTING)
SCOTT (ACCOUNTING)
ADAMS (ACCOUNTING)
ADAMS (RESEARCH)
ADAMS (SALES)
ADAMS (OPERATIONS)
FORD (ACCOUNTING)
SMITH (ACCOUNTING)
<cut>
MILLER (RESEARCH)
MILLER (SALES)
MILLER (OPERATIONS)
1076 rows selected.
Elapsed: 00:00:01.54
Execution Plan
Plan hash value: 3369745104
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 56 | 1344 | 11 (0)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING| | | | | |
|* 2 | FILTER | | | | | |
| 3 | COUNT | | | | | |
| 4 | MERGE JOIN CARTESIAN | | 56 | 1344 | 11 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 6 | BUFFER SORT | | 14 | 196 | 8 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL | EMP | 14 | 196 | 2 (0)| 00:00:01 |
|* 8 | HASH JOIN | | | | | |
| 9 | CONNECT BY PUMP | | | | | |
| 10 | COUNT | | | | | |
| 11 | MERGE JOIN CARTESIAN | | 56 | 1344 | 11 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 13 | BUFFER SORT | | 14 | 196 | 8 (0)| 00:00:01 |
| 14 | TABLE ACCESS FULL | EMP | 14 | 196 | 2 (0)| 00:00:01 |
| 15 | COUNT | | | | | |
| 16 | MERGE JOIN CARTESIAN | | 56 | 1344 | 11 (0)| 00:00:01 |
| 17 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 18 | BUFFER SORT | | 14 | 196 | 8 (0)| 00:00:01 |
| 19 | TABLE ACCESS FULL | EMP | 14 | 196 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("MGR" IS NULL)
2 - filter("MGR" IS NULL)
8 - access("MGR"=NULL)
Statistics
1 recursive calls
0 db block gets
50 consistent gets
0 physical reads
0 redo size
34680 bytes sent via SQL*Net to client
1177 bytes received via SQL*Net from client
73 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
1076 rows processed
SQL>
Similar Messages
-
[8i] Need help with hierarchical (connect by) query
First, I'm working in 8i.
My problem is, I keep getting the error ORA-01437: cannot have join with CONNECT BY.
And, the reason I get that error is because one of the criteria I need to use to prune some branches with is in another table... Is there any way to work around this? I tried an in-line view (but got the same error). I thought about using the connect by query as an in-line view and filtering off what I don't want that way, but I'm not sure how to filter out an entire branch...
Here is some simplified sample data:
CREATE TABLE bom_test
( parent CHAR(25)
, component CHAR(25)
, qty_per NUMBER(9,5)
);
INSERT INTO bom_test
VALUES ('ABC-1','101-34',10);
INSERT INTO bom_test
VALUES ('ABC-1','A-109-347',2);
INSERT INTO bom_test
VALUES ('ABC-1','ABC-100G',1);
INSERT INTO bom_test
VALUES ('ABC-1','1A247G01',2);
INSERT INTO bom_test
VALUES ('ABC-100G','70052',18);
INSERT INTO bom_test
VALUES ('ABC-100G','M9532-278',5);
INSERT INTO bom_test
VALUES ('1A247G01','X525-101',2);
INSERT INTO bom_test
VALUES ('1A247G01','1062-324',2);
INSERT INTO bom_test
VALUES ('X525-101','R245-9010',2);
CREATE TABLE part_test
( part_nbr CHAR(25)
, part_type CHAR(1)
);
INSERT INTO part_test
VALUES ('ABC-1','M');
INSERT INTO part_test
VALUES ('101-34','P');
INSERT INTO part_test
VALUES ('A-109-347','P');
INSERT INTO part_test
VALUES ('ABC-100G','M');
INSERT INTO part_test
VALUES ('1A247G01','P');
INSERT INTO part_test
VALUES ('70052','P');
INSERT INTO part_test
VALUES ('M9532-278','P');
INSERT INTO part_test
VALUES ('X525-101','M');
INSERT INTO part_test
VALUES ('1062-324','P');
INSERT INTO part_test
VALUES ('R245-9010','P');

This is the basic query (with no pruning of branches):
SELECT LEVEL
, b.component
, b.parent
, b.qty_per
FROM bom_test b
START WITH b.parent = 'ABC-1'
CONNECT BY PRIOR b.component = b.parent

The query above gives the results:
LEVEL COMPONENT PARENT QTY_PER
1.000 101-34 ABC-1 10.000
1.000 A-109-347 ABC-1 2.000
1.000 ABC-100G ABC-1 1.000
2.000 70052 ABC-100G 18.000
2.000 M9532-278 ABC-100G 5.000
1.000 1A247G01 ABC-1 2.000
2.000 X525-101 1A247G01 2.000
3.000 R245-9010 X525-101 2.000
2.000 1062-324 1A247G01 2.000
9 rows selected.

...but I only want the branches (children, grandchildren, etc.) of part type 'M'.
e.g.:
LEVEL COMPONENT PARENT QTY_PER
1.000 101-34 ABC-1 10.000
1.000 A-109-347 ABC-1 2.000
1.000 ABC-100G ABC-1 1.000
2.000 70052 ABC-100G 18.000
2.000 M9532-278 ABC-100G 5.000
1.000 1A247G01 ABC-1 2.000

Any suggestions?

Hi,
user11033437 wrote:
First, I'm working in 8i.
My problem is, I keep getting the error ORA-01437: cannot have join with CONNECT BY.
And, the reason I get that error is because one of the criteria I need to use to prune some branches with is in another table... Is there any way to work around this? I tried an in-line view (but got the same error).

Post your query. It's very hard to tell what you're doing wrong if we don't know what you're doing.
...but I only want the branches (children, grandchildren, etc.) of part type 'M'.
e.g.:
LEVEL COMPONENT PARENT QTY_PER
1.000 101-34 ABC-1 10.000
1.000 A-109-347 ABC-1 2.000
1.000 ABC-100G ABC-1 1.000
2.000 70052 ABC-100G 18.000
2.000 M9532-278 ABC-100G 5.000
1.000 1A247G01 ABC-1 2.000
You mean you don't want the descendants (children, grandchildren, etc.) of any component whose part_type is not 'M'.
The part_type of the component itself doesn't matter: component '101-34' is included, even though its part_type is 'P', and component 'X525-101' is excluded, even though its part_type is 'M'.
>
Any suggestions?

Sorry, I don't have an Oracle 8.1 database at hand right now. All three of the queries below get the correct results in Oracle 10.2, and I don't believe they do anything that isn't allowed in 8.1.
You can't do a join and CONNECT BY in the same query on Oracle 8.1.
I believe you can do one first, then the other, using in-line views. The first two queries do the join first.
-- Query 1: Join First
SELECT LEVEL
, component
, parent
, qty_per
FROM ( -- Begin in-line view to join bom_test and part_test
SELECT b.component
, b.parent
, b.qty_per
, p.part_type AS parent_type
FROM bom_test b
, part_test p
WHERE p.part_nbr = b.parent
) -- End in-line view to join bom_test and part_test
START WITH parent = 'ABC-1'
CONNECT BY parent = PRIOR component
AND parent_type = 'M'
;

Query 2 is very much like Query 1, but it does more filtering in the sub-query, returning only rows whose part_type or whose parent's part_type is 'M'. Your desired result set will be a tree taken entirely from this set. Query 2 may be faster, because the sub-query is more selective, but then again, it may be slower because it has to do an extra join.
-- Query 2: Join first, prune in sub-query
SELECT LEVEL
, component
, parent
, qty_per
FROM ( -- Begin in-line view to join bom_test and part_test
SELECT b.component
, b.parent
, b.qty_per
, p.part_type AS parent_type
FROM bom_test b
, part_test p
, part_test c
WHERE p.part_nbr = b.parent
AND c.part_nbr = b.component
AND 'M' IN (c.part_type, p.part_type)
) -- End in-line view to join bom_test and part_test
START WITH parent = 'ABC-1'
CONNECT BY parent = PRIOR component
AND parent_type = 'M'
Query 3, below, takes a completely different approach. It does the CONNECT BY query first, then does a join to see what the parent's part_type is. We can easily cut out all the nodes whose parent's part_type is not 'M', but that will leave components like 'R245-9010' whose parent has part_type 'M', but which should be excluded because its parent is excluded. To get the correct results, we can do another CONNECT BY query, using the same START WITH and CONNECT BY conditions, but this time only looking at the pruned results of the first CONNECT BY query.
-- Query 3: CONNECT BY, Prune, CONNECT BY again
SELECT LEVEL
, component
, parent
, qty_per
FROM ( -- Begin in-line view of 'M' parts in hierarchy
SELECT h.component
, h.parent
, h.qty_per
FROM ( -- Begin in-line view h, hierarchy from bom_test
SELECT component
, parent
, qty_per
FROM bom_test
START WITH parent = 'ABC-1'
CONNECT BY parent = PRIOR component
) h -- End in-line view h, hierarchy from bom_test
, part_test p
WHERE p.part_nbr = h.parent
AND p.part_type = 'M'
) -- End in-line view of 'M' parts in hierarchy
START WITH parent = 'ABC-1'
CONNECT BY parent = PRIOR component
I suspect that Query 3 will be slower than the others, but if the CONNECT BY query is extremely selective, it may be better.
It would be interesting to see your findings using the full tables. Please post your observations and the explain plan output.
As usual, your message is a model of completeness and clarity:
<ul>
<li>good sample data,
<li> posted in a way people can use it,
<li>clear results,
<li> good explanation
<li> nicely formatted code
</ul>
Keep up the good work! -
Assistance with a query of the HRMS tables
I need some assistance with a query I am trying to run. Here are the two tables I am trying to join:
PER_ALL_POSITIONS
PER_ALL_PEOPLE_F
What I am trying to accomplish is to obtain the First_Name, Last Name from PERALL_PEOPLE_F table and then join that to the PER_ALL_POSITIONS table to obtain a unique listing of positions. However what I need assistance with is identifying how to join the two tables. I know the primary key on PER_ALL_PEOPLE_F is Person_ID but this value does not appear in PER_ALL_POSITIONS table. Any advice someone could give me would be greatly appreciated. :)you need to go from per_all_people_f to per_all_assignments_f, then to per_all_positions.
-
Adhoc Query Requirement with Multiple Data Source
Hi All,
I have a Adhoc Query Requirement with Multiple Data Source. Is there any way to achive it. Other than Resultant set and bring into Model.
Thanks
SSYou can compare stuff in the EL, but I don't think this is what you need.
You can just use Java code in the backing bean class for all the business logic. You can use DAO classes for database access logic. Finally for displaying you can use the JSF tags such as h:outputText. -
Greetings,
I am hoping to get some assistance with what I am thinking should be a simple content export. I am running UCM 11g with Folders_g. I have some root folder named rootfolder which has 30 or so content items directly in it and then 100 or so sub-folders. I have already migrated all the subfolders with their content successfully. Now I only need to export the remaining 30 or so items directly in the rootfolder. This sounds simple, but when I select that root folder using Folder Structure Archive, even though I do not place a check mark in any of the subfolders, the resultant export took 4 hours had over 160,000 items. I am only expecting these 30 or so items. How can I form a query to get only these? Idea i am thinking of is maybe to query for content which has parent folder id of rootfolder?
Thank you in advance for any help,
-KayceeThank you Jonathan - I wasn't receiving email notifications for this thread, so i wasn't aware that you had replied.
After "playing" around, I did end up with a successful export of the source folder's 30 items by using this query: xCollectionID = valueforfolder and dReleaseState = 'Y'
This did work, but there still remains for me some confusion around the use of Folder Structure Archive. I guess I was expecting that when using Folder Archive Configuration only, by checking the box of my intended parent folder and no other sub-folders under that box, that it should have exported it's 30 content items. And not the content items of the +/-100 sub-folders. I did not expect to have to add a content export query to refine/achieve the result. Simply, when using folder structure archive, shouldn't the export of content only pertain to the folder's checkbox you have marked?
Thanks again,
Kaycee -
I create the WiFi network using the internet sharing option in my Macbook Pro from a local ethernet connection..and set the HTTP proxy settings in my new iPad.Siri runs like a charm on an other WiFi connection which doesnt require proxy but on my University connection it says that it cant handle any requests right now..
We were having the same problem here at the School I work at. By looking at some traffic logs and doing some internal testing, it appears Siri attempts to make a direct connection to the outside network using HTTPS (port 443), without using any of the proxy settings you may have configured on the Wifi network.
We've reported it as a bug to Apple but haven't heard anything back yet.
To get around it in the meantime you'll have to punch a hole in your firewall to allow Siri traffic through.
Currently Siri appears to contact IP address 17.174.4.14 over port 443. The IP address may change in the future, but that will at least get you up and going for now. We went ahead and opened the entire 17.174.4.0/24 network, as the entire block of addressess is owned by Apple.
Again, there is no gaurentee that this will not change in the future and break again.
Good luck! -
I need assistance with uploading pictures from mobile connection to computer
I need assistance with uploading pictures from my mobile to my computer
I need assistance with uploading pictures from my mobile to my computer
-
Issues with JDBC Connection Pooling
Hi all,
I'm experiencing some unexpected behaviour when trying to use JDBC Connection Pooling with my BC4J applications.
The configuraiton is -
Web Application using BC4J in local mode
Using Default Connection Stagegy
Stateless Release Mode
Retrieving Application Modules using Configuration.createRootApplicationModule( am , cf );
Returning Application Modules using Configuration.releaseRootApplicationModule( am, false );
Three application modules
AppModuleA - connects to DatabaseConnection1
AppModuleB - connects to DatabaseConnection2
AppModuleC - connects to DatabaseConnection2
My requirement is to -
Use App Module Pooling and have individual pool for each Application Module
Use JDBC Pooling and have individual pool for each Database connection
Note: All configuration was achieved in design mode (i.e. right clicking AppModule->Configurations...)
1. Initial approach -
In the configuration for each Application Module I specified the connection type as 'JDBC Datasource' and specified to approriate datasource.
Tried setting doConnecitonPooling to 'true' as well as 'false'
In the data-sources.xml I specified all the appropriate info including min-connections and max-connections.
I would expect, with the above config that BC4J would use OC4J's built in JDBC connection pooling.
2. Second approach -
In the configuration for each Application Module I specified the connection type as JDBC URL.
In the configuration I specified doConnectionPooling = 'true' as well as the max connection, max available and min available
What I experienced in both cases was that the max connections seem to be ignored as the number of connection as reported by the database (v$session) was exceeded by more than 10.
In addition to this once the load was removed the number of JDBC connecitons did not drop (I would have expected it to drop to max available connections)
My questions are -
1. When specifying to use a 'JDBC Datasource' style of connection, is it in fact OC4J that is then responsible for pooling JDBC connections? And in this case should BC4J's doConnectionPooling parameter be set to true or false?
2. Are there any known issues with the use of the JDBC Conneciton Pool as stated by the above to approaches?Thanks for the additional info. Please see my comments. below.
Sorry should have been more specififc -
1. Is each application pool using a different JDBC user? You mentioned DatabaseConnection1 and DatabaseConnection2
above; are these connections to different schemas / users? If so, BC4J will create a separate connection pool for each
JDBC user. Each connection pool will have its own maximum pool size.
Each 'DatabaseConnection' refers to a different database, actually hosted on a seperate physical server, different
schema and different user.BC4J will maintain a separate connection pool for each permutation of JDBC URL / schema. If each user is connecting
to a different DB instance then I would expect no greater than 10 DB sessions. However, if a DB instance is hosting
more than user then I would expect greater than 10 DB sessions (though still no more than 10 DB sessions per user).
2. Are all the v$session sessions related to the JDBC clients? There should be at least one additional database
session which will be related to the session that is querying v$session.
When querying the v$session table I specifically look for connections from the user in quesiton and from the machine
name in question and in doing so eliminate the database system's connections, as well as the query tools'
connection. One area I'm not sure about is the connection BC4J uses to write to its temporary tables. I am using
Stateless release mode and have not explicetly stated to save to the database but I'm wondering if it still does if so
and how does it come into the equation with max connections?BC4J's internal connections are also pooled and the limits apply as mentioned above. So, if you have specified
internal connection info for a schema which is different than the users above I would expect the additional conns.
One helpful diagnostic tool, albeit programmatic, might be to print the information about the connection pools after
your test client(s) have finished. This may be accomplished as follows:
// get a reference to the BC4J connection pool manager
import oracle.jbo.server.ConnectionPoolManagerFactory;
import oracle.jbo.server.ConnectionPoolManagerImpl;
import oracle.jbo.pool.ResourcePool;
import java.io.PrintWriter;
import java.util.Enumeration;
// get the ConnectionPoolManager. assume that it is an instance of the supplied manager
ConnectionPoolManagerImpl mgr = (ConnectionPoolManagerImpl)ConnectionPoolManagerFactory.getConnectionPoolManager();
Enumeration keys = mgr.getResourcePoolKeys();
PrintWriter pw = new PrintWriter(System.out, true);
while (keys.hasMoreElements())
Object key = keys.nextElement();
ResourcePool pool = (ResourcePool)mgr.getResourcePool(key);
System.out.println("Dumping pool statistics for pool: " + key);
pool.dumpPoolStatistics(pw);
} -
Error when refreshing WEBI report with Universe Connection Type "SSO"
Hi Experts:
We are trying to refresh the Webi report in Infoview with Universe Connection set as "Use Single Sign On when refreshing the report at view time", so that we can leverage SAP OLAP authorization variable from Bex Query which the Universe is built on.
However got the error of "incomplete logon data" after all the configurations done following below blogs:
SNC Part 1
/people/ingo.hilgefort/blog/2009/07/03/businessobjects-enterprise-and-client-side-snc-part-1-of-2
SNC Part 2
/people/ingo.hilgefort/blog/2009/07/03/businessobjects-enterprise-and-client-side-snc-part-2-of-2
We already have Win AD SSO to SAP setup, and in BO CMC, Win AD user is mapped to SAP user ID.
The SNC settings are:
- AD Account: service.test.bobj (all lower-letters)
- 32-bit gsslib on the BO server, and 64 bit on the BW server side.
- SNC0: p:service.test.bobj at DOMAIN
- SU01 --> BO_Service ; SNC: p:service.test.bobj at DOMAIN
- Entitlement system tab --> username: BO_Service
SNC Name: p:service.test.bobj at DOMAIN
- SNC settings tab:
SNC Lib: c:\winnt\gsskrb5.dll
Mutual Authentication settings: p:SAPServiceBP0 at DOMAIN
In CMC, the role can be imported if "RFC activated" option unchecked in SNC0.
I found a few threads on the same topic, but they are all not answered:
SNC Client side configuration error
SNC Configuration Error: Incomplete logon Data
Can you please provide details of the solution if you have impleted a same scenario successsfully, or any thoughts to help the investigation?
Thanks in advance!
Regards,
JonathanHi Ingo,
Sorry for taking so long to reply, we are trying to set up server side trust and enable SSO; but we still couldn't success.
What we did is:
1. We followed installation guide chapter 6, generate certificate and PSE, etc. All looks good.
2. Then we still have the "incomplete logon data" error when refreshing webi report after logon using Windows AD user ID.
3. Then we trace the PFC connection, the log is as below. We checked several BO notes, e.g. 1500150, 1461247.. The part bothers us is that we even don't have URI displayed in the log when system trying to use SNC, and we couldn't get more info on this which make us very difficult to diagnosis.
Can you please help? Thanks a lot!
Thu Mar 31 10:54:46.857 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Authentication model for SAP connectivity is SSO
Thu Mar 31 10:54:46.857 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Determining if we can connect using SNC. Calling CanAuthenticate...
Thu Mar 31 10:54:46.919 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Unable to authenticate using SNC because the URI does not meet the minimum connection requirements.
Thu Mar 31 10:54:46.919 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Determining if we can connect using SSO. Calling CanAuthenticate...
Thu Mar 31 10:54:46.919 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Authentication model for SAP connectivity is SSO
Thu Mar 31 10:54:47.013 ThreadID<1980> SAPMODULE : SAPAuthenticationService: The SAP SSO authentication process will fail because the SAP secondary credential are not properly updated and the password is blank.
Thu Mar 31 10:54:47.013 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Trying to connect to SAP using this URI : occa:sap://;PROVIDER=sapbw_bapi,R3NAME=PB0,GROUP=BI_Group1,MSHOST=sapaupdb04,LANG=en,CLIENT=100,CATALOG="ZSPUM602",CUBE="ZSPUM602/ZSPUM602_Q50"
Thu Mar 31 10:54:47.013 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Calling m_pRfcWrapper->RfcOpenEx() ...
Thu Mar 31 10:54:47.154 ThreadID<1980> SAPMODULE : SAPAuthenticationService: RfcOpenEx(...) returned 0
Thu Mar 31 10:54:47.154 ThreadID<1980> SAPMODULE : SAPAuthenticationService: Call to m_pRfcWrapper->RfcOpenEx() took 0.141 seconds
Thu Mar 31 10:54:47.154 ThreadID<1980> SAPMODULE : SAPAuthenticationService: SAPAuthenticationService::~SAPAuthenticationService -
ERROR: Cannot create authcontext with null org-Naming query failed code:21
I use OpenSSO Enterprise 8.0 Update 1 Patch1 Build 6.1(2009-June-9 12:56)
I try to evaluate the Apache 2.2 web agent.
It's been installed without errors, and both the OpenSSO and Apache server restarted.
The agent profile's been created. Also, I use the default (OpenDS) configuration repository
for OpenSSO, but an external (DSEE) user data directory.
I think I did all the required steps with regards to both directories, since I don't see any error
in the LDAP logs, each entry seems to be found as expected, the BIND operations are all
successfull.
Also, I use a sub-realm rather than the default top realm, and thus, I've modified the agent configuration
(in the agent profile) so that the login URL is now ... /UI/Login?realm=myrealm
When I try to access the Apache homepage, I get an error 500. The most recent OpenSSO server log file
(...../opensso/debug/Authentication) contains the following message:
ERROR: Cannot create authcontext with null org
The most recent agent log file (....../apache22_agent/Agent_001/logs/debug/amAgent) has the following error:
2009-07-07 17:08:11.992 Error 10513:80149a50 PolicyEngine: am_policy_evaluate: InternalException in Service::update_policy with error message:Naming query failed. and code:21
I don't know what else I can do to debug this problem and find a solution. Any idea ?Thank you Shubba,
With all available log details enabled, I now have the following messages on the agent side:
2009-07-09 10:14:51.731MaxDebug 5613:80149a50 all: No value specified for key com.sun.identity.agents.config.profile.attribute.mapping, using default value .
2009-07-09 10:14:51.731 Debug 5613:80149a50 NamingService: BaseService::doRequest(): Using server: http://portable.antibes.net:8080/opensso/namingservice.
2009-07-09 10:14:51.731MaxDebug 5613:80149a50 NamingService:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<RequestSet vers="1.0" svcid="com.iplanet.am.naming" reqid="10">
<Request><![CDATA[
<NamingRequest vers="3.0" reqid="2" sessid=""AQIC5wM2LY4SfcyaGFgc5h9Y7/kpf4f//ml82oVfNlbWxQE=@AAJTSQACMDE=#""> <GetNamingProfile> </GetNamingProfile> </NamingRequest>]]> </Request>
</RequestSet>
2009-07-09 10:14:51.712MaxDebug 5613:80149a50 NamingService: BaseService::sendRequest Request line: POST /opensso/namingservice HTTP/1.0
2009-07-09 10:14:51.712 Debug 5613:80149a50 NamingService: BaseService::sendRequest Cookie and Headers =Host: portable.antibes.net
2009-07-09 10:14:51.712 Debug 5613:80149a50 NamingService: BaseService::sendRequest Content-Length =Content-Length: 334
2009-07-09 10:14:51.712 Debug 5613:80149a50 NamingService: BaseService::sendRequest Header Suffix =Accept: text/xml
Content-Type: text/xml; charset=UTF-8
2009-07-09 10:14:51.712MaxDebug 5613:80149a50 NamingService: BaseService::sendRequest(): Total chunks: 7.
2009-07-09 10:14:51.712MaxDebug 5613:80149a50 NamingService: BaseService::sendRequest(): Sent 7 chunks.
2009-07-09 10:14:51.728 Debug 5613:80149a50 NamingService: HTTP Status = 500 (Internal Server Error)
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Http::Response::readAndParse(): Reading headers.
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Server: Apache-Coyote/1.1
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Content-Type: text/html;charset=utf-8
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Date: Thu, 09 Jul 2009 08:14:51 GMT
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Connection: close
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Http::Response::readAndParse(): Reading body content of length: 13830487939496281954
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 all: Connection::waitForReply(): returns with status success.
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: Http::Response::readAndParse(): Completed processing the response with status: success
2009-07-09 10:14:51.729MaxDebug 5613:80149a50 NamingService: <html><head><title>Apache Tomcat/6.0.18 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 500 - </h1><HR size="1" noshade="noshade"><p><b>type</b> Exception report</p><p><b>message</b> <u></u></p><p><b>description</b> <u>The server encountered an internal error () that prevented it from fulfilling this request.</u></p><p><b>exception</b> <pre>javax.servlet.ServletException: AMSetupFilter.doFilter
com.sun.identity.setup.AMSetupFilter.doFilter(AMSetupFilter.java:117)
</pre></p><p><b>root cause</b> <pre>java.lang.NullPointerException
com.iplanet.services.naming.service.NamingService.processRequest(NamingService.java:361)
com.iplanet.services.naming.service.NamingService.process(NamingService.java:352)
com.iplanet.services.comm.server.PLLRequestServlet.handleRequest(PLLRequestServlet.java:180)
com.iplanet.services.comm.server.PLLRequestServlet.doPost(PLLRequestServlet.java:134)
javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
com.sun.identity.setup.AMSetupFilter.doFilter(AMSetupFilter.java:91)
</pre></p><p><b>note</b> <u>The full stack trace of the root cause is available in the Apache Tomcat/6.0.18 logs.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/6.0.18</h3></body></html>
2009-07-09 10:14:51.729 Warning 5613:80149a50 NamingService: BaseService::doHttpPost() failed, HTTP error = 500
2009-07-09 10:14:51.729 Debug 5613:80149a50 NamingService: NamingService()::getProfile() returning with error code HTTP error.
2009-07-09 10:14:51.729 Error 5613:80149a50 PolicyEngine: am_policy_evaluate: InternalException in Service::update_policy with error message:Naming query failed. and code:21
In my Tomcat server (OpenSSO server web container), I have the following errors:
Jul 9, 2009 10:12:35 AM org.apache.catalina.startup.Catalina start
INFO: Server startup in 22746 ms
[Fatal Error] :2:46: Element type "NamingRequest" must be followed by either attribute specifications, ">" or "/>".
java.lang.NullPointerException
at com.iplanet.services.naming.service.NamingService.processRequest(NamingService.java:361)
at com.iplanet.services.naming.service.NamingService.process(NamingService.java:352)
at com.iplanet.services.comm.server.PLLRequestServlet.handleRequest(PLLRequestServlet.java:180)
at ...
It seems like the problem comes from the couple of closing square brackets in the NamingRequest tag:
</NamingRequest>]]>
I don't know where it comes from, so if you have any idea I'd appreciate it.
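For what it's worth, that suspicion is easy to check in isolation. Below is a minimal sketch (the element name is borrowed from the log above; the attribute is made up) showing that a standard Java XML parser rejects a request body that carries a stray ]]> after the closing tag:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;

public class NamingRequestParseDemo {
    // Returns null if the document parses cleanly, otherwise the parser's error message.
    static String tryParse(String xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return null;
        } catch (Exception e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        String clean = "<NamingRequest vers=\"1.0\"></NamingRequest>";
        System.out.println(tryParse(clean));          // null: well-formed
        System.out.println(tryParse(clean + "]]>"));  // parse error: content after root element
    }
}
```

If the stray ]]> really is in the request body, a fatal parse error like the one in the Tomcat log (and a null document handed to NamingService.processRequest) would be consistent with the NullPointerException seen above.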
Cheers, -
Could anyone assist me on how to provide a model answer to the first question, and also check my response to the second question?
Question
A Web crawler is a program which wanders around the World-wide Web looking for specific documents required by the user of the crawler, for example the user may want the URLs of all the documents which contain the words "Java" and "compiler". What classes in Java do you think would be used in such a program? In answering this question you will need to explain your choice of classes with respect to the functions required of the Web crawler.
Question
A mailing list program is used to send emails to computer users who register with a particular list. This question asks you to provide the code for methods within a class MailingList. This class implements objects which associate a particular mailing list with a group of users. Assume the existence of a class User which describes users and that mailing lists are identified by a string.
How would you implement such a mailing lists using facilities in java.util?
Using your suggested implementation write down the code for the following methods:
A constructor for MailingList which has zero arguments.
A method addUser() which has two arguments: a string argument which represents a mailing list and a User object which represents a user. The method adds the user to the mailing list.
A method removeUser which has two arguments: a string argument which represents the mailing list and an object which represents a user. The method removes the user from the mailing list.
A method getNumbers() which has no arguments, but which returns the total number of users registered with all the mailing lists.
A method deleteMailingList() which has a string argument that is a mailing list. It deletes the mailing list and all the users associated with it.
A method ListNumber() which has no arguments and which returns an int which is the number of mailing lists that are currently administered.
Do not write any code for error processing.
In order to implement the mailing lists I would use the Hashtable class to map mailing-list names (the key objects in the hashtable) to groups of users (the value objects in the hashtable).
The key objects would be String objects.
I would have a class User which would describe a general user, and I could then subclass this to implement different types of users.
import java.util.*;

public class MailingList {
    // Map each mailing-list name (a String key) to the group of users on it.
    private Hashtable<String, Set<User>> lists;

    public MailingList() {
        lists = new Hashtable<String, Set<User>>();
    }

    public void addUser(String listName, User u) {
        if (!lists.containsKey(listName))
            lists.put(listName, new HashSet<User>());
        lists.get(listName).add(u);
    }

    public void removeUser(String listName, User u) {
        lists.get(listName).remove(u);
    }

    public int getNumbers() {
        int total = 0;
        for (Set<User> users : lists.values())
            total += users.size();
        return total;
    }

    public void deleteMailingList(String listName) {
        lists.remove(listName);
    }

    public int ListNumber() {
        return lists.size();
    }
}
Many thanks

re: Question
"A Web crawler is a program which wanders around the World-wide Web looking for specific documents required by the user of the crawler, for example the user may want the URLs of all the documents which contain the words "Java" and "compiler".
What classes in Java do you think would be used in such a program? In answering this question you will need to explain your choice of classes with respect to the functions required of the Web crawler."
I am very interested to know how the implementation below would work. I am unsure as to how it can search through links (perhaps with a never-ending while loop?). I am also not sure how StreamTokenizer actually functions and whether it would really solve part of the problem.
My incomplete answer
To search for and cache the URLs of all the documents which contain the words "Java" and "compiler" you must first make a connection to the web documents, set up input streams to read from the URL documents, and then process the text using StreamTokenizer. You can then check (with the aid of an algorithm) whether any tokens in the document contain the text searched for; if they do, the URL can be flagged and cached (added to a Hashtable).
You would need to implement the following classes for the related functions
Web classes in java.net - Their main functions are to allow you to access Web documents, for example they let you read the documents.
The URL and URLConnection classes (Web classes) allow you to access Web documents via their URLs (Uniform Resource Locators): pointers to "resources" on the World Wide Web.
The URL class handles connections to Web documents. A URL constructor which has a string parameter treats the string parameter as the URL, e.g.
URL oldWeb = new URL("http://info.cern.ch/hyper/old.tex");
sets up a URL object associated with the World Wide Web protocol (http), a host info.cern.ch, a directory hyper and a file old.tex.
All URL constructors throw MalformedURLException if the format of the URL is incorrect, if no protocol is specified, or an unknown protocol is found.
The abstract class URLConnection is the superclass of all classes that represent a communications link between the application and a URL. Instances of this class can be used both to read from and to write to the resource referenced by the URL
In order to process the text within a URL object the simplest strategy is to use the method openStream() defined in URL. This opens an input stream connected to the URL object so that the input stream methods can be used to read from the object, e.g.;
URL queryWeb = new URL("http://infor.cern.ch/oldFin/lemor.txt");
try {
    InputStream is = queryWeb.openStream();
    DataInputStream ds = new DataInputStream(is);
    // code for processing the data input stream ds - the HTML and text contained in the
    // URL object can be accessed using methods such as readChar defined in DataInputStream
}
catch (IOException e) {
    System.out.println("Problem with connection");
}
You will need to import the java.io package in order to carry out the input/output operations above.
StreamTokenizer class:
StreamTokenizer splits up the characters which are read by an InputStream. The constructor for this class has one argument which is a Stream object
StreamTokenizer st=new StreamTokenizer(is);
sets up a StreamTokenizer based on the InputStream or Reader object is. This means that tokens can be read from this stream, with tokens being delineated by white space.
The main method used to extract tokens is nextToken() which returns an integer which describes what token has been read
The StringTokenizer class allows an application to break a string into tokens.
StringTokenizer st = new StringTokenizer(s, " ");
while (st.hasMoreTokens()) {
    System.out.println(st.nextToken());
}
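To tie the pieces together, here is a minimal, hedged sketch of the matching step: a containsAll() helper (a made-up name) that reads tokens with StreamTokenizer and checks whether every search word appears. A StringReader stands in for the stream you would get from url.openStream(), so the example runs without a network connection:

```java
import java.io.*;
import java.util.*;

public class WordSearch {
    // True if every search word occurs among the tokens read from r.
    // In the crawler, r would wrap url.openStream(); here any Reader works.
    static boolean containsAll(Reader r, Set<String> words) throws IOException {
        Set<String> remaining = new HashSet<>();
        for (String w : words) remaining.add(w.toLowerCase());
        StreamTokenizer st = new StreamTokenizer(r);
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            if (st.ttype == StreamTokenizer.TT_WORD)
                remaining.remove(st.sval.toLowerCase());
            if (remaining.isEmpty()) return true;
        }
        return remaining.isEmpty();
    }

    public static void main(String[] args) throws IOException {
        Set<String> query = new HashSet<>(Arrays.asList("Java", "compiler"));
        String page = "A Java compiler translates source code";
        System.out.println(containsAll(new StringReader(page), query));      // true
        System.out.println(containsAll(new StringReader("no match here"), query)); // false
    }
}
```

A URL whose stream satisfies containsAll() would then be the one flagged and added to the Hashtable of results; following links would require additionally parsing href attributes out of the HTML, which StreamTokenizer alone does not do.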
I can't think of any other related classes. Have I missed something? -
Could I please get some assistance with a recurrent failure of iTunes to install correctly on my Dell desktop running Windows 7 Professional? Thank you. Bobby
Well, I found a workaround for my issue. It's described in this article:
[url="iPod not recognized correctly on Toshiba laptop"]http://docs.info.apple.com/article.html?artnum=300836[/url]
By disabling the "Standard Enhanced PCI to USB Host Controller" in device manager, my iPod was correctly recognized and I was able to sync with iTunes. That's the good news.
The bad news is that: a) Windows warns me that my "device can operate faster if connected to a USB 2.0 port" - and the sync with iTunes runs slowly; b) if I re-enable that USB device (as suggested in the Apple article), my iPod is no longer recognized.
For now, I'll just disable this device when I need to sync, then reenable it when done. But this is going to be a pain. I really don't consider this a fix. It's really annoying that the iPod is the only USB device that requires this kind of nonsense.
Looking forward to a "real" fix for this some time soon. -
Problems with SSH: Connection Refused
Greetings fellow Arch users,
I have hit a bit of a snag that I could really use some extra help getting around. I've tried everything I can think of (and everything that Google thought might work) and I have my back rather against a wall, so I thought I'd come here to see if anyone can offer some advice.
To make a long story short, I am a college student and am attempting to set up an ssh server on a desktop at my house so I can access it remotely from the college. I have the computer set up and the server running, however I am having difficulty making connections to it from my laptop. I know that the server is running, because I can log into it both from the server itself (sshing into local host) and from my laptop when I use the internal IP address.
The server is on a static IP address within the network(192.168.0.75), and my router is configured to forward TCP port 1500 to it (I'm using 1500 as the port for my ssh server). However, when I attempt to log into the ssh server using my network's external IP address, the connection is refused. I used nmap to scan my network and found that, even though the proper ports are forwarded to the proper place as far as my Router's configuration interface is concerned, port 1500 is not listed as one of the open TCP ports. I also, to test it, temporarily disabled the firewalls on both the server and the client. That didn't help. The command that I am running is:
ssh -p 1500 douglas@[external IP address]
As I am really not sure what is causing this problem, I don't know what information to provide. So here is everything that my inexperienced mind sees as likely being important. If you need anything more, let me know and I will do my best to provide it.
Here is the sshd_config file from my server.
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Port 1500
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
# The default requires explicit activation of protocol 1
#Protocol 2
# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h
#ServerKeyBits 1024
# Ciphers and keying
#RekeyLimit default none
# Logging
# obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#RSAAuthentication yes
#PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#RhostsRSAAuthentication no
# similar for protocol version 2
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
# Change to no to disable s/key passwords
ChallengeResponseAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
#X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
PrintMotd no # pam does that
#PrintLastLog yes
#TCPKeepAlive yes
#UseLogin no
UsePrivilegeSeparation sandbox # Default for new installations.
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS yes
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# override default of no subsystems
Subsystem sftp /usr/lib/ssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# ForceCommand cvs server
The output of ip addr when run on the server:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:21:9b:3a:be:94 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.75/24 brd 192.168.255.0 scope global enp8s0
valid_lft forever preferred_lft forever
inet6 fe80::221:9bff:fe3a:be94/64 scope link
valid_lft forever preferred_lft forever
Here is the output from running nmap on the network:
Starting Nmap 6.40 ( http://nmap.org ) at 2013-09-28 21:05 EDT
Initiating Ping Scan at 21:05
Scanning address [2 ports]
Completed Ping Scan at 21:05, 0.01s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 21:05
Completed Parallel DNS resolution of 1 host. at 21:05, 0.05s elapsed
Initiating Connect Scan at 21:05
Scanning pa-addresss.dhcp.embarqhsd.net (address) [1000 ports]
Discovered open port 80/tcp on address
Discovered open port 443/tcp on address
Discovered open port 23/tcp on address
Discovered open port 21/tcp on address
Completed Connect Scan at 21:05, 4.08s elapsed (1000 total ports)
Nmap scan report for pa-address.dhcp.embarqhsd.net (address)
Host is up (0.036s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
21/tcp open ftp
23/tcp open telnet
80/tcp open http
443/tcp open https
8080/tcp filtered http-proxy
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 4.19 seconds
Here is the ssh_config client-side:
# $OpenBSD: ssh_config,v 1.27 2013/05/16 02:00:34 dtucker Exp $
# This is the ssh client system-wide configuration file. See
# ssh_config(5) for more information. This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.
# Configuration data is parsed as follows:
# 1. command line options
# 2. user-specific file
# 3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.
# Site-wide defaults for some commonly used options. For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.
# Host *
# ForwardAgent no
# ForwardX11 no
# RhostsRSAAuthentication no
# RSAAuthentication yes
# PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/identity
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# Port 22
Protocol 2
# Cipher 3des
# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
# MACs hmac-md5,hmac-sha1,[email protected],hmac-ripemd160
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
# RekeyLimit 1G 1h
Output of ssh -v during connection attempt:
OpenSSH_6.3, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /home/douglas/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: ssh_connect: needpriv 0
debug1: Connecting to address [address] port 1500.
debug1: connect to address address port 1500: Connection refused
ssh: connect to host address port 1500: Connection refused
Thank you guys ahead of time. Getting this server operational is hardly critical, it is just a side project of mine, but I would really like to see it working.
Douglas Bahr Rumbaugh
Last edited by douglasr (2013-09-29 02:58:56)

Okay, so I finally have the opportunity to try and log in from a remote network. And. . . it doesn't work. Which is just my luck because I now need to wait an entire week, at least, before I can touch the server again. Anyway, running ssh with the maximum verbosity I get this output:
douglas ~ $ ssh -vvv -p 2000 address
OpenSSH_6.3, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /home/douglas/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: ssh_connect: needpriv 0
debug1: Connecting to address [address] port 2000.
debug1: connect to address address port 2000: Connection timed out
ssh: connect to host address port 2000: Connection timed out
It takes a minute or two for the command to finish with the connection timeout, as one would expect. And yes, I am reasonably sure that the address that I am using is my home network's external IP. It is dynamic, but I checked it before I left which was just over an hour ago. I guess that it may have changed. I'll know that for sure in the morning, when my server sends me an automatic email with the network's current address. In the meantime I am operating under the assumption that the address I am using is correct. What else could be the problem? -
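One thing worth noting: "Connection refused" and "Connection timed out" are plain TCP behaviours, so they can be probed without ssh at all. Below is a small, hedged Java sketch (class and method names are made up) that attempts a raw TCP connect the way the ssh client does. "Refused" means something actively answered with a reset (the router or server rejected the connection), while a timeout means packets are being silently dropped (a firewall, or the router not forwarding, or NAT hairpinning failing when you test from inside the LAN):

```java
import java.io.IOException;
import java.net.*;

public class PortCheck {
    // Attempt a plain TCP connect, the same first step the ssh client performs.
    static String probe(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return "open";
        } catch (SocketTimeoutException e) {
            return "timeout";              // silently dropped, like the remote attempt
        } catch (IOException e) {
            return "refused/unreachable";  // RST or no route, like the earlier LAN attempt
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a listener we control, so the sketch runs anywhere;
        // in practice you would probe the external address and forwarded port.
        int port;
        try (ServerSocket listener = new ServerSocket(0)) {
            port = listener.getLocalPort();
            System.out.println(probe("127.0.0.1", port, 2000)); // open
        }
        System.out.println(probe("127.0.0.1", port, 2000)); // refused/unreachable
    }
}
```

Running this against the external address and port 2000 from a genuinely remote network would distinguish the router dropping the traffic (timeout) from sshd or the router actively rejecting it (refused).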
Multiple queries with 1 connection
Can I execute multiple queries with one connection?
//Example -
<%
String firstconn;
Class.forName("org.gjt.mm.mysql.Driver");
// create connection string
firstconn = "jdbc:mysql://localhost/profile?user=mark&password=mstringham";
// pass database parameters to JDBC driver
Connection aConn = DriverManager.getConnection(firstconn);
// query statement
Statement firstSQLStatement = aConn.createStatement();
String firstquery = "UPDATE auth_users SET last_log='" + rightnow + "' WHERE name='" + username + "' ";
// get result code
int firstSQLStatus = firstSQLStatement.executeUpdate(firstquery);
// close connection
firstSQLStatement.close();
%>
Now, instead of building a new connection for each query, can I use the same connection info for another query?
if so - how do you do this?
thanks for any help.
Mark

Create multiple statement objects from your connection. It's a good idea to close these in a finally block after you're done with them:
Connection conn = null;
Statement stmt1 = null;
Statement stmt2 = null;
try {
    conn = DriverManager.getConnection(firstconn);
    stmt1 = conn.createStatement();
    // some sql here
    stmt2 = conn.createStatement();
    // some more sql here
} finally {
    try { if (stmt1 != null) stmt1.close(); } catch (SQLException ignored) {}
    try { if (stmt2 != null) stmt2.close(); } catch (SQLException ignored) {}
    try { if (conn != null) conn.close(); } catch (SQLException ignored) {}
} -
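On Java 7 or later the same close-in-reverse-order discipline comes for free with try-with-resources. The sketch below uses toy stand-ins for Connection and Statement so it runs without a database driver; with the real java.sql classes the try header would look the same:

```java
import java.util.*;

// Toy stand-ins for java.sql.Connection and Statement so the sketch runs
// without a database; the automatic close ordering is what matters.
public class OneConnectionDemo {
    static final List<String> log = new ArrayList<>();

    static class Stmt implements AutoCloseable {
        private final String name;
        Stmt(String name) { this.name = name; }
        void execute(String sql) { log.add(name + ": " + sql); }
        public void close() { log.add(name + " closed"); }
    }

    static class Conn implements AutoCloseable {
        Stmt createStatement(String name) { return new Stmt(name); }
        public void close() { log.add("connection closed"); }
    }

    public static void main(String[] args) {
        // One connection, several statements; resources are closed in reverse
        // declaration order automatically, even if a statement throws.
        try (Conn conn = new Conn();
             Stmt stmt1 = conn.createStatement("stmt1");
             Stmt stmt2 = conn.createStatement("stmt2")) {
            stmt1.execute("UPDATE auth_users SET last_log = ...");
            stmt2.execute("SELECT COUNT(*) FROM auth_users");
        }
        System.out.println(log);
    }
}
```

The closes happen in the order stmt2, stmt1, connection, which is exactly what the hand-written finally block above does.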
Error message when connecting to iTunes says my new iPad 4 can't be connected as it requires version 10.7 or later. Surely a new iPad already has the latest version, and if it doesn't how do I get it?
Not the iPad - your computer must be running iTunes 10.7 in order to sync. It has nothing to do with iTunes on the iPad.
You can download iTunes for your computer here.
http://www.apple.com/itunes/