DDL Replication is not supported in DB2
I got this error after creating a DDL process on DB2.
Has anyone hit the same problem, and how do I fix this issue?
ERROR OGG-00453 DDL Replication is not supported for this database.
Thanks, Riyas
Hi Riyas,
I just found a document which states that DDL replication is not supported for DB2 databases:
Does Oracle GoldenGate (OGG) Support DDL Replication for DB2? (Doc ID 1303729.1)
Thanks,
Kamal.
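In practice this means the DB2 extract parameter file must not contain any DDL options. A minimal sketch of a DML-only extract (the process name, database, credentials, and schema below are placeholders, not from the thread):

```
EXTRACT exdb2
SOURCEDB mydb2, USERID gguser, PASSWORD ggpass
EXTTRAIL ./dirdat/aa
-- No DDL INCLUDE/EXCLUDE parameters: DDL capture is not supported on DB2,
-- so only DML for the listed tables is captured
TABLE MYSCHEMA.*;
```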
Similar Messages
-
DDL replication is not working...
Hi,
The database version we are using is 11.2.0.2.0 and GoldenGate is 11.2.1.0.1. We have a hub-and-spoke environment where the hub and spoke schemas are on different servers. DML replication works fine, but when we enable DDL replication nothing is replicated: when a table is created in the hub schema, it is not created in the spoke.
Kindly let us know if we missed anything or how to proceed.
Thanks in Advance.
Regards,
Raj
Hi Stevencallan,
Thanks for your response.
We are replicating between one hub and one spoke environment only. Our requirement is that tables/packages will be created in the hub environment at run time and need to be replicated to the spoke. Does DDL replication support this?
Kindly provide some detailed information; that would really help us.
Thanks in Advance.
Regards,
Raj -
GoldenGate - Oracle to MSSQL - handling DDL replication abends on replicat
Sorry for the cross-post. I clearly failed to actually read the "there's a GoldenGate forum" sticky...
Hello -
Very much a GoldenGate noob here, so please excuse me if I fumble on the terms / explanations - still learning.
We've recently been lucky enough to have a GoldenGate pump put into our campus environment to support our data warehouse. We don't manage the pump or source systems - just the target.
Pump: GoldenGate 11.1.1.1
Source: Oracle 11g
Target: MSSQL 2008R2
~5,000 tables of hugely varying sizes
The extract is apparently configured to push DDL changes, which is clearly not going to work with a MSSQL 2008 R2 target. We're getting abend messages on the replicat and I'd like to know if we can bypass them on the replicat or need to ask that the extract process be modified to exclude DDL operations.
The replicat error logs show exception:
OGG-00453: DDL Replication is not supported for this database
On the replicat I've tried including:
DDL EXCLUDE ALL
DDLERROR DEFAULT DISCARD (or DDLERROR DEFAULT IGNORE - neither let the process continue)
The replicat just abends with the same OGG-00453 exception.
My question: Can I gracefully handle these abends on the replicat? Or do I need to request the extract be updated with "DDL EXCLUDE ALL." Ideally, we can handle this on the replicat - I'm trying to be considerate of the main GoldenGate admin's time and also avoid any disruption of the extract.
Any direction / info / ideas much appreciated.
Thank you,
Eric924681 wrote: (quoting the post above)
I find it strange that DDLERROR DEFAULT IGNORE does not work. Are you sure you placed it correctly in the replicat parameter file? Did you restart the replicat after making the change?
Why don't you try specifying the error explicitly, like:
DDLERROR <error> IGNORE -
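Put together, a replicat parameter sketch for this situation might look like the following (the process, connection, and mapping names are placeholders; note that on a SQL Server target the cleanest fix may still be excluding DDL at the extract, since the replicat can abend before parameter-based error handling applies):

```
REPLICAT rmssql
TARGETDB mytarget, USERID gguser, PASSWORD ggpass
-- Drop all DDL operations and ignore the specific OGG-00453 error
DDL EXCLUDE ALL
DDLERROR 453 IGNORE
MAP WAREHOUSE.*, TARGET dbo.*;
```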
Azure geo-replication does not work - Feature is disabled
When I try to add geo-replication to a database (S0) on the east coast (I tried both the Azure management console and the Azure portal), it creates the west coast server, and I guess the next step is to create the replication database, but that fails. The error I get is "Feature is disabled" and the database is not created.
Any idea what feature needs to be enabled for this to work? The entire process seems pretty straightforward; no idea why it would fail.
Edit: details on error
OPERATIONNAME: Update
SQL database
Status: Failed
SUBSTATUS: Bad
Request (HTTP Status Code: 400)
Level: Error
PROPERTIES: statusCode:BadRequest
statusMessage:{"code":"45150","message":"Feature is disabled.","target":null,"details":[{"code":"45150","message":"Feature is disabled.","target":null,"severity":"16"}],"innererror":[]}
Hi,
Thanks for posting here.
I suggest you check these links.
Standard Geo-Replication for Azure SQL Database:
http://msdn.microsoft.com/en-us/library/azure/dn758204.aspx
http://blogs.technet.com/b/blainbar/archive/2014/08/12/step-by-step-azure-sql-database-introduces-geo-restore-standard-geo-replication-and-auditing.aspx
Peer-to-peer replication is not supported on Standard Edition, but bi-directional replication is. Here is a tutorial on how to make this work.
http://sqlblog.com/blogs/hilary_cotter/archive/2011/10/28/implementing-bi-directional-transactional-replication.aspx
Hope this helps you.
Girish Prajwal -
Data types in Sql Server 2012 not supported by replication
Hi All,
I am planning to configure replication on SQL Server 2012. I need to know which data types are not supported by replication and whether there are any other limitations. Kindly suggest.
Regards
Rahul
What type of replication are you looking to implement?
Have you had a look at this thread:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bbec1a86-14cd-4d90-8b62-c875de4cf9a0/data-type-not-supported-in-sql-server-2008-merge-replication?forum=sqlreplication -
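As a starting point for such an audit, a generic catalog query (a sketch, not specific to any replication type) lists every user table column with its data type so it can be compared against the documented limitations:

```sql
SELECT s.name  AS schema_name,
       t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name,
       c.max_length
FROM sys.columns c
JOIN sys.tables  t  ON t.object_id     = c.object_id
JOIN sys.schemas s  ON s.schema_id     = t.schema_id
JOIN sys.types   ty ON ty.user_type_id = c.user_type_id
ORDER BY s.name, t.name, c.column_id;
```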
Team, thanks for looking into this.
As a last resort for optimizing my stored procedure (below), I wanted to create a selective XML index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However, EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
Is there ANY alternative way I can optimize the stored proc below?
Thanks in advance for your response(s) !
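One possible cause worth checking (an assumption on my part, not confirmed in the thread): the selective XML index here is created on a temp table, and temp tables live in tempdb. sys.sp_db_selective_xml_index reports on the database it is executed in, so it can return 1 in the user database while the feature is not enabled in tempdb, where the index would actually be built:

```sql
-- Reports whether selective XML indexes are enabled in the current database
EXECUTE sys.sp_db_selective_xml_index;

-- Hypothetical check in tempdb, where #XML_Hold actually lives
EXECUTE tempdb.sys.sp_db_selective_xml_index;
```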
/****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- EXEC [dbo].[MN_Process_DDLSchema_Changes]
ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
AS
BEGIN
SET NOCOUNT ON -- Doesn't have impact ( May be this wont on SQL Server Extended events session's being created on Server(s) , DB's )
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select getdate() as getdate_0
DECLARE @XML XML , @Prev_Insertion_time DATETIME
-- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
-- PRINT '1'
CREATE TABLE #Temp
(
EventName VARCHAR(100),
Time_Stamp_EE DATETIME,
ObjectName VARCHAR(100),
ObjectType VARCHAR(100),
DbName VARCHAR(100),
ddl_Phase VARCHAR(50),
ClientAppName VARCHAR(2000),
ClientHostName VARCHAR(100),
server_instance_name VARCHAR(100),
ServerPrincipalName VARCHAR(100),
nt_username VARCHAR(100),
SqlText NVARCHAR(MAX)
)
CREATE TABLE #XML_Hold
(
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessary for indexing on the XML column
BufferXml XML
)
select getdate() as getdate_01
INSERT INTO #XML_Hold (BufferXml)
SELECT
CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
FROM sys.dm_xe_session_targets xet
INNER JOIN sys.dm_xe_sessions xes
ON xes.address = xet.event_session_address
WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
--RETURN
--SELECT * FROM #XML_Hold
select getdate() as getdate_1
-- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
FOR
(
PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
)
--RETURN
--CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
--SELECT GETDATE() AS GETDATE_2
-- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
--CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
--USING XML INDEX [IX_XML_Hold]
---- FOR VALUE
-- --FOR PROPERTY
-- FOR PATH
--SELECT GETDATE() AS GETDATE_3
--PRINT '2'
-- RETURN
SELECT GETDATE() GETDATE_3
INSERT INTO #Temp
(
EventName ,
Time_Stamp_EE ,
ObjectName ,
ObjectType,
DbName ,
ddl_Phase ,
ClientAppName ,
ClientHostName,
server_instance_name,
nt_username,
ServerPrincipalName ,
SqlText
)
SELECT
p.q.value('@name[1]','varchar(100)') AS eventname,
p.q.value('@timestamp[1]','datetime') AS timestampvalue,
p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
FROM #XML_Hold
CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
SELECT GETDATE() GETDATE_4
-- SELECT * FROM #TEMP
-- SELECT COUNT(*) FROM #TEMP
-- SELECT GETDATE()
-- RETURN
-- PRINT '3'
--RETURN
INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
(
[UserName]
,[DbName]
,[ObjectName]
,[client_app_name]
,[ClientHostName]
,[ServerName]
,[SQL_TEXT]
,[EE_Time_Stamp]
,[Event_Name]
)
SELECT
CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
ELSE T.nt_username
END
,T.DbName
,T.objectname
,T.clientappname
,t.ClientHostName
,T.server_instance_name
,T.sqltext
,T.Time_Stamp_EE
,T.eventname
FROM
#TEMP T
/** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
-- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
WHERE ddl_Phase ='Commit'
AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server Name needes to be added on to to xml ( Events in session )
AND MN.[DbName] = T.DbName
AND MN.[Event_Name] = T.EventName
AND MN.[ObjectName]= T.ObjectName
AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
AND MN.[SQL_TEXT] =T.SqlText -- Ryelugu 03/05/2015 This is a comparision Metric as well , But needs to decide on
-- Performance factor here ; will take advice from Lance on whether comparing varchar(max) is a viable idea
*/
--SELECT GETDATE()
--PRINT '4'
--RETURN
SELECT
top 100
[EE_Time_Stamp]
,[ServerName]
,[DbName]
,[Event_Name]
,[ObjectName]
,[UserName]
,[SQL_TEXT]
,[client_app_name]
,[Created_Date]
,[ClientHostName]
FROM
[dbo].[MN_DDLSchema_Changes_log]
ORDER BY [EE_Time_Stamp] desc
-- select getdate()
-- ** DELETE EVENTS after logging into Physical table
-- NEED TO identify if this @XML can be updated into a physical system table such that previously loaded events are left untouched
-- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
-- SELECT @XML
SELECT GETDATE() GETDATE_5
END
GO
Rajkumar Yelugu
@@VERSION:
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
(1 row(s) affected)
Compatibility level is set to 110.
One of the limitations states: XML columns with a depth of more than 128 nested nodes.
How do I verify this? Thanks.
Rajkumar Yelugu -
I am on Source side and DDL replication is enabled.
I will provide an initial dump of the TABLE and VIEW objects to the target using a Data Pump export.
Then I will start the GoldenGate Extract and Pump on the source.
The source PRM file has the table list:
EXTRACT ex1test
DDL INCLUDE MAPPED
EXTTRAIL ./trails/l1
TABLE ABCD.T100;
TABLE ABCD.T200;
TABLE ABCD.T300;
The objective is: if I change a VIEW definition at the source, it should be reflected at the target.
The question is: in my PRM file, how can I include VIEWs?
If the source has 10 views, only 5 are replicated to the target.
DDL changes should be considered only for the views that exist at the target; the others must be excluded.
Thanks and Regards,
Kurian
Hi Kurian,
Oracle GoldenGate supports VIEW replication in both Classic and Integrated modes, but there are some limitations:
1. Capture from a view is supported when Extract is in initial-load mode. The data is captured from the Source View and not the Redologs.
2. Changes made to the data of the view will not be captured.
3. View replication is only supported for inherently updateable views in which case the source table and target view structures must be identical.
You can exclude DDL objects using ddl_filter.sql. To learn more about ddl_filter.sql, follow the link below:
http://docs.oracle.com/goldengate/1212/gg-winux/GIORA/ddl.htm#GIORA316
Under this,
13.8.1 Filtering with PL/SQL Code
Regards,
Veera -
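As an alternative to filtering in ddl_filter.sql, the DDL parameter itself can scope capture to specific object types and names. A sketch for the extract (verify the exact syntax against the DDL parameter reference for your GoldenGate version; the view names below are placeholders):

```
EXTRACT ex1test
EXTTRAIL ./trails/l1
DDL INCLUDE MAPPED, &
    INCLUDE OBJTYPE 'VIEW' OBJNAME ABCD.V100, &
    INCLUDE OBJTYPE 'VIEW' OBJNAME ABCD.V200
TABLE ABCD.T100;
TABLE ABCD.T200;
TABLE ABCD.T300;
```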
Is there any restriction on DDL replication with the 11.2.0.1 GG version on HP-UX?
Here are a few:
1. ALTER TABLE ... MOVE TABLESPACE
2. DDL on nested tables
3. ALTER DATABASE and ALTER SYSTEM (these are not considered to be DDL)
4. DDL on a standby database
In addition, classic capture mode does not support DDL that involves password-based column encryption, such as:
1.CREATE TABLE t1 ( a number, b varchar2(32) ENCRYPT IDENTIFIED BY my_password);
2.ALTER TABLE t1 ADD COLUMN c varchar2(64) ENCRYPT IDENTIFIED BY my_password
I would suggest checking the documentation; you can find it here: http://www.oracle.com/technetwork/middleware/goldengate/documentation/index.html
-Onkar -
I am doing a test setup of two master sites using the Oracle Advanced Replication feature, but I seem to be lost. Will Oracle Advanced Replication NOT duplicate DDL from one master site to the other site? Do I have to run the procedure dbms_repcat.execute_ddl each time in order to bring the changes over?
I have an application that dynamically modifies table structure without DBA intervention; I may be in trouble if Oracle Advanced Replication works this way...
I'd appreciate some insight from the gurus here. Thanks
What version of Oracle?
In any currently supported Oracle version AR is obsolete and has been replaced by Streams. -
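For reference, the manual DDL propagation call the poster mentions looks roughly like this (the group name and DDL text are placeholders; in Streams-based setups this step is unnecessary because DDL can be captured automatically when configured):

```sql
-- Push one DDL statement to all master sites in an Advanced Replication group
BEGIN
  DBMS_REPCAT.EXECUTE_DDL(
    gname    => 'MYGROUP',
    ddl_text => 'ALTER TABLE scott.emp ADD (commission NUMBER(7,2))');
END;
/
```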
Master data loading failed: error "Update mode R is not supported by the extraction API"
Hello Experts,
I load master data for 0Customer_Attr through a daily process chain, and it had been running successfully.
For the last 2 days the master data load for 0Customer_Attr has failed with the following error message:
"Update mode R is not supported by the extraction API"
Can anyone tell me what is that error for? how to resolve this issue?
Regards,
Nirav
Hi
The update mode R error occurs in the following case:
You run a delta (for master data) which fails due to some error. To resolve that error, you set the load to red and try to repeat it.
This time the load fails with update mode R, as repeat delta is not supported.
So now the only thing you can do is re-init the delta (as described in the posts above) and then proceed. The earlier problem has nothing to do with update mode R.
For example, say your first delta failed with a replication issue. Replicating and repeating alone will not fix update mode R; you have to do both: replicate the DataSource and re-init to clear the update mode R.
One more thing I would like to add: if the delta that failed the first time (not with update mode R) had picked up records, you have to do an init with data transfer; if it failed without picking up any records, then do an init without data transfer.
Hope this helps
Regards
Shilpa
Edited by: Shilpa Vinayak on Oct 14, 2008 12:48 PM -
"DBSL does not support extended connect protocol" while configuring SSFS
Hi, I'm trying to configure SSFS on an ERP EHP7 on HANA database system, following this guide: SSFS Implementation for Oracle Database.
But when I try to test the connection with R3trans, I get the following error in the log:
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): DBSL does not support extended connect protocol
4 ETW000 ==> ssfs won't be used 26 0.004936
I already updated DBSL_LIB to the latest version, but it doesn't help.
Here is full log:
4 ETW000 C:\usr\sap\CM1\DVEBMGS04\exe\R3trans.EXE version 6.24 (release 741 - 16.05.14 - 20:14:06).
4 ETW000 unicode enabled version
4 ETW000 ===============================================
4 ETW000
4 ETW000 date&time : 02.06.2014 - 13:49:16
4 ETW000 control file: <no ctrlfile>
4 ETW000 R3trans was called as follows: C:\usr\sap\CM1\DVEBMGS04\exe\R3trans.EXE -d
4 ETW000 trace at level 2 opened for a given file pointer
4 ETW000 [ dev trc,00000] Mon Jun 02 13:49:16 2014 106 0.000106
4 ETW000 [ dev trc,00000] db_con_init called 36 0.000142
4 ETW000 [ dev trc,00000] set_use_ext_con_info(): ssfs will be used to get connect information
4 ETW000 61 0.000203
4 ETW000 [ dev trc,00000] determine_block_commit: no con_hdl found as blocked for con_name = R/3
4 ETW000 26 0.000229
4 ETW000 [ dev trc,00000] create_con (con_name=R/3) 17 0.000246
4 ETW000 [ dev trc,00000] Loading DB library 'dbhdbslib.dll' ... 46 0.000292
4 ETW000 [ dev trc,00000] DlLoadLib success: LoadLibrary("dbhdbslib.dll"), hdl 0, count 1, addr 000007FEED100000
4 ETW000 3840 0.004132
4 ETW000 [ dev trc,00000] using "C:\usr\sap\CM1\DVEBMGS04\exe\dbhdbslib.dll" 21 0.004153
4 ETW000 [ dev trc,00000] Library 'dbhdbslib.dll' loaded 21 0.004174
4 ETW000 [ dev trc,00000] function DbSlExpFuns loaded from library dbhdbslib.dll 42 0.004216
4 ETW000 [ dev trc,00000] Version of 'dbhdbslib.dll' is "741.10", patchlevel (0.22) 81 0.004297
4 ETW000 [ dev trc,00000] function dsql_db_init loaded from library dbhdbslib.dll 25 0.004322
4 ETW000 [ dev trc,00000] function dbdd_exp_funs loaded from library dbhdbslib.dll 41 0.004363
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 47 0.004410
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=39,arg_p=0000000000000000) 24 0.004434
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.004452
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=10,arg_p=000000000205F170) 22 0.004474
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.004491
4 ETW000 [ dev trc,00000] New connection 0 created 17 0.004508
4 ETW000 [ dev trc,00000] 0: name = R/3, con_id = -000000001, state = DISCONNECTED, tx = NO , bc = NO , oc = 000, hc = NO , perm = YES, reco = NO , info = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO , prog =
4 ETW000 38 0.004546
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=10,arg_p=0000000141BAEDB0) 44 0.004590
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.004609
4 ETW000 [ dev trc,00000] db_con_connect (con_name=R/3) 19 0.004628
4 ETW000 [ dev trc,00000] determine_block_commit: no con_hdl found as blocked for con_name = R/3
4 ETW000 24 0.004652
4 ETW000 [ dev trc,00000] find_con_by_name found the following connection: 17 0.004669
4 ETW000 [ dev trc,00000] 0: name = R/3, con_id = 000000000, state = DISCONNECTED, tx = NO , bc = NO , oc = 000, hc = NO , perm = YES, reco = NO , info = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO , prog =
4 ETW000 164 0.004833
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): reading connect info for connection R/3 34 0.004867
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=74,arg_p=0000000000000000) 24 0.004891
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=15) 19 0.004910
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): DBSL does not support extended connect protocol
4 ETW000 ==> ssfs won't be used 26 0.004936
4 ETW000 [ dev trc,00000] { DbSlHDBConnect(con_info_p=0000000000000000) 31 0.004967
4 ETW000 [ dev trc,00000] DBHDBSLIB : version 741.10, patch 0.022 (Make PL 0.26) 34 0.005001
4 ETW000 [ dev trc,00000] HDB shared library (dbhdbslib) patchlevels (last 10) 32 0.005033
4 ETW000 [ dev trc,00000] (0.022) Get database version via dbsl call (note 1976918) 24 0.005057
4 ETW000 [ dev trc,00000] (0.020) FDA: Core Dump in SELECT ... FOR ALL ENTRIES for tables with strings (note 1970276)
4 ETW000 32 0.005089
4 ETW000 [ dev trc,00000] (0.020) SQL DDL with data aging (note 1897636) 21 0.005110
4 ETW000 [ dev trc,00000] (0.017) Datatype NCLOB missing in tablesize calculation (note 1952609)
4 ETW000 30 0.005140
4 ETW000 [ dev trc,00000] (0.014) Tablesize calculation for HANA optimized (note 1952609) 25 0.005165
4 ETW000 [ dev trc,00000] (0.014) Native SQL UPSERT with DataAging (note 1897636) 21 0.005186
4 ETW000 [ dev trc,00000] (0.014) DBSL supports HANA revision number up to 3 digits (note 1952701)
4 ETW000 27 0.005213
4 ETW000 [ dev trc,00000] (0.010) Quotes missing by FAE with the hint dbsl_equi_join (note 1939234)
4 ETW000 28 0.005241
4 ETW000 [ dev trc,00000] (0.007) Obsere deactivate aging flag (note 1897636) 24 0.005265
4 ETW000 [ dev trc,00000] (0.007) Calculated record length for INSERT corrected (note 1897636)
4 ETW000 27 0.005292
4 ETW000 [ dev trc,00000] 15 0.005307
4 ETW000 [ dev trc,00000] -> init() 21 0.005328
4 ETW000 [ dev trc,00000] STATEMENT_CACHE_SIZE = 1000 181 0.005509
4 ETW000 [ dev trc,00000] -> init() 505 0.006014
4 ETW000 [ dev trc,00000] -> loadClientRuntime() 27 0.006041
4 ETW000 [ dev trc,00000] Loading SQLDBC client runtime ... 19 0.006060
4 ETW000 [ dev trc,00000] SQLDBC Module : C:\usr\sap\CM1\hdbclient\libSQLDBCHDB.dll 779 0.006839
4 ETW000 [ dev trc,00000] SQLDBC Runtime : libSQLDBCHDB 1.00.68 Build 0384084-1510 74 0.006913
4 ETW000 [ dev trc,00000] SQLDBC client runtime is 1.00.68.0384084 45 0.006958
4 ETW000 [ dev trc,00000] -> getNewConnection() 28 0.006986
4 ETW000 [ dev trc,00000] <- getNewConnection(con_hdl=0) 78 0.007064
4 ETW000 [ dev trc,00000] -> checkEnvironment(con_hdl=0) 34 0.007098
4 ETW000 [ dev trc,00000] -> connect(con_info_p=0000000000000000) 27 0.007125
4 ETW000 [ dev trc,00000] Try to connect via secure store (DEFAULT) on connection 0 ... 62 0.007187
4 ETW000 [ dev trc,00000] -> check_db_params(con_hdl=0) 61365 0.068552
4 ETW000 [ dev trc,00000] Attach to HDB : 1.00.68.384084 (NewDB100_REL) 7595 0.076147
4 ETW000 [ dev trc,00000] Database release is HDB 1.00.68.384084 49 0.076196
4 ETW000 [ dev trc,00000] INFO : Database 'HDB/00' instance is running on 'hanaserver' 6867 0.083063
4 ETW000 [ dev trc,00000] INFO : Connect to DB as 'SAPCM1', connection_id=201064 43659 0.126722
4 ETW000 [ dev trc,00000] DB max. input host variables : 32767 6954 0.133676
4 ETW000 [ dev trc,00000] DB max. statement length : 1048576 34 0.133710
4 ETW000 [ dev trc,00000] DB max. array size : 100000 75 0.133785
4 ETW000 [ dev trc,00000] use decimal precision as length 21 0.133806
4 ETW000 [ dev trc,00000] ABAPVARCHARMODE is used 19 0.133825
4 ETW000 [ dev trc,00000] INFO : DBSL buffer size = 1048576 20 0.133845
4 ETW000 [ dev trc,00000] Command info enabled 19 0.133864
4 ETW000 [ dev trc,00000] Now I'm connected to HDB 18 0.133882
4 ETW000 [ dev trc,00000] 00: hanaserver-HDB/00, since=20140602134916, ABAP= <unknown> (0) 30 0.133912
4 ETW000 [ dev trc,00000] } DbSlHDBConnect(rc=0) 18 0.133930
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=30,arg_p=0000000000000000) 24 0.133954
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.133972
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=21,arg_p=000000000205F460) 22 0.133994
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.134012
4 ETW000 [ dev trc,00000] Connection 0 opened (DBSL handle 0) 36 0.134048
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=7,arg_p=000000000205F4B0) 25 0.134073
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.134090
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=63,arg_p=000000000205F2B0) 23 0.134113
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.134131
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=21,arg_p=000000000205F300) 12214 0.146345
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 32 0.146377
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=11,arg_p=000000000205F420) 26 0.146403
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.146421
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=22,arg_p=000000000205F390) 23 0.146444
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 37 0.146481
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=13,arg_p=000000000205F260) 29 0.146510
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.146528
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=24,arg_p=000000000205F210) 37 0.146565
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 35 0.146600
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=51,arg_p=000000000205F200) 40 0.146640
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=15) 31 0.146671
4 ETW000 [ dev trc,00000] { DbSlHDBPrepare(con_hdl=0,ss_p=000000000205F4E0,op=3,da_p=000000000205F540)
4 ETW000 46 0.146717
4 ETW000 [ dev trc,00000] -> buildSQLStmt(stmt_p=000000000205F4B0,da_p=000000000205F540,for_explain=0,lock=0,op=3)
4 ETW000 89 0.146806
4 ETW000 [ dev trc,00000] <- buildSQLStmt(len=27,op=3,#marker=0,#lob=0) 33 0.146839
4 ETW000 [ dev trc,00000] -> stmt_prepare(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0) 75 0.146914
4 ETW000 [ dev trc,00000] sc_p=0000000003AEAC40,no=0,idc_p=0000000000000000,con=0,act=0,slen=27,smax=256,#vars=0,stmt=000000000AD913E0,table=SVERS
4 ETW000 46 0.146960
4 ETW000 [ dev trc,00000] SELECT VERSION FROM SVERS ; 23 0.146983
4 ETW000 [ dev trc,00000] CURSOR C_0000 PREPARE on connection 0 21 0.147004
4 ETW000 [ dev trc,00000] } DbSlHDBPrepare(rc=0) 6174 0.153178
4 ETW000 [ dev trc,00000] { DbSlHDBRead(con_hdl=0,ss_p=000000000205F4E0,da_p=000000000205F540)
4 ETW000 53 0.153231
4 ETW000 [ dev trc,00000] ABAP USER is not set 25 0.153256
4 ETW000 [ dev trc,00000] -> activate_stmt(sc_hdl=0000000003AEAC40,da_p=000000000205F540) 25 0.153281
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEAC40,in_out=0,bulk=0,da_p=000000000205F540)
4 ETW000 30 0.153311
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=0,col_cnt=0) 21 0.153332
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEAC40,in_out=0,bulk=0,types=0000000000000000,#col=0,useBulkInsertWithLobs=0)
4 ETW000 54 0.153386
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=0,rec_lng=0,db_lng=0
4 ETW000 33 0.153419
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=0, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 33 0.153452
4 ETW000 [ dev trc,00000] -> exec_modify(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0,bulk=0,in_out=1,da_p=000000000205F540)
4 ETW000 36 0.153488
4 ETW000 [ dev trc,00000] -> stmt_execute(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0,in_out=1,da_p=000000000205F540)
4 ETW000 95 0.153583
4 ETW000 [ dev trc,00000] OPEN CURSOR C_0000 on connection 0 28 0.153611
4 ETW000 [ dev trc,00000] CURSOR C_0000 SET InputSize=1 23 0.153634
4 ETW000 [ dev trc,00000] CURSOR C_0000 EXECUTE on connection 0 22 0.153656
4 ETW000 [ dev trc,00000] execute() of C_0000, #rec=0, rcSQL=0, rc=0 6404 0.160060
4 ETW000 [ dev trc,00000] CURSOR C_0000, rc=0,#rec=0,#dbcount=0 36 0.160096
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,da_p=000000000205F540)
4 ETW000 33 0.160129
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=1,col_cnt=1) 21 0.160150
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,types=000000000205F518,#col=1,useBulkInsertWithLobs=0)
4 ETW000 37 0.160187
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=72,rec_lng=144,db_lng=144
4 ETW000 31 0.160218
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=144, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 31 0.160249
4 ETW000 [ dev trc,00000] -> allocIndicator(in_out=1,row_cnt=1) 21 0.160270
4 ETW000 [ dev trc,00000] -> allocData(in_out=1,size=1048576) 21 0.160291
4 ETW000 [ dev trc,00000] -> bind_type_and_length(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,arr_size=1,types=000000000205F518,da_p=000000000205F540)
4 ETW000 45 0.160336
4 ETW000 [ dev trc,00000] -> exec_fetch(sc_hdl=0000000003AEAC40,bulk=0,da_p=000000000205F540)
4 ETW000 41 0.160377
4 ETW000 [ dev trc,00000] xcnt=1,row_i=0,row_pcnt=0 20 0.160397
4 ETW000 [ dev trc,00000] -> stmt_fetch(sc_hdl=0000000003AEAC40) 20 0.160417
4 ETW000 [ dev trc,00000] CURSOR C_0000 FETCH (xcnt=1) on connection 0 23 0.160440
4 ETW000 [ dev trc,00000] next() of C_0000, rc=0 27 0.160467
4 ETW000 [ dev trc,00000] fetch() of C_0000, #rec=1, rc=0, rcSQL=0 28 0.160495
4 ETW000 [ dev trc,00000] -> deactivate_stmt(sc_hdl=0000000003AEAC40,da_p=000000000205F540,rc=0)
4 ETW000 91 0.160586
4 ETW000 [ dev trc,00000] -> StmtCacheFree(DBSL:C_0000) 24 0.160610
4 ETW000 [ dev trc,00000] CURSOR C_0000 CLOSE resultset on connection 0 20 0.160630
4 ETW000 [ dev trc,00000] } DbSlHDBRead(rc=0) 34 0.160664
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=43,arg_p=00000001400FAB06) 25 0.160689
4 ETW000 [ dev trc,00000] INFO : SAP RELEASE (DB) = 740 19 0.160708
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 16 0.160724
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=41,arg_p=00000001400FAB98) 49 0.160773
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.160792
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=14,arg_p=0000000002055888) 22 0.160814
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.160832
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=50,arg_p=0000000002055880) 22 0.160854
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 26 0.160880
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=52,arg_p=00000000020558F0) 23 0.160903
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.160920
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=20,arg_p=0000000141FC74F0) 99 0.161019
4 ETW000 [ dev trc,00000] INFO : STMT SIZE = 1048576 21 0.161040
4 ETW000 [ dev trc,00000] INFO : MARKER_CNT = 32767 18 0.161058
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.161077
4 ETW000 [ dev trc,00000] NTAB: SELECT COMPCNT, UNICODELG FROM DDNTT WHERE TABNAME = 'SVERS'...
4 ETW000 38 0.161115
4 ETW000 [ dev trc,00000] { DbSlHDBPrepare(con_hdl=0,ss_p=0000000002055160,op=3,da_p=00000000020551B0)
4 ETW000 31 0.161146
4 ETW000 [ dev trc,00000] -> buildSQLStmt(stmt_p=0000000002055180,da_p=00000000020551B0,for_explain=0,lock=0,op=3)
4 ETW000 32 0.161178
4 ETW000 [ dev trc,00000] <- buildSQLStmt(len=63,op=3,#marker=0,#lob=0) 23 0.161201
4 ETW000 [ dev trc,00000] -> stmt_prepare(sc_hdl=0000000003AEACD8,ss_p=0000000002055160) 38 0.161239
4 ETW000 [ dev trc,00000] sc_p=0000000003AEACD8,no=1,idc_p=0000000000000000,con=0,act=0,slen=63,smax=256,#vars=0,stmt=000000000AE09690,table=DDNTT
4 ETW000 38 0.161277
4 ETW000 [ dev trc,00000] SELECT COMPCNT, UNICODELG FROM "DDNTT" WHERE TABNAME = 'SVERS' ; 21 0.161298
4 ETW000 [ dev trc,00000] CURSOR C_0001 PREPARE on connection 0 19 0.161317
4 ETW000 [ dev trc,00000] } DbSlHDBPrepare(rc=0) 6453 0.167770
4 ETW000 [ dev trc,00000] db_con_test_and_open: 1 open cursors (delta=1) 30 0.167800
4 ETW000 [ dev trc,00000] db_con_check_dirty: 1 open cursors, tx = NO , bc = NO 18 0.167818
4 ETW000 [ dev trc,00000] db_con_check_dirty: db_con_dirty = YES 16 0.167834
4 ETW000 [ dev trc,00000] { DbSlHDBBegRead(con_hdl=0,ss_p=0000000002055160,da_p=00000000020551B0)
4 ETW000 35 0.167869
4 ETW000 [ dev trc,00000] ABAP USER is not set 23 0.167892
4 ETW000 [ dev trc,00000] -> activate_stmt(sc_hdl=0000000003AEACD8,da_p=00000000020551B0) 23 0.167915
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEACD8,in_out=0,bulk=0,da_p=00000000020551B0)
4 ETW000 32 0.167947
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=0,col_cnt=0) 23 0.167970
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEACD8,in_out=0,bulk=0,types=0000000000000000,#col=0,useBulkInsertWithLobs=0)
4 ETW000 34 0.168004
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=0,rec_lng=0,db_lng=0
4 ETW000 30 0.168034
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=0, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 31 0.168065
4 ETW000 [ dev trc,00000] -> exec_modify(sc_hdl=0000000003AEACD8,ss_p=0000000002055160,bulk=0,in_out=1,da_p=00000000020551B0)
4 ETW000 32 0.168097
4 ETW000 [ dev trc,00000] -> stmt_execute(sc_hdl=0000000003AEACD8,ss_p=0000000002055160,in_out=1,da_p=00000000020551B0)
4 ETW000 32 0.168129
4 ETW000 [ dev trc,00000] OPEN CURSOR C_0001 on connection 0 20 0.168149
4 ETW000 [ dev trc,00000] CURSOR C_0001 SET InputSize=1 19 0.168168
4 ETW000 [ dev trc,00000] CURSOR C_0001 EXECUTE on connection 0 20 0.168188
4 ETW000 [ dev trc,00000] execute() of C_0001, #rec=0, rcSQL=0, rc=0 5712 0.173900
4 ETW000 [ dev trc,00000] CURSOR C_0001, rc=0,#rec=0,#dbcount=0 34 0.173934
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,da_p=00000000020551B0)
4 ETW000 32 0.173966
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=1,col_cnt=2) 21 0.173987
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,types=0000000002055240,#col=2,useBulkInsertWithLobs=0)
4 ETW000 34 0.174021
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=2,#int=0,#llong=0,#uc=0,rec_lng=16,db_lng=4
4 ETW000 30 0.174051
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=16, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=65536)
4 ETW000 32 0.174083
4 ETW000 [ dev trc,00000] -> allocIndicator(in_out=1,row_cnt=65536) 20 0.174103
4 ETW000 [ dev trc,00000] -> allocData(in_out=1,size=1048576) 30 0.174133
4 ETW000 [ dev trc,00000] -> bind_type_and_length(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,arr_size=65536,types=0000000002055240,da_p=00000000020551B0)
4 ETW000 36 0.174169
4 ETW000 [ dev trc,00000] } DbSlHDBBegRead(rc=0) 24 0.174193
4 ETW000 [ dev trc,00000] { DbSlHDBExeRead(con_hdl=0,ss_p=0000000002055160,da_p=00000000020551B0)
4 ETW000 35 0.174228
4 ETW000 [ dev trc,00000] ABAP USER is not set 20 0.174248
4 ETW000 [ dev trc,00000] -> exec_fetch(sc_hdl=0000000003AEACD8,bulk=0,da_p=00000000020551B0)
4 ETW000 33 0.174281
4 ETW000 [ dev trc,00000] xcnt=1,row_i=0,row_pcnt=0 20 0.174301
4 ETW000 [ dev trc,00000] -> stmt_fetch(sc_hdl=0000000003AEACD8) 20 0.174321
4 ETW000 [ dev trc,00000] CURSOR C_0001 FETCH (xcnt=1) on connection 0 20 0.174341
4 ETW000
Hi,
Could you check SAP Note 1952701 (DBSL supports new HANA version number)?
Regards,
Gaurav -
Running Mac OS X 10.6.4 and using an HP LaserJet 1022. When trying to print a .pdf opened by Adobe Reader I get the above error. The document prints fine if opened with Preview. Any suggestions? Thanks.
I am trying to use ResultSet.TYPE_SCROLL_SENSITIVE and
CONCUR_UPDATABLE (using the IBM DB2 JDBC 2.0 driver) but I get
a runtime error:
SQL Exception.SQLState = null Error code = 0 Error
message = Updatable result set is not supported by
this version of the DB2 JDBC 2.0 driver.
Any suggestion or help is appreciated.
Has anyone heard that the IBM DB2 implementation does not
support updatable and scroll-sensitive result sets?
It gave you a precise error message: this version of the DB2 JDBC 2.0 driver does not support updatable result sets, and no amount of client-side code will work around that.
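One way to avoid tripping over this at runtime is to ask the driver, via DatabaseMetaData, whether it supports a given type/concurrency pair before requesting it. Below is a minimal sketch; the ResultSetProbe class name and the proxy-based fakeMetaData stub (which imitates a forward-only/read-only driver, roughly what the old DB2 JDBC 2.0 driver offered) are illustrative assumptions — with a real database you would call con.getMetaData() instead.

```java
import java.lang.reflect.Proxy;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ResultSetProbe {

    // Ask the driver, via its metadata, whether it supports the requested
    // result-set type/concurrency pair before creating a Statement with it.
    static boolean supports(DatabaseMetaData md, int type, int concurrency)
            throws SQLException {
        return md.supportsResultSetType(type)
            && md.supportsResultSetConcurrency(type, concurrency);
    }

    // Stand-in for a real driver (hypothetical): a dynamic proxy that
    // reports support for forward-only/read-only cursors only. With a real
    // connection you would use con.getMetaData() instead of this stub.
    static DatabaseMetaData fakeMetaData() {
        return (DatabaseMetaData) Proxy.newProxyInstance(
            DatabaseMetaData.class.getClassLoader(),
            new Class<?>[] { DatabaseMetaData.class },
            (proxy, method, args) -> {
                switch (method.getName()) {
                    case "supportsResultSetType":
                        return ((Integer) args[0]) == ResultSet.TYPE_FORWARD_ONLY;
                    case "supportsResultSetConcurrency":
                        return ((Integer) args[0]) == ResultSet.TYPE_FORWARD_ONLY
                            && ((Integer) args[1]) == ResultSet.CONCUR_READ_ONLY;
                    default:
                        throw new UnsupportedOperationException(method.getName());
                }
            });
    }

    public static void main(String[] args) throws SQLException {
        DatabaseMetaData md = fakeMetaData();
        System.out.println("scroll-sensitive/updatable: "
            + supports(md, ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE));
        System.out.println("forward-only/read-only:     "
            + supports(md, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY));
    }
}
```

With a real driver, when the probe returns false you would fall back to TYPE_FORWARD_ONLY / CONCUR_READ_ONLY rather than letting createStatement fail or silently downgrade.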
You should check out this link:
http://www.hov-hov.dk/you.htm -
JDBC-ODBC Bridge to SPSS data files - Result Set Type is not supported
Hello,
As mentioned in the subject I am trying to read SPSS data files using the SPSS 32-Bit data driver, ODBC and the JDBC-ODBC Bridge.
Using this SPSS driver I managed to read the data directly into an MS SQL Server using:
SELECT [...] FROM
OPENROWSET('MSDASQL.1',
           'DRIVER={SPSS 32-BIT Data Driver (*.sav)};DBQ=SomePathWhereTheFilesAre;SERVER=NotTheServer',
           'SELECT SomeSPSSColumn FROM "SomeSPSSFileNameWithoutExt"') AS a
This works fine!
Using Access and an ODBC System DSN works for IMPORTING but NOT for LINKING.
It is even possible to read the data using the very slow SPSS API.
However, when it comes to JDBC-ODBC, the code below only works in part. The driver loads successfully, but when it comes to transferring data into the ResultSet object the error
SQLState: null
Result Set Type is not supported
Vendor: 0
occurs.
The official answer from SPSS is to use .Net or to use their implementation with Python in their new version 14.0. But this is obviously not an option when you want to use only Java.
Does anybody have experience with SPSS and JDBC-ODBC??? I have tried the possible ResultSet Types, which I took from:
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/ad/rjvdsprp.htm
and none of them worked.
Thank you in advance for your ideas and input & stay happy!
Here is the code without the rest of the class around it:
// Module: SimpleSelect.java
// Description: Test program for ODBC API interface. This java application
// will connect to a JDBC driver, issue a select statement
// and display all result columns and rows
// Product: JDBC to ODBC Bridge
// Author: Karl Moss
// Date: February, 1996
// Copyright: 1990-1996 INTERSOLV, Inc.
// This software contains confidential and proprietary
// information of INTERSOLV, Inc.
public static void main1() {
    String url = "jdbc:odbc:SomeSystemDNS";
    String query = "SELECT SomeSPSSColumn FROM 'SomeSPSSFileName'";
    try {
        // Load the JDBC-ODBC bridge driver
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        DriverManager.setLogStream(System.out);
        // Attempt to connect to a driver. Each of the registered
        // drivers is tried until one is found that can process this URL.
        Connection con = DriverManager.getConnection(url);
        // If we were unable to connect, an exception would have been
        // thrown. So, if we get here, we are successfully connected.
        // Check for, and display, any warnings generated by the connect
        // (checkForWarning is a helper of the omitted surrounding class).
        checkForWarning(con.getWarnings());
        // Get the DatabaseMetaData object and display
        // some information about the connection
        DatabaseMetaData dma = con.getMetaData();
        System.out.println("\nConnected to " + dma.getURL());
        System.out.println("Driver  " + dma.getDriverName());
        System.out.println("Version " + dma.getDriverVersion());
        System.out.println("");
        // Create a Statement object so we can submit
        // SQL statements to the driver
        Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                             ResultSet.CONCUR_READ_ONLY);
        // Submit a query, creating a ResultSet object
        ResultSet rs = stmt.executeQuery(query);
        // Display all columns and rows from the result set
        // (dispResultSet is also a helper of the omitted class).
        dispResultSet(rs);
        // Close the result set, the statement and the connection
        rs.close();
        stmt.close();
        con.close();
    } catch (Exception ex) {
        // The original excerpt ended without a handler; a try block
        // needs at least one catch or finally clause to compile.
        ex.printStackTrace();
    }
}
Thank you for your reply StuDerby!
Actually, before the version above I had, as you suggested, left the ResultSet type at its default. That did not work either...
I am getting gray hair with SPSS - in terms of connectivity and "integrability", none of the solutions they offer is sufficient from my point of view.
Variable definitions can only be read through the slow API, and data can only be read via Python or Microsoft products... and if you want to combine both, you are in big trouble. I can only assume this is a company strategy to sell their Dimensions Platform, rather than letting companies develop applications according to their business needs.
Thanks again for any further suggestions, and I hope some SPSS developer sees this post!
Cheers!! -
Backend System with Release 701 are not supported
SRM 5.0
Extended Classic Scenario
CCM 2.0
When updating the vendors in the SRM system using transaction BBPUPDVD, we get the error message below:
Backend System with Release 701 are not supported
I have checked table BBP_BACKEND_DEST; the backend entry is properly maintained.
Can anyone advise?
Regards,
Jayoti
Edited by: SAP jayoti on Dec 7, 2010 1:11 PM
The problem with vendor replication is resolved after applying notes 1313972 and 1372175.
ECC sandbox seems to be in 701 release. When the backend system ECC is in release 701, vendor replication fails with the error message that 'Backend system release 701 not supported'. We need to apply notes 1313972 and 1372175 to fix this issue. I have applied them in Sandbox and the issue is resolved now. -
EM10gR2 Grid Control / Change Management Pack does not support TYPE objects
I wonder whether any Oracle EM10gR2 Grid Control users implementing the Oracle Change Management Pack have used it for release and package management.
Issue1 - On the EM console / web GUI, DDL comparison and synchronization is not supported at all. Implementing it at a customer site, it turned out we had to install a 10g Java client in order to use the code comparison and sync functionality.
Issue2 - and this is more serious - it appears that even with Java client in use, you cannot compare or sync objects related to TYPE or TYPE BODY data types. The comparison simply hangs.
I asked support for help; they mentioned an enhancement request from 8i and logged a new one for 10g/11g. That is surprising, especially as the documentation on code comparison (http://download.oracle.com/docs/html/A96679_01/toc.htm) does not exclude TYPE objects from the supported options.
Any fellow DBA running into the same problem? What solutions & recommendations might you have?
Hi Mugunthan,
Can you provide links to any tutorial or example (screenshots) of using Setup Manager to transfer something between two environments (say Users or DFFs)?
Thanks,
Gareth