EEM applet does not support extended ping?
Hi, does anybody know whether an EEM applet supports extended ping options in a CLI action?
This works:
action 1000 cli command "ping ip 10.161.255.5"
These statements do not work:
action 1000 cli command "ping ip 10.161.255.5 repeat 10 source Loopback0"
action 1000 cli command "ping vrf TESTVRF ip 10.161.255.5 repeat 20 source Loopback0"
I found another way to achieve an extended ping in EEM, like the following:
event manager applet ping
event none sync yes
action 100 cli command "enable"
action 101 cli command "ping" pattern "[ip]"
action 102 cli command "ip" pattern "address"
action 103 cli command "10.161.255.5" pattern "count"
action 104 cli command "20" pattern "size"
action 105 cli command "100" pattern "seconds"
action 106 cli command "2" pattern "commands"
action 107 cli command "y" pattern "interface"
action 108 cli command "Loopback0" pattern "service"
action 109 cli command "0" pattern "header"
action 110 cli command "no" pattern "data"
action 111 cli command "no" pattern "pattern"
action 112 cli command "0xABCD" pattern "Verbose"
action 113 cli command " " pattern "size"
action 114 cli command "n"
but there is also no way to specify the VRF?
Any ideas?
Thx
Hubert
Hi, I found the failure:
the extended ping needs to be executed in privileged mode, so you need to enter enable mode in advance; then it works.
thx Bruno for the hint
Hubert
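Putting the fix above together: entering enable mode in a preceding cli action lets the one-line extended ping (including the VRF form) work. A minimal sketch; the applet name EXT-PING and the syslog action are illustrative additions, while the VRF, address, repeat count, and source interface come from the original post:

```
event manager applet EXT-PING
 event none
 action 100 cli command "enable"
 action 110 cli command "ping vrf TESTVRF ip 10.161.255.5 repeat 20 source Loopback0"
 action 120 syslog msg "$_cli_result"
```

With `event none`, the applet is triggered manually with `event manager run EXT-PING` from an exec session.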
Similar Messages
-
"DBSL does not support extended connect protocol" while configuring SSFS
Hi, I'm trying to configure SSFS on an ERP EHP7 on HANA database system, following this guide: SSFS Implementation for Oracle Database.
But when I try to test the connection with r3trans, I get the following error in the log:
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): DBSL does not support extended connect protocol
4 ETW000 ==> ssfs won't be used 26 0.004936
I already updated DBSL_LIB to the latest version, but it doesn't help.
Here is the full log:
4 ETW000 C:\usr\sap\CM1\DVEBMGS04\exe\R3trans.EXE version 6.24 (release 741 - 16.05.14 - 20:14:06).
4 ETW000 unicode enabled version
4 ETW000 ===============================================
4 ETW000
4 ETW000 date&time : 02.06.2014 - 13:49:16
4 ETW000 control file: <no ctrlfile>
4 ETW000 R3trans was called as follows: C:\usr\sap\CM1\DVEBMGS04\exe\R3trans.EXE -d
4 ETW000 trace at level 2 opened for a given file pointer
4 ETW000 [ dev trc,00000] Mon Jun 02 13:49:16 2014 106 0.000106
4 ETW000 [ dev trc,00000] db_con_init called 36 0.000142
4 ETW000 [ dev trc,00000] set_use_ext_con_info(): ssfs will be used to get connect information
4 ETW000 61 0.000203
4 ETW000 [ dev trc,00000] determine_block_commit: no con_hdl found as blocked for con_name = R/3
4 ETW000 26 0.000229
4 ETW000 [ dev trc,00000] create_con (con_name=R/3) 17 0.000246
4 ETW000 [ dev trc,00000] Loading DB library 'dbhdbslib.dll' ... 46 0.000292
4 ETW000 [ dev trc,00000] DlLoadLib success: LoadLibrary("dbhdbslib.dll"), hdl 0, count 1, addr 000007FEED100000
4 ETW000 3840 0.004132
4 ETW000 [ dev trc,00000] using "C:\usr\sap\CM1\DVEBMGS04\exe\dbhdbslib.dll" 21 0.004153
4 ETW000 [ dev trc,00000] Library 'dbhdbslib.dll' loaded 21 0.004174
4 ETW000 [ dev trc,00000] function DbSlExpFuns loaded from library dbhdbslib.dll 42 0.004216
4 ETW000 [ dev trc,00000] Version of 'dbhdbslib.dll' is "741.10", patchlevel (0.22) 81 0.004297
4 ETW000 [ dev trc,00000] function dsql_db_init loaded from library dbhdbslib.dll 25 0.004322
4 ETW000 [ dev trc,00000] function dbdd_exp_funs loaded from library dbhdbslib.dll 41 0.004363
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 47 0.004410
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=39,arg_p=0000000000000000) 24 0.004434
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.004452
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=10,arg_p=000000000205F170) 22 0.004474
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.004491
4 ETW000 [ dev trc,00000] New connection 0 created 17 0.004508
4 ETW000 [ dev trc,00000] 0: name = R/3, con_id = -000000001, state = DISCONNECTED, tx = NO , bc = NO , oc = 000, hc = NO , perm = YES, reco = NO , info = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO , prog =
4 ETW000 38 0.004546
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=10,arg_p=0000000141BAEDB0) 44 0.004590
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.004609
4 ETW000 [ dev trc,00000] db_con_connect (con_name=R/3) 19 0.004628
4 ETW000 [ dev trc,00000] determine_block_commit: no con_hdl found as blocked for con_name = R/3
4 ETW000 24 0.004652
4 ETW000 [ dev trc,00000] find_con_by_name found the following connection: 17 0.004669
4 ETW000 [ dev trc,00000] 0: name = R/3, con_id = 000000000, state = DISCONNECTED, tx = NO , bc = NO , oc = 000, hc = NO , perm = YES, reco = NO , info = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO , prog =
4 ETW000 164 0.004833
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): reading connect info for connection R/3 34 0.004867
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=-1,command=74,arg_p=0000000000000000) 24 0.004891
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=15) 19 0.004910
4 ETW000 [ dev trc,00000] read_con_info_ssfs(): DBSL does not support extended connect protocol
4 ETW000 ==> ssfs won't be used 26 0.004936
4 ETW000 [ dev trc,00000] { DbSlHDBConnect(con_info_p=0000000000000000) 31 0.004967
4 ETW000 [ dev trc,00000] DBHDBSLIB : version 741.10, patch 0.022 (Make PL 0.26) 34 0.005001
4 ETW000 [ dev trc,00000] HDB shared library (dbhdbslib) patchlevels (last 10) 32 0.005033
4 ETW000 [ dev trc,00000] (0.022) Get database version via dbsl call (note 1976918) 24 0.005057
4 ETW000 [ dev trc,00000] (0.020) FDA: Core Dump in SELECT ... FOR ALL ENTRIES for tables with strings (note 1970276)
4 ETW000 32 0.005089
4 ETW000 [ dev trc,00000] (0.020) SQL DDL with data aging (note 1897636) 21 0.005110
4 ETW000 [ dev trc,00000] (0.017) Datatype NCLOB missing in tablesize calculation (note 1952609)
4 ETW000 30 0.005140
4 ETW000 [ dev trc,00000] (0.014) Tablesize calculation for HANA optimized (note 1952609) 25 0.005165
4 ETW000 [ dev trc,00000] (0.014) Native SQL UPSERT with DataAging (note 1897636) 21 0.005186
4 ETW000 [ dev trc,00000] (0.014) DBSL supports HANA revision number up to 3 digits (note 1952701)
4 ETW000 27 0.005213
4 ETW000 [ dev trc,00000] (0.010) Quotes missing by FAE with the hint dbsl_equi_join (note 1939234)
4 ETW000 28 0.005241
4 ETW000 [ dev trc,00000] (0.007) Obsere deactivate aging flag (note 1897636) 24 0.005265
4 ETW000 [ dev trc,00000] (0.007) Calculated record length for INSERT corrected (note 1897636)
4 ETW000 27 0.005292
4 ETW000 [ dev trc,00000] 15 0.005307
4 ETW000 [ dev trc,00000] -> init() 21 0.005328
4 ETW000 [ dev trc,00000] STATEMENT_CACHE_SIZE = 1000 181 0.005509
4 ETW000 [ dev trc,00000] -> init() 505 0.006014
4 ETW000 [ dev trc,00000] -> loadClientRuntime() 27 0.006041
4 ETW000 [ dev trc,00000] Loading SQLDBC client runtime ... 19 0.006060
4 ETW000 [ dev trc,00000] SQLDBC Module : C:\usr\sap\CM1\hdbclient\libSQLDBCHDB.dll 779 0.006839
4 ETW000 [ dev trc,00000] SQLDBC Runtime : libSQLDBCHDB 1.00.68 Build 0384084-1510 74 0.006913
4 ETW000 [ dev trc,00000] SQLDBC client runtime is 1.00.68.0384084 45 0.006958
4 ETW000 [ dev trc,00000] -> getNewConnection() 28 0.006986
4 ETW000 [ dev trc,00000] <- getNewConnection(con_hdl=0) 78 0.007064
4 ETW000 [ dev trc,00000] -> checkEnvironment(con_hdl=0) 34 0.007098
4 ETW000 [ dev trc,00000] -> connect(con_info_p=0000000000000000) 27 0.007125
4 ETW000 [ dev trc,00000] Try to connect via secure store (DEFAULT) on connection 0 ... 62 0.007187
4 ETW000 [ dev trc,00000] -> check_db_params(con_hdl=0) 61365 0.068552
4 ETW000 [ dev trc,00000] Attach to HDB : 1.00.68.384084 (NewDB100_REL) 7595 0.076147
4 ETW000 [ dev trc,00000] Database release is HDB 1.00.68.384084 49 0.076196
4 ETW000 [ dev trc,00000] INFO : Database 'HDB/00' instance is running on 'hanaserver' 6867 0.083063
4 ETW000 [ dev trc,00000] INFO : Connect to DB as 'SAPCM1', connection_id=201064 43659 0.126722
4 ETW000 [ dev trc,00000] DB max. input host variables : 32767 6954 0.133676
4 ETW000 [ dev trc,00000] DB max. statement length : 1048576 34 0.133710
4 ETW000 [ dev trc,00000] DB max. array size : 100000 75 0.133785
4 ETW000 [ dev trc,00000] use decimal precision as length 21 0.133806
4 ETW000 [ dev trc,00000] ABAPVARCHARMODE is used 19 0.133825
4 ETW000 [ dev trc,00000] INFO : DBSL buffer size = 1048576 20 0.133845
4 ETW000 [ dev trc,00000] Command info enabled 19 0.133864
4 ETW000 [ dev trc,00000] Now I'm connected to HDB 18 0.133882
4 ETW000 [ dev trc,00000] 00: hanaserver-HDB/00, since=20140602134916, ABAP= <unknown> (0) 30 0.133912
4 ETW000 [ dev trc,00000] } DbSlHDBConnect(rc=0) 18 0.133930
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=30,arg_p=0000000000000000) 24 0.133954
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.133972
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=21,arg_p=000000000205F460) 22 0.133994
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.134012
4 ETW000 [ dev trc,00000] Connection 0 opened (DBSL handle 0) 36 0.134048
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=7,arg_p=000000000205F4B0) 25 0.134073
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.134090
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=63,arg_p=000000000205F2B0) 23 0.134113
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.134131
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=21,arg_p=000000000205F300) 12214 0.146345
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 32 0.146377
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=11,arg_p=000000000205F420) 26 0.146403
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.146421
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=22,arg_p=000000000205F390) 23 0.146444
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 37 0.146481
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=13,arg_p=000000000205F260) 29 0.146510
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.146528
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=24,arg_p=000000000205F210) 37 0.146565
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 35 0.146600
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=51,arg_p=000000000205F200) 40 0.146640
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=15) 31 0.146671
4 ETW000 [ dev trc,00000] { DbSlHDBPrepare(con_hdl=0,ss_p=000000000205F4E0,op=3,da_p=000000000205F540)
4 ETW000 46 0.146717
4 ETW000 [ dev trc,00000] -> buildSQLStmt(stmt_p=000000000205F4B0,da_p=000000000205F540,for_explain=0,lock=0,op=3)
4 ETW000 89 0.146806
4 ETW000 [ dev trc,00000] <- buildSQLStmt(len=27,op=3,#marker=0,#lob=0) 33 0.146839
4 ETW000 [ dev trc,00000] -> stmt_prepare(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0) 75 0.146914
4 ETW000 [ dev trc,00000] sc_p=0000000003AEAC40,no=0,idc_p=0000000000000000,con=0,act=0,slen=27,smax=256,#vars=0,stmt=000000000AD913E0,table=SVERS
4 ETW000 46 0.146960
4 ETW000 [ dev trc,00000] SELECT VERSION FROM SVERS ; 23 0.146983
4 ETW000 [ dev trc,00000] CURSOR C_0000 PREPARE on connection 0 21 0.147004
4 ETW000 [ dev trc,00000] } DbSlHDBPrepare(rc=0) 6174 0.153178
4 ETW000 [ dev trc,00000] { DbSlHDBRead(con_hdl=0,ss_p=000000000205F4E0,da_p=000000000205F540)
4 ETW000 53 0.153231
4 ETW000 [ dev trc,00000] ABAP USER is not set 25 0.153256
4 ETW000 [ dev trc,00000] -> activate_stmt(sc_hdl=0000000003AEAC40,da_p=000000000205F540) 25 0.153281
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEAC40,in_out=0,bulk=0,da_p=000000000205F540)
4 ETW000 30 0.153311
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=0,col_cnt=0) 21 0.153332
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEAC40,in_out=0,bulk=0,types=0000000000000000,#col=0,useBulkInsertWithLobs=0)
4 ETW000 54 0.153386
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=0,rec_lng=0,db_lng=0
4 ETW000 33 0.153419
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=0, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 33 0.153452
4 ETW000 [ dev trc,00000] -> exec_modify(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0,bulk=0,in_out=1,da_p=000000000205F540)
4 ETW000 36 0.153488
4 ETW000 [ dev trc,00000] -> stmt_execute(sc_hdl=0000000003AEAC40,ss_p=000000000205F4E0,in_out=1,da_p=000000000205F540)
4 ETW000 95 0.153583
4 ETW000 [ dev trc,00000] OPEN CURSOR C_0000 on connection 0 28 0.153611
4 ETW000 [ dev trc,00000] CURSOR C_0000 SET InputSize=1 23 0.153634
4 ETW000 [ dev trc,00000] CURSOR C_0000 EXECUTE on connection 0 22 0.153656
4 ETW000 [ dev trc,00000] execute() of C_0000, #rec=0, rcSQL=0, rc=0 6404 0.160060
4 ETW000 [ dev trc,00000] CURSOR C_0000, rc=0,#rec=0,#dbcount=0 36 0.160096
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,da_p=000000000205F540)
4 ETW000 33 0.160129
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=1,col_cnt=1) 21 0.160150
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,types=000000000205F518,#col=1,useBulkInsertWithLobs=0)
4 ETW000 37 0.160187
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=72,rec_lng=144,db_lng=144
4 ETW000 31 0.160218
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=144, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 31 0.160249
4 ETW000 [ dev trc,00000] -> allocIndicator(in_out=1,row_cnt=1) 21 0.160270
4 ETW000 [ dev trc,00000] -> allocData(in_out=1,size=1048576) 21 0.160291
4 ETW000 [ dev trc,00000] -> bind_type_and_length(sc_hdl=0000000003AEAC40,in_out=1,bulk=0,arr_size=1,types=000000000205F518,da_p=000000000205F540)
4 ETW000 45 0.160336
4 ETW000 [ dev trc,00000] -> exec_fetch(sc_hdl=0000000003AEAC40,bulk=0,da_p=000000000205F540)
4 ETW000 41 0.160377
4 ETW000 [ dev trc,00000] xcnt=1,row_i=0,row_pcnt=0 20 0.160397
4 ETW000 [ dev trc,00000] -> stmt_fetch(sc_hdl=0000000003AEAC40) 20 0.160417
4 ETW000 [ dev trc,00000] CURSOR C_0000 FETCH (xcnt=1) on connection 0 23 0.160440
4 ETW000 [ dev trc,00000] next() of C_0000, rc=0 27 0.160467
4 ETW000 [ dev trc,00000] fetch() of C_0000, #rec=1, rc=0, rcSQL=0 28 0.160495
4 ETW000 [ dev trc,00000] -> deactivate_stmt(sc_hdl=0000000003AEAC40,da_p=000000000205F540,rc=0)
4 ETW000 91 0.160586
4 ETW000 [ dev trc,00000] -> StmtCacheFree(DBSL:C_0000) 24 0.160610
4 ETW000 [ dev trc,00000] CURSOR C_0000 CLOSE resultset on connection 0 20 0.160630
4 ETW000 [ dev trc,00000] } DbSlHDBRead(rc=0) 34 0.160664
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=43,arg_p=00000001400FAB06) 25 0.160689
4 ETW000 [ dev trc,00000] INFO : SAP RELEASE (DB) = 740 19 0.160708
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 16 0.160724
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=41,arg_p=00000001400FAB98) 49 0.160773
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.160792
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=14,arg_p=0000000002055888) 22 0.160814
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 18 0.160832
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=50,arg_p=0000000002055880) 22 0.160854
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 26 0.160880
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=52,arg_p=00000000020558F0) 23 0.160903
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 17 0.160920
4 ETW000 [ dev trc,00000] { DbSlHDBControl(con_hdl=0,command=20,arg_p=0000000141FC74F0) 99 0.161019
4 ETW000 [ dev trc,00000] INFO : STMT SIZE = 1048576 21 0.161040
4 ETW000 [ dev trc,00000] INFO : MARKER_CNT = 32767 18 0.161058
4 ETW000 [ dev trc,00000] } DbSlHDBControl(rc=0) 19 0.161077
4 ETW000 [ dev trc,00000] NTAB: SELECT COMPCNT, UNICODELG FROM DDNTT WHERE TABNAME = 'SVERS'...
4 ETW000 38 0.161115
4 ETW000 [ dev trc,00000] { DbSlHDBPrepare(con_hdl=0,ss_p=0000000002055160,op=3,da_p=00000000020551B0)
4 ETW000 31 0.161146
4 ETW000 [ dev trc,00000] -> buildSQLStmt(stmt_p=0000000002055180,da_p=00000000020551B0,for_explain=0,lock=0,op=3)
4 ETW000 32 0.161178
4 ETW000 [ dev trc,00000] <- buildSQLStmt(len=63,op=3,#marker=0,#lob=0) 23 0.161201
4 ETW000 [ dev trc,00000] -> stmt_prepare(sc_hdl=0000000003AEACD8,ss_p=0000000002055160) 38 0.161239
4 ETW000 [ dev trc,00000] sc_p=0000000003AEACD8,no=1,idc_p=0000000000000000,con=0,act=0,slen=63,smax=256,#vars=0,stmt=000000000AE09690,table=DDNTT
4 ETW000 38 0.161277
4 ETW000 [ dev trc,00000] SELECT COMPCNT, UNICODELG FROM "DDNTT" WHERE TABNAME = 'SVERS' ; 21 0.161298
4 ETW000 [ dev trc,00000] CURSOR C_0001 PREPARE on connection 0 19 0.161317
4 ETW000 [ dev trc,00000] } DbSlHDBPrepare(rc=0) 6453 0.167770
4 ETW000 [ dev trc,00000] db_con_test_and_open: 1 open cursors (delta=1) 30 0.167800
4 ETW000 [ dev trc,00000] db_con_check_dirty: 1 open cursors, tx = NO , bc = NO 18 0.167818
4 ETW000 [ dev trc,00000] db_con_check_dirty: db_con_dirty = YES 16 0.167834
4 ETW000 [ dev trc,00000] { DbSlHDBBegRead(con_hdl=0,ss_p=0000000002055160,da_p=00000000020551B0)
4 ETW000 35 0.167869
4 ETW000 [ dev trc,00000] ABAP USER is not set 23 0.167892
4 ETW000 [ dev trc,00000] -> activate_stmt(sc_hdl=0000000003AEACD8,da_p=00000000020551B0) 23 0.167915
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEACD8,in_out=0,bulk=0,da_p=00000000020551B0)
4 ETW000 32 0.167947
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=0,col_cnt=0) 23 0.167970
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEACD8,in_out=0,bulk=0,types=0000000000000000,#col=0,useBulkInsertWithLobs=0)
4 ETW000 34 0.168004
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=0,#int=0,#llong=0,#uc=0,rec_lng=0,db_lng=0
4 ETW000 30 0.168034
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=0, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=1)
4 ETW000 31 0.168065
4 ETW000 [ dev trc,00000] -> exec_modify(sc_hdl=0000000003AEACD8,ss_p=0000000002055160,bulk=0,in_out=1,da_p=00000000020551B0)
4 ETW000 32 0.168097
4 ETW000 [ dev trc,00000] -> stmt_execute(sc_hdl=0000000003AEACD8,ss_p=0000000002055160,in_out=1,da_p=00000000020551B0)
4 ETW000 32 0.168129
4 ETW000 [ dev trc,00000] OPEN CURSOR C_0001 on connection 0 20 0.168149
4 ETW000 [ dev trc,00000] CURSOR C_0001 SET InputSize=1 19 0.168168
4 ETW000 [ dev trc,00000] CURSOR C_0001 EXECUTE on connection 0 20 0.168188
4 ETW000 [ dev trc,00000] execute() of C_0001, #rec=0, rcSQL=0, rc=0 5712 0.173900
4 ETW000 [ dev trc,00000] CURSOR C_0001, rc=0,#rec=0,#dbcount=0 34 0.173934
4 ETW000 [ dev trc,00000] -> bind_variables(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,da_p=00000000020551B0)
4 ETW000 32 0.173966
4 ETW000 [ dev trc,00000] -> allocParameter(in_out=1,col_cnt=2) 21 0.173987
4 ETW000 [ dev trc,00000] -> calculate_record_length(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,types=0000000002055240,#col=2,useBulkInsertWithLobs=0)
4 ETW000 34 0.174021
4 ETW000 [ dev trc,00000] #float=0,#lob=0,itab=0,#short=2,#int=0,#llong=0,#uc=0,rec_lng=16,db_lng=4
4 ETW000 30 0.174051
4 ETW000 [ dev trc,00000] <- calculate_record_length(row_size=16, lob_cnt=0, lob_pw_cnt=0, long_cnt=0, ins_bulk_lob=0, row_max=65536)
4 ETW000 32 0.174083
4 ETW000 [ dev trc,00000] -> allocIndicator(in_out=1,row_cnt=65536) 20 0.174103
4 ETW000 [ dev trc,00000] -> allocData(in_out=1,size=1048576) 30 0.174133
4 ETW000 [ dev trc,00000] -> bind_type_and_length(sc_hdl=0000000003AEACD8,in_out=1,bulk=1,arr_size=65536,types=0000000002055240,da_p=00000000020551B0)
4 ETW000 36 0.174169
4 ETW000 [ dev trc,00000] } DbSlHDBBegRead(rc=0) 24 0.174193
4 ETW000 [ dev trc,00000] { DbSlHDBExeRead(con_hdl=0,ss_p=0000000002055160,da_p=00000000020551B0)
4 ETW000 35 0.174228
4 ETW000 [ dev trc,00000] ABAP USER is not set 20 0.174248
4 ETW000 [ dev trc,00000] -> exec_fetch(sc_hdl=0000000003AEACD8,bulk=0,da_p=00000000020551B0)
4 ETW000 33 0.174281
4 ETW000 [ dev trc,00000] xcnt=1,row_i=0,row_pcnt=0 20 0.174301
4 ETW000 [ dev trc,00000] -> stmt_fetch(sc_hdl=0000000003AEACD8) 20 0.174321
4 ETW000 [ dev trc,00000] CURSOR C_0001 FETCH (xcnt=1) on connection 0 20 0.174341
4 ETW000
Hi,
Could you check SAP Note 1952701 - DBSL supports new HANA version number?
Regards,
Gaurav -
My current Camera Raw 5.7 for Photoshop CS4 Extended does not support Canon EOS 6D. What can I do? Will there be a Photoshop Camera RAW for that?
You would need Camera Raw 7.3 or later to open those files. Adobe is not going to update CS4. You can either get a newer version of Photoshop that comes with a current version of Camera Raw, or you can use the DNG converter to save your files down to the ACR version that works with CS4.
-
"...does not support this type of alias."
I just reinstalled the OS 10.4 onto my G5 using the erase and install function. The drive is formatted as a Mac OS Extended (Journaled), which is an HFS+ format. I'm trying to copy some files onto the main hard drive from a DVD. I keep getting an error message that says "[the file name] cannot be copied to the destination, perhaps because the destination does not support this type of alias." I researched the message and the only thing I could find was that the error appears if the destination disk is formatted as a UFS disk. I thought maybe I made a mistake when I reinstalled the OS, so I tried to copy the same files to the secondary internal drive and to an external Firewire drive, and I got the same message both times.
What should I be looking for so that this will work? I'm going to post in a couple other topics, but any help would be greatly appreciated.
Thank you in advance,
Lloyd
Barry,
I was trying to make back-up copies of my Adobe CS2 disks by copying the disks to the hard drive then burning them to CDs, but I got the error message when I dragged the contents from the original CD to the HD.
This worked, but it took a while: I Stuffed each CD to the HD, then unStuffed them and used Toast to burn a new CD. A few steps longer, but at least it worked.
Now, if I could only figure out why the second internal drive does not have the "Ignore Permissions" check box available, I'd be all set.
Thanks,
Lloyd -
Hello,
I'll start from the end; the details are shown below. This is the error message I got in a SQL session:
SQL> select count(*) from EnergyType@ENERGOPLAN;
select count(*) from EnergyType@ENERGOPLAN
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[unixODBC][Driver Manager]Driver does not support this function {IM001}
ORA-02063: preceding 2 lines from ENERGOPLAN
SQL>
First question: is Oracle Heterogeneous Services licensed for Standard Edition? I can't find this information; my database is SE 11.2.0.3.0 - 64-bit.
If HS is licensed for SE, then please see the details of my problem:
----OS and packages version
[oracle@aris_sv_db log]$ uname -a
Linux aris_sv_db 2.6.18-308.24.1.el5 #1 SMP Tue Dec 4 17:43:34 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[oracle@aris_sv_db log]$
[oracle@aris_sv_db log]$ rpm -qa | grep odbc
[oracle@aris_sv_db log]$ rpm -qa | grep unixodbc
[oracle@aris_sv_db log]$ rpm -qa | grep unixODBC
unixODBC-libs-2.2.11-10.el5
unixODBC-libs-2.2.11-10.el5
unixODBC-devel-2.2.11-10.el5
unixODBC-2.2.11-10.el5
unixODBC-devel-2.2.11-10.el5
[oracle@aris_sv_db log]$ rpm -qa | grep freetds
freetds-0.91-1.el5.rf
[oracle@aris_sv_db log]$
-----ODBC.INI, ODBCINST.INI and FREETDS.CONF
[oracle@aris_sv_db log]$ more /home/oracle/.odbc.ini
[ENERGOPLAN]
Driver = FreeTDS
Servername = ENERGOPLAN
Database = ess2
[oracle@aris_sv_db log]$
[oracle@aris_sv_db log]$ more /etc/odbcinst.ini
# Example driver definitions
[FreeTDS]
Description = MSSQL Driver
Driver = /usr/lib64/libtdsodbc.so.0
#Setup = /usr/lib64/libtdsodbc.so.0
#Driver = /usr/lib64/libodbc.so
#Driver = /usr/lib/libodbc.so
UsageCount = 1
Trace = Yes
TraceFile = /tmp/freetds.log
[ODBC]
DEBUG = 1
TraceFile = /tmp/sqltrace.log
Trace = Yes
[oracle@aris_sv_db log]$
[oracle@aris_sv_db log]$ more /etc/freetds.conf
# A typical Microsoft server
[ENERGOPLAN]
host = 192.168.10.64
port = 1433
tds version = 8.0
# client charset = UTF-8
client charset = cp1251
[oracle@aris_sv_db log]$
----CHECK CONNECT from ODBC
[oracle@aris_sv_db log]$ isql -v ENERGOPLAN user pass
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
SQL> select count(*) from EnergyType;
| |
| 8 |
SQLRowCount returns 1
1 rows fetched
SQL> [oracle@aris_sv_db log]$ tsql -S ENERGOPLAN -U user -P pass
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "cp1251"
1> select count(*) from EnergyType;
2> go
8
(1 row affected)
1> [oracle@aris_sv_db log]$
----LISTENER.ORA, TNSNAMES and initENERGOPLAN.ora
[oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
SID_LIST_ENERGOPLAN =
(SID_LIST =
(SID_DESC=
(SID_NAME=ENERGOPLAN)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
(PROGRAM=dg4odbc)
(ENVS="LD_LIBRARY_PATH=/usr/lib64:/u01/app/oracle/product/11.2.0/dbhome_1/lib")
ENERGOPLAN =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = PNPKEY))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.72)(PORT = 1523))
ADR_BASE_LISTENER = /u01/app/oracle
[oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
ENERGOPLAN =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.72)(PORT=1523))
(CONNECT_DATA=(SID=ENERGOPLAN))
(HS=OK)
[oracle@aris_sv_db log]$ more /u01/app/oracle/product/11.2.0/dbhome_1/hs/admin/initENERGOPLAN.ora
# This is a sample agent init file that contains the HS parameters that are
# needed for the Database Gateway for ODBC
# HS init parameters
HS_FDS_CONNECT_INFO = ENERGOPLAN
#HS_FDS_CONNECT_INFO = 192.168.0.199:1433//test
HS_FDS_TRACE_LEVEL = DEBUG
#HS_FDS_TRACE_FILE_NAME = /tmp/hs1.log
HS_FDS_TRACE_FILE_NAME = /u01/app/oracle/product/11.2.0/dbhome_1/hs/log/mytrace.log
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so #/usr/lib64/libtdsodbc.so.0
#HS_FDS_SHAREABLE_NAME = /usr/lib64/libtdsodbc.so.0
#HS_FDS_SHAREABLE_NAME = /usr/lib/libodbc.so
#HS_LANGUAGE=american_america.we8iso8859p1
#HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8
#HS_LANGUAGE=AMERICAN_AMERICA.CL8MSWIN1251
#HS_LANGUAGE=RUSSIAN_RUSSIA.UTF8
#HS_LANGUAGE=Russian_CIS.AL32UTF-8
#HS_FDS_FETCH_ROWS=1
HS_NLS_NCHAR = UCS2
HS_FDS_SQLLEN_INTERPRETATION=32
# ODBC specific environment variables
set ODBCINI=/home/oracle/.odbc.ini
set ODBCINSTINI=/etc/odbcinst.ini
#HS_KEEP_REMOTE_COLUMN_SIZE=ALL
#HS_NLS_LENGTH_SEMANTICS=CHAR
#HS_FDS_SUPPORT_STATISTICS=FALSE
# Environment variables required for the non-Oracle system
#set <envvar>=<value>
[oracle@aris_sv_db log]$
[oracle@aris_sv_db log]$ tnsping ENERGOPLAN
TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 01-APR-2013 16:27:49
Copyright (c) 1997, 2011, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.10.72)(PORT=1523)) (CONNECT_DATA=(SID=ENERGOPLAN)) (HS=OK))
OK (0 msec)
[oracle@aris_sv_db log]$
----CREATE DBLINK and test from sqlplus
CREATE DATABASE LINK "ENERGOPLAN" CONNECT TO "user" IDENTIFIED BY "pass" USING 'ENERGOPLAN';
[oracle@aris_sv_db log]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 1 16:30:14 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL> select count(*) from EnergyType@ENERGOPLAN;
select count(*) from EnergyType@ENERGOPLAN
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[unixODBC][Driver Manager]Driver does not support this function {IM001}
ORA-02063: preceding 2 lines from ENERGOPLAN
SQL>
----logs from hs and odbc
[oracle@aris_sv_db log]$ tail -50 ENERGOPLAN_agt_12117.trc
12 VARCHAR N 100 100 0/ 0 1000 0 200 ConsumptionYearCostUOM
3 DECIMAL N 24 24 9/ 3 0 0 0 ConsumptionYearFactorAmount
-7 BIT N 1 1 0/ 0 0 0 20 NeedToBeApprovedByREK
Exiting hgodtab, rc=0 at 2013/04/01-16:30:42
Entered hgodafr, cursor id 0 at 2013/04/01-16:30:42
Free hoada @ 0x14e5fd20
Exiting hgodafr, rc=0 at 2013/04/01-16:30:42
Entered hgopars, cursor id 1 at 2013/04/01-16:30:42
type:0
SQL text from hgopars, id=1, len=36 ...
00: 53454C45 43542043 4F554E54 282A2920 [SELECT COUNT(*) ]
10: 46524F4D 2022454E 45524759 54595045 [FROM "ENERGYTYPE]
20: 22204131 [" A1]
Exiting hgopars, rc=0 at 2013/04/01-16:30:42
Entered hgoopen, cursor id 1 at 2013/04/01-16:30:42
hgoopen, line 87: NO hoada to print
Deferred open until first fetch.
Exiting hgoopen, rc=0 at 2013/04/01-16:30:42
Entered hgodscr, cursor id 1 at 2013/04/01-16:30:42
Allocate hoada @ 0x14e5fd80
Entered hgodscr_process_sellist_description at 2013/04/01-16:30:42
Entered hgopcda at 2013/04/01-16:30:42
Column:1(): dtype:4 (INTEGER), prc/scl:10/0, nullbl:1, octet:0, sign:1, radix:0
Exiting hgopcda, rc=0 at 2013/04/01-16:30:42
Entered hgopoer at 2013/04/01-16:30:42
hgopoer, line 231: got native error 0 and sqlstate IM001; message follows...
[unixODBC][Driver Manager]Driver does not support this function {IM001}
Exiting hgopoer, rc=0 at 2013/04/01-16:30:42
hgodscr, line 407: calling SQLSetStmtAttr got sqlstate IM001
Free hoada @ 0x14e5fd80
hgodscr, line 464: NO hoada to print
Exiting hgodscr, rc=28500 at 2013/04/01-16:30:42 with error ptr FILE:hgodscr.c LINE:407 FUNCTION:hgodscr() ID:Set array fetch size
Entered hgoclse, cursor id 1 at 2013/04/01-16:31:24
Exiting hgoclse, rc=0 at 2013/04/01-16:31:24
Entered hgocomm at 2013/04/01-16:31:24
keepinfo:0, tflag:1
00: 4F52434C 2E343535 32623466 342E362E [ORCL.4552b4f4.6.]
10: 32322E37 363237 [22.7627]
tbid (len 20) is ...
00: 4F52434C 5B362E32 322E3736 32375D5B [ORCL[6.22.7627][]
10: 312E345D [1.4]]
cmt(0):
Entered hgocpctx at 2013/04/01-16:31:24
Exiting hgocpctx, rc=0 at 2013/04/01-16:31:24
Exiting hgocomm, rc=0 at 2013/04/01-16:31:24
Entered hgolgof at 2013/04/01-16:31:24
tflag:1
Exiting hgolgof, rc=0 at 2013/04/01-16:31:24
Entered hgoexit at 2013/04/01-16:31:24
Exiting hgoexit, rc=0
[oracle@aris_sv_db log]$
[oracle@aris_sv_db log]$ tail -50 /tmp/sqltrace.log
Native = 0x7fff6ca974f4
Message Text = 0x14e5f968
Buffer Length = 510
Text Len Ptr = 0x7fff6ca97750
[ODBC][12117][SQLGetDiagRecW.c][582]
Exit:[SQL_SUCCESS]
SQLState = IM001
Native = 0x7fff6ca974f4 -> 0
Message Text = [[unixODBC][Driver Manager]Driver does not support this function]
[ODBC][12117][SQLGetDiagRecW.c][540]
Entry:
Statement = 0x14e399f0
Rec Number = 2
SQLState = 0x7fff6ca97700
Native = 0x7fff6ca974f4
Message Text = 0x14e5f908
Buffer Length = 510
Text Len Ptr = 0x7fff6ca97750
[ODBC][12117][SQLGetDiagRecW.c][582]
Exit:[SQL_NO_DATA]
[ODBC][12117][SQLEndTran.c][315]
Entry:
Connection = 0x14dbd4b0
Completion Type = 0
[ODBC][12117][SQLGetInfo.c][214]
Entry:
Connection = 0x14dbd4b0
Info Type = SQL_CURSOR_COMMIT_BEHAVIOR (23)
Info Value = 0x7fff6ca9781e
Buffer Length = 8
StrLen = 0x7fff6ca9781c
[ODBC][12117][SQLGetInfo.c][528]
Exit:[SQL_SUCCESS]
[ODBC][12117][SQLEndTran.c][488]
Exit:[SQL_SUCCESS]
[ODBC][12117][SQLDisconnect.c][204]
Entry:
Connection = 0x14dbd4b0
[ODBC][12117][SQLDisconnect.c][341]
Exit:[SQL_SUCCESS]
[ODBC][12117][SQLFreeHandle.c][268]
Entry:
Handle Type = 2
Input Handle = 0x14dbd4b0
[ODBC][12117][SQLFreeHandle.c][317]
Exit:[SQL_SUCCESS]
[ODBC][12117][SQLFreeHandle.c][203]
Entry:
Handle Type = 1
Input Handle = 0x14dbb0c0
[oracle@aris_sv_db log]$

To see which ODBC function DG4ODBC is looking for and unixODBC isn't supporting, it would be best to get an ODBC trace file. But your unixODBC driver (unixODBC-2.2.11-10.el5) is outdated, and these old drivers had a lot of issues when used on 64-bit operating systems (for example a wrong sizeof(int)). So it would be best to update the unixODBC Driver Manager to release 2.3.x. More details can be found on the web site: www.unixodbc.org
- Klaus -
Image cannot be rendered because Aperture does not support image format
Dear all,
I have installed apple Aperture 3.03 and the complete Nick software plug-in selection:
• Dfine 2.0 for Aperture
• Color Efex Pro 3.0 Complete for Aperture
• Silver Efex Pro for Aperture
• Sharpener Pro 3.0 for Aperture
• Viveza 2 for Aperture
History
Until a few days ago the system was running ok / no notable issues on performance. / all pug in seem to run good / no issues. Also
• I Run OSX10.4/ Aperture 3.03 ( 32Bit mode)
• My library contains just over 10000 images / approximately 140GB
• I have recently updated the OSX software update including the security update 2010-005
• as well as the Snow leopard graphic update 1.0
• As far as I am aware, all updates were automatically recommended by OSX Software Update (no manual intervention).
• As far as I remember, after the update the system was still running OK, but honestly I cannot tell you how many times the Nick plug-ins started out of Aperture.
After all, I continued using the system several times with still no noticeable problem, until the following happened:
Problem:
After creating a panoramic image with "PT gui" (file type: TIF, pixel size: 5112 × 2556 (13.1 MP)), I imported this file into Aperture (drag and drop). During the attempt to edit this file with "Nick Dfine 2.0" the system was hanging: no response for several minutes from Dfine 2.0 / Aperture. In the end I had to force quit the applications.
After the subsequent reopening of Aperture I tried again to edit images with any Nick software plug-in, but each time Aperture prepares an image previously stored in my library to open the plug-in, the following error message appears:
*"This image cannot be rendered for editing because Aperture does not support the image format"*
Currently none of the images previously saved in my library can be opened in a Nick software plug-in; this applies to all file types I have tried (RAW, TIF, JPG).
• I am still able to edit normally within Aperture (so far I have not found any other issue)
• A newly imported RAW image taken with my EOS 5D can be edited in the complete Nick software plug-in selection (so far I have not found any other issue)
The following actions have been taken to overcome the issue (all not successful):
• I restored my library from my backup (to a time prior to the event; no Time Machine backup)
• Uninstallation of all Nick software Plug-In selection
• Uninstallation and reinstallation of Aperture / the Nick software plug-in selection
• Repairing the library ( all three possibilities)
• Installation of the latest EOS utilities
Questions
• Can you support me to overcome this issue?
• Have you heard of similar issues from Nick users running the Aperture plug-ins?
For me it is really strange that the problem still exists even after restoring the library from my backup, which is normally not connected (so it should not have been affected by the event), while newly imported images are editable with the plug-ins.
I would be happy if you could support me in this issue.
Best regards,
Matthias
PS: I have reported this issue to Apple (via Aperture feedback) as well as contacted the Nick software support and currently waiting for feedback.
Hardware:
Model Name: MacBook Pro
Model Identifier: MacBookPro5,1
Processor Type: Intel Core 2 Duo
Processor Speed: 2.66 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache: 6 MB
Memory: 4 GB
Bus Speed: 1.07 GHz
Boot ROM Version: MBP51.007E.B05
SMC Version (system): 1.41f2

Dear Ma-Le / All,
I have just had the same problem
I use
• Aperture 3.03 with an iMac 2.8 Intel Core 2 Duo, with all files on external hard disks (Mac extended format, as Apple suggests) with referenced masters.
• Camera Nikon D300, with probably 75,000+ images on 2 hard drives
• Photoshop CS2 with PhotoTools 2.5 plugin
This has been working perfectly until yesterday when the system froze when I was using PhotoTools 2.5 with Photoshop CS2 - as a result of which I resorted to a force quit.
Since then on most of my files it has not been possible to use an external editor.
A message appears saying: *Editing Error - This image cannot be rendered for editing because Aperture does not support image format*.
The problem seems to apply to the attempted use of any external editor (including Noise Ninja)
The following have each been tried, all unsuccessfully:
- Using each of Aperture's library 3 first aid options
- Rebuilding directory using Diskwarrior
- Checking for virus using Virus Barrier X4
- Defragmenting library hard disk using TechTool pro
- Changing permissions settings
- Using Disk Utility first aid to repair permissions and checking main disk
- Removing some plist elements when open 'show package contents' of library
- Setting up a(n almost clean) new system, with a newly loaded version of Aperture, with a new library from a vault saved prior to the crash when the problem first occurred
Several things seem to me to be totally bizarre:
1 - The problem is the same on the other library hard disk which was not in use at the time
2 - The problem still occurs when a back up vault saved prior to this problem is loaded - using a new hard disk with a new system and a newly reloaded and upgraded Aperture software
3 - The problem seems inconsistent. It appears to affect some photos but not all. Even from the same shoot, some photos can be edited using an external editor, whilst others cannot (but as far as I can tell, most of the photos in a particular album seem to be consistently affected)
4- The only way around it seems to be if I import a new (copy image) from the original master. Then everything works ok, and I can successfully edit that copy image in photoshop / phototools plug-in.
I am beginning to wonder whether what has been corrupted is Aperture's ability to make copies from the master file which it then uses with the external editor (I have no real idea whether this is correct)
Does anyone have any ideas or solutions - or has anyone else been suffering a similar problem?
Eric
PS: As a professional photographer this problem is a really serious issue for me - and I really don't want to go to Lightroom or Capture One -
I am getting the following error when attempting to INSERT the results of an "EXEC(@MDXQuery) at SSAS LinkedServer":
The requested operation could not be performed because OLE DB provider "MSOLAP" for linked server does not support the required transaction interface.
Here is code that illustrates what I am doing:
DECLARE @MDX varchar(max);
SET @MDX='
SELECT
{
[Measures].[Extended Service Count]
} ON COLUMNS,
NON EMPTY [Organization].[By Manufacturer].[Manufacturer]
ON ROWS
FROM (
SELECT
{[Organization].[Org Tree].&[2025],[Organization].[Org Tree].&[2040]} ON 0
FROM [MyCube]
)';
/* Test 1 */
EXECUTE(@MDX) at SSASLinkedServer;
/* Test 2 */
DECLARE @ResultsB TABLE (
Manufacturer varchar(255)
, ExtendedServiceCount float
);
INSERT INTO @ResultsB (Manufacturer, ExtendedServiceCount) EXECUTE(@MDX) at SSASLinkedServer;
Test 1 succeeds, returning expected results, and Test 2 fails returning the error mentioned above.
Other articles I've found so far don't seem to apply to my case. I am not creating any explicit transactions in my code. When I use OPENQUERY, I am able to do the insert just fine, but not when I use EXEC @MDX at LinkedServer.
Unfortunately in some variations of the query, I run into the 8800 character limit on OPENQUERY, so I need to use this other approach.
Any ideas?
-Tab Alleman

Hi Tab,
In this case, SQL Server Analysis Services doesn’t support Distributed Transactions by design. Here is a similar thread about this issue for your reference, please see:
http://social.technet.microsoft.com/Forums/en-US/8b07be45-01b6-49d4-b773-9f441c0e44c9/olaplinked-server-error-msolap-for-linked-server-olaplinked-server-does-not-support-the?forum=sqlanalysisservices
One workaround is to use SQLCMD to execute the EXEC AT command and save the results to a file, then import them using SSIS.
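That workaround might be sketched as follows; the server, database, and output file names here are placeholders, and the query string stands in for the MDX batch from the original post:

```shell
# Run the linked-server query outside a distributed transaction via sqlcmd,
# writing the result set to a pipe-delimited file (-s "|"), trimmed (-W):
sqlcmd -S MySqlServer -d MyDatabase \
       -Q "EXEC('...MDX batch here...') AT SSASLinkedServer" \
       -o results.txt -s "|" -W
# Then bulk-load results.txt into the target table with SSIS (or BULK INSERT).
```

Because sqlcmd only returns the rows rather than INSERTing them across the linked server, no distributed transaction is required.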
If you have any feedback on our support, please click
here.
Regards,
Elvis Long
TechNet Community Support -
Macbook Pro Retina HDMI Does Not Support 2560 x 1080
I have both the early and late 2013 rMBP 15 inch, but it seems like the early 2013 does not support my Dell U2913WM at its maximum resolution of 2560 x 1080 for some reason over HDMI. Are the HDMI ports any different between the two? I've tried using SwitchResX, but at 2560 x 1080 I can only go up to 53 Hz before it says invalid configuration on the early 2013 rMBP. On the late 2013 rMBP, it can output 2560 x 1080 at 60 Hz fine. It would seem like both graphics cards are more than capable of pushing this resolution, and if I connect through DisplayPort, they're both fine too. I was hoping to use HDMI though, to free up a Thunderbolt port. Why are the HDMI ports not outputting at the same capabilities?
Absolutely, I understand that, but I'm wondering if or why Apple has limited the HDMI port on the early 2013 rMBP. HDMI 1.4 should be more than capable of outputting up to 4K resolution, so it would seem like Apple is intentionally limiting the early 2013 rMBP to HDMI 1.2? At least one post from as early as 2012 seems to suggest that installing Windows on a rMBP allows higher outputs. I thought Mavericks included support for HDMI 1.4 (http://www.reduser.net/forum/showthread.php?101431-10-9-Mavericks-4K-working-testing-it-now) but did they not extend it down to the early 2013 rMBP? It seems like it would be able to support it, no?
Perhaps the better question is whether anyone has been able to output anything at over 1080P on anything other than the latest rMBP after upgrading to Mavericks?
From the earlier discussion:
"The port definitely supports it, but OS X is another story. If I boot into Windows and connect it to a display capable of displaying 2560x1440 over HDMI, it just works. If I'm in OS X, it refuses to allow anything over 1920x1200. Even if I try to force the resolution with SwitchResX, it doesn't work. If I create a custom resolution with a 40Hz refresh rate instead of 60Hz, then it does actually work. 2560x1440 @ 40Hz fits within the bandwidth constraints of a single link at 165MHz. So it appears that OS X limits the port to 1.2 frequencies. I'm not really sure why -- HDMI 1.4 should allow for frequencies up to 340MHz, and the hardware is clearly capable based on Windows."
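The bandwidth arithmetic in that quote can be checked with a quick sketch. The blanking overhead below is an assumed, CVT-RB-like approximation (+160 pixels horizontal, +30 lines vertical), not the exact timings a real display negotiates:

```python
# Rough pixel-clock estimate for a video mode, blanking included.
def pixel_clock_hz(h_active, v_active, refresh_hz, h_blank=160, v_blank=30):
    """Approximate pixel clock: (active + blanking) pixels per frame x refresh."""
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz

SINGLE_LINK_LIMIT_HZ = 165_000_000  # classic 165 MHz single-link TMDS limit

for h, v, hz in [(2560, 1440, 60), (2560, 1440, 40), (1920, 1200, 60)]:
    clock = pixel_clock_hz(h, v, hz)
    verdict = "fits single link" if clock <= SINGLE_LINK_LIMIT_HZ else "exceeds single link"
    print(f"{h}x{v}@{hz}Hz ~ {clock / 1e6:.1f} MHz, {verdict}")
```

With these assumed timings, 2560x1440 @ 40 Hz comes out just under 165 MHz while 60 Hz is well over it, which matches the behaviour described in the quote (and 1920x1200 @ 60 Hz also fits, consistent with OS X allowing it).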
Now that Mavericks supports higher than 1080P over HDMI at least on the latest rMBP, did they just not extend that support to earlier rMBP's even though the hardware is capable of it? -
Mavericks Server; smbd: File system does not support 0x0, time/size attrs
So I recently installed a Mac Mini with Mavericks and Server 3.2.2 in the main office of my company, and everything had been going well, minus a few expected bugs, until recently. Starting last week we've been experiencing random intervals where File Sharing stopped working altogether, and I'd been able to reboot and get it running again. Unfortunately, as I'm the Systems and Server Admin, I've been too busy to look at the logs until now (following the degradation of one of the drives in our DAT-Optic RAID system, I figured it was time to make it a priority), and I'm seeing dozens of entries in the system logs from the smbd process with "file system does not support" errors.
Here's an example.
11/19/14 9:56:21.107 AM smbd[602]: File system does not support 0X40000, file attrs
11/19/14 9:56:21.107 AM smbd[602]: File system does not support 0X0 time attrs
11/19/14 9:56:21.107 AM smbd[602]: File system does not support 0X0, size attrs
As far as I can tell, nothing else has changed in our system except that one of the drives died and was rebuilt as of yesterday (these smbd problems go back to last week, shortly after I updated to 3.2.2, I believe). Any help would be much appreciated. I would normally first attempt a repair permissions to see if the issue is related to a bad plist somewhere, but the last time we did that, our ACLs were duplicated across our file share, and often incorrectly. I'm not sure if it's just me, but the last Mac server software I used was Snow Leopard Server (when it was still a full OS version) and it was infinitely more stable than what I've experienced thus far with Server 3. Anyway, I appreciate any help/advice that can be given, and I apologize for the rant.

Been getting the same errors for a while.
1/13/15 11:38:04.975 AM smbd[1112]: File system does not support 0X40000, file attrs
1/13/15 11:38:04.975 AM smbd[1112]: File system does not support 0X0 time attrs
1/13/15 11:38:04.975 AM smbd[1112]: File system does not support 0X0, size attrs
1/13/15 11:38:05.124 AM smbd[1112]: File system does not support 0X40000, file attrs
1/13/15 11:38:05.124 AM smbd[1112]: File system does not support 0X0 time attrs
1/13/15 11:38:05.124 AM smbd[1112]: File system does not support 0X0, size attrs
Believe they are related to random disconnects from the file server (10.9.5). -
Soundcard driver does not support DirectSound ?
Hi,
I just had to reinstall my OS (Vista). Then I reinstalled AE CS3; now when I open AE I get a message saying:
"The currently installed soundcard driver does not support DirectSound Input. Recording audio is not possible"
My soundcard is a 'High Definition Audio Device' made by Microsoft, it works fine and all the drivers are up to date.
Does anyone know anything about this problem or how to fix it ?
Thanks in advance.
J.

Well, what driver did it use before you re-installed? A standard MS HD Audio device doesn't mean anything; Windows would install that standard driver on a 10-year-old computer, as it's more or less an emulation device. You will have to install the correct chipset-specific driver, which probably merely requires a manual initialization of Windows Update with extended options. Likewise, you should be able to find out what audio device is in your system, e.g. by using SiSoft Sandra to probe it...
Mylenium -
My CS5 program does not support CR2 photo taken with Canon Rebel T5i. Why?
My CS5 program does not support CR2 photo taken with Canon Rebel T5i camera. Why?
If you take a look at the following charts, you will find that both the S110 and the T5i are not supported in CS5; both cameras require Photoshop CS6 with a Camera Raw point upgrade, Photoshop CC, or Lightroom 4.3 (S110) / 4.4 (T5i).
Camera Raw plug-in Supported cameras
http://helpx.adobe.com/creative-suite/kb/camera-raw-plug-supported-cameras.html
Photoshop CC indepth : camera raw - Supported cameras for plugin and lightroom
http://www.adobe.com/products/photoshop/extend.html
Camera Raw-compatible Adobe applications
http://helpx.adobe.com/x-productkb/global/camera-raw-compatible-applications.html -
My mac pro does not support boot camp (somehow)?
So for some reason, every time I try to open the Boot Camp Assistant I get this message: "Boot Camp Assistant cannot be used. This Mac does not support Boot Camp". I find this very confusing, as it should be able to (or so I assume, since Boot Camp came pre-installed). I don't have a RAID set, or even a RAID card for that matter. All the HDs are formatted as Mac OS Extended (Journaled). I'm not sure what else I'm missing. I would consider using Parallels like on my MacBook, except that I'm trying to test out some rather graphics-intensive games to see whether or not I want to build a gaming rig in the future, and virtual machines have this nasty habit of cutting my power in half.
Boot camp is 5.1.3
computer specs
Mac Pro (Mid 2010)
OSX 10.10.2
Processor: 2 x 2.66 GHz 6-Core Intel Xeon
Memory: 24 GB 1333 MHz DDR3 ECC
Graphics: NVIDIA GeForce GTX 680 2048 MB
4 × 2 TB Hitachi HDs
Any help is much appreciated, thank you.

I do not know if this will help:
Boot Camp Assistant cannot be used
Also try posting in the BootCamp forum
Boot Camp -
How Java does not support multiple inheritance
Hi,
I have got a small doubt....
Generally it is said that Java does not support multiple inheritance. I agree.
But then we know that every class in Java is by default a subclass of the Object class, so how is it possible to inherit one more class using the extends keyword?
I am confused.
Akshatha

"Generally it is said that Java does not support multiple inheritance. I agree."
Generally, this is wrong. Java does support MI, just not MI of implementation. You can extend as many interfaces as you like.
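That distinction (multiple inheritance of type via interfaces, but single inheritance of implementation) can be sketched in a few lines; the class and interface names here are invented for illustration:

```java
// Java allows multiple inheritance of *type* (interfaces),
// but only single inheritance of *implementation* (one superclass).
interface Flyer { default String fly() { return "flying"; } }
interface Swimmer { default String swim() { return "swimming"; } }

class Animal { String name() { return "animal"; } }

// extends exactly one class, implements any number of interfaces
class Duck extends Animal implements Flyer, Swimmer { }
```

Trying `class Duck extends Animal, Vehicle` would be a compile error; only the interface list may grow.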
"But then we know that every class in Java by default is a subclass of the Object class, then how is it possible to inherit one more class using the extends keyword."
It is not possible. You can only extend directly from one single class. If you extend from something other than Object, you're not directly extending Object anymore. -
Cache "dist-test" does not support pass-through optimization
I have noticed this message in the log of my Extend proxy JVM. It is logged at INFO level.
The cache dist-test does not support pass-through optimization for objects in internal format. If possible, consider using a different cache topology.
The Extend proxy JVM is running as a storage disabled node of the cluster.
Any ideas what is causing it?
dist-test is configured like this:
<caching-scheme-mapping>
<cache-mapping>
<cache-name>dist-*</cache-name>
<scheme-name>near-entitled-scheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<near-scheme>
<scheme-name>near-entitled-scheme</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>dist-default</scheme-ref>
</distributed-scheme>
</back-scheme>
</near-scheme>
<distributed-scheme>
<scheme-name>dist-default</scheme-name>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
<lease-granularity>member</lease-granularity>
<backing-map-scheme>
<local-scheme>
<listener>
<class-scheme>
<class-name>{backing-map-listener-class-name com.oracle.coherence.common.backingmaplisteners.NullBackingMapListener}</class-name>
<init-params>
<init-param>
<param-type>com.tangosol.net.BackingMapManagerContext</param-type>
<param-value>{manager-context}</param-value>
</init-param>
</init-params>
</class-scheme>
</listener>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>

I presume it is something to do with the near-scheme, because I do not see the message if I map dist-* caches directly to the dist-default scheme.
Cheers,
JK.Hi Jonathan,
You are getting the warning because a near cache caches objects. Since the proxy service is using POF, it must deserialize the POF-serialized value in order to put it in the near cache. You don't see the message when you map directly to the dist-default scheme because that scheme is configured to use POF, which allows the proxy service to pass the POF-serialized value straight through to the distributed cache service.
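Jonathan's workaround of mapping dist-* straight to the distributed scheme (skipping the near cache on the proxy node) would look like this in the cache mapping, reusing the scheme names from his configuration; the proxy then passes POF values through without deserializing, and the warning goes away:

```xml
<cache-mapping>
  <cache-name>dist-*</cache-name>
  <scheme-name>dist-default</scheme-name>
</cache-mapping>
```

A common variant is to use this mapping only in the proxy JVM's cache configuration, keeping the near-scheme mapping for regular client/storage members.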
Thanks,
Tom -
ORA-26744: STREAMS capture process "STRING" does not support "STRING"
Hi All,
I have configured oracle streams using Note "How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]" at schema level
All the changes were getting reflected perfectly and replication was running smoothly, but today I suddenly hit the below error and the capture aborted:
ORA-26744: STREAMS capture process "STREAM_CAPTURE" does not support "AMSATMS_PAWS"."B_SEARCH_PREFERENCE" because of the following reason:
ORA-26783: Column data type not supported
A couple of suggestions on the forum are to add a negative rule set. Please suggest how I add a negative rule set, and if this table is added to the negative rule set, how will changes to it be reflected in the target database?
Please help me...
Thanks

I do not have any idea why it treats your XMLType stored as CLOB like a binary XMLType. From the doc, we read:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/ap_restrictions.htm#BABGIFEA
Unsupported Data Types for Capture Processes
A capture process does not capture the results of DML changes to columns of the following data types:
* SecureFile CLOB, NCLOB, and BLOB
* BFILE
* ROWID
* User-defined types (including object types, REFs, varrays, and nested tables)
* XMLType stored object relationally or as binary XML <----------------------------
* The following Oracle-supplied types: Any types, URI types, spatial types, and media types
A capture process raises an error if it tries to create a row LCR for a DML change to a column of
an unsupported data type. When a capture process raises an error, it writes the LCR that caused
the error into its trace file, raises an ORA-26744 error, and becomes disabled. For your support
NOTE:556742.1 - Extended Datatype Support (EDS) for Streams
to exclude the table:
NOTE:239623.1 - How To Exclude A Table From Schema Capture And Replication When Using Schema Level Streams Replication
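Based on the DBMS_STREAMS_ADM documentation, adding the problem table to the capture process's negative rule set would look roughly like the sketch below; the queue name is an assumption, so substitute your own capture queue. A table in the negative rule set is simply skipped by capture, so its changes will not be replicated to the target at all:

```sql
-- Sketch: add AMSATMS_PAWS.B_SEARCH_PREFERENCE to the NEGATIVE rule set
-- of the capture process so its LCRs are discarded instead of raising ORA-26744.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'AMSATMS_PAWS.B_SEARCH_PREFERENCE',
    streams_type   => 'capture',
    streams_name   => 'STREAM_CAPTURE',
    queue_name     => 'strmadmin.streams_queue',  -- assumption: your capture queue
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE);  -- FALSE = add the rules to the negative rule set
END;
/
```

After adding the rules, restart the capture process with DBMS_CAPTURE_ADM.START_CAPTURE.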
Sounds like a specific patch. You did not state which version of Oracle you are running.