Restricting variable maintenance in non-development systems
Hello,
Other than managing it via authorizations, does anyone know how to restrict/protect the maintenance of variables in non-development systems? Perhaps via configuration or the system change option in the Transport Organizer tools? Thanks.
Regards,
Fairis
In your non-development system, go to RSA1 -> Transport Connection -> Object Changeability and set the query elements to "not changeable".
Similar Messages
-
Editing expressions in non-development systems.
Our first use of BRF+ was a great success among our business key users, and they are now requesting maintenance of some decision tables.
We therefore followed the guidelines of the document "Editing BRF+ components in non-development systems" and implemented the interface IF_FDT_APPLICATION_SETTINGS.
All went fine on that part, but we hit major trouble with the authorizations.
Because our application was defined as 'customizing', we keep getting the message "System settings do not allow changes for customizing objects", and the system asks for authorization object S_TRANSPRT in production.
Setting the application to 'customizing' was not the best idea, as all objects created afterwards automatically inherited this property.
Is there any way to force the status of specific expressions (not all, as that would not make sense) from customizing to master data?
As Christian explained, you may split your use case into two applications.
For bigger use cases I anyway recommend putting the function and (shared) data objects into a system application and the rules into a customizing application. In scenarios where you frequently need to change rules you can also use a master data application, not only customizing.
However, we have seen several issues with customers changing rules in the productive system (be it master data, or customizing with the application exit). The problems are:
missing checks on activation of changes (although customers could implement exits for any specific checks they want)
conflicts with imports (again, customers could implement custom code in exits to prevent some of the issues)
missing activation workflow (here, too, custom solutions may help)
times when BRFplus may fall back into interpretation mode because of changes happening in parallel
As a consequence, I generally do not recommend changing rules in productive systems directly. Instead, my recommendation is NW Decision Service Management, which provides all the tools to perform your changes, test them, deploy them (with or without a release workflow), and use them (always as generated code).
- Features of DSM -
Is there a way to activate the whole application in a non-development system?
Hi All,
Is there a way to activate the whole application in a non-development system, using some BRF+ tool?
We copied a sample application and customized it per our requirements. It was then released to the test system for testing. On the test system the application, with all its components, is in an inactive state. We reactivated the application with all its components and released it again to the test system, but the application is still inactive.
The application is of storage type 'system', so we cannot use the changeability exit to activate it on the test system.
The transport request log shows it was imported with errors. Below is an extract of the error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
BRF+: Runtime client: 000 / target client: 400
BRF+: Object activation of new version started for 418 object IDs
BRF+: <<< *** BEGIN: Content from Application Component CA *** >>>
BRF+: <<< BEGIN: Messages for IDs (53465BA36D8651B0E1008000AC11420B/ ) Table 'Dunning Proposal Line Items (Table)' >>>
No active version found for 23.04.2014 08:14:10 with timestamp
No active version found for IT_FKKMAVS with timestamp 23.04.2014 08:14:10
No active version found for IT_FKKMAVS with timestamp 23.04.2014 08:14:11
BRF+: <<< END : Messages for IDs (53465BA36D8651B0E1008000AC11420B/ ) Table 'Dunning Proposal Line Items (Table)' >>>
BRF+: <<< *** END : Content from Application Component CA *** >>>
BRF+: Object activation failed (step: Activate )
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
BRF+: Import queue update with RC 12
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Errors occurred during post-handling FDT_AFTER_IMPORT for FDT0001 T
FDT_AFTER_IMPORT belongs to package SFDT_TRANSPORT
The errors affect the following components:
BC-SRV-BR (BRFplus - ABAP-Based Business Rules)
Post-import method FDT_AFTER_IMPORT completed for FDT0001 T, date and time: 20140423011412
Post-import methods of change/transport request DE1K901989 completed
Start of subsequent processing ... 20140423011359
End of subsequent processing... 20140423011412
Any help would be appreciated.
Is IT_FKKMAVS part of the same transport request, or was it sent earlier?
You may check whether that earlier request imported OK. Probably not.
Maybe in the meantime more requests have reached the system that, in combination, now solve the problem. What are your release and support package levels?
Higher versions of BRFplus have a lot of automatic correction mechanisms built in.
For example, problematic imports are collected in an import queue. As soon as a request arrives that fixes the problems, the after-import processing for the faulty imports is automatically redone. -
Good day colleagues.
We are in the process of introducing retrofit, and the picture is generally clear to us: retrofit the project landscape with the changes made in the maintenance landscape.
What about the other way around? When the project is set to go live, we understand the transports will all be added, e.g. to the production system buffer, to be moved there via the maintenance cycle, based on how we defined the project and the logical component being used.
However, how do you feed those project transports to your maintenance development system using ChaRM? Or do you do that manually? We doubt it. Copying production back into development? No way!
How do you establish that? We have a rough idea, but the documents about retrofit only seem to talk about moving transports one way, as far as we have found.
Many thanks for any feedback.
Juan Carlos
Hi Piyush,
I apologize for closing this question late; it concerns moving projects back into the maintenance stream.
Basically there are 2 solutions as far as we know:
1. What Vivek mentions: performing a cutover and repacking the project transports into two transports in the maintenance stream, one workbench and one customizing. That way, at the end of the day, you end up moving just two transports through the maintenance stream up to production; they contain all your project objects. Thanks to Vivek, again.
This is a very practical and interesting approach. The only reason we did not adopt it is that if we encounter an issue with a project transport object in the maintenance stream (Dev or QA), now that everything is bundled together, we may be stuck right at the time we are getting ready to go live. How tough would that issue be? How easy and quick to fix? How much would it affect the whole project time frame? Those questions made us decide on option 2.
2. What we are doing is that at cutover we move all project transports at the same time to the transport buffer of each of the maintenance stream systems (Dev, QA, and Prod). We first open the gate to move the transports to Dev and test; then to QA and test as well. If there is an issue that cannot be quickly resolved by the project team, we can go to the extreme of using a new feature introduced in ChaRM in SP10, if we are not mistaken, but definitely available in SP12. That feature provides a way to selectively decide which transports of the release go live and which do not. We have not had to use that feature yet, but it is there.
We do not see any risk in adding the transports to the maintenance buffers at the same time. There are ways to control which systems are open for receiving transports, and the project phases, which guarantees no room for error. Deliberate actions would have to be taken (more than one, in our case) to wrongly move a project to go-live before its time.
That is more or less the scenario Piyush.
Hope that explains the scenario. So far there is no decision on really publishing this as a blog. It seems not to be written in stone; consulting with different companies, each adds its own flavor to the recipe and shuffles ideas to get to what they are looking for and what makes them happy.
Juan -
How to integrate the portal system with non-sap system
Hi Gurus,
How to integrate Portal system with non-SAP system?
I know a few ways: using the user mapping UIDPW method,
using the AppIntegrator, and using business repository objects in JCA.
Is there any other way to integrate? If so, please give me the names and the steps for integrating.
Thanks in Advance,
Dharani
Hi Dharani,
You can get information from the following links:
http://help.sap.com/saphelp_nw04s/helpdata/en/43/d08b00d73001b4e10000000a11466f/frameset.htm
https://www.sdn.sap.com/irj/sdn/thread?threadID=744043
SAP CONNECTORS:- Basically, connectors are like middleware that we use to connect to backend systems, including non-SAP systems. I will try to explain with some examples of SAP connectors:
a) SAP Business Connectors:-
A middleware application based on the B2B integration server from webMethods.
The SAP Business Connector enables both bi-directional synchronous communication and asynchronous communication between SAP applications and SAP and non-SAP applications.
The SAP Business Connector makes all SAP functions that are available via BAPIs or IDocs accessible to business partners over the Internet as an XML-based service.
The SAP Business Connector uses the Internet as a communication platform and XML or HTML as the data format. It integrates non-SAP products by using an open, non-proprietary technology.
b) SAP Java Connector:-
SAP Java Connector (SAP JCo) is a middleware component that enables the development of SAP-compatible components and applications in Java. SAP JCo supports communication with the SAP Server in both directions: inbound calls (Java calls ABAP) and outbound calls (ABAP calls Java).
SAP JCo can be implemented with Desktop applications and with Web server applications.
SAP JCo is used as an integrated component in the following applications:
1) SAP Business Connector, for communication with external Java applications
2) SAP Web Application Server, for connecting the integrated J2EE server with the ABAP environment.
SAP JCo can also be implemented as a standalone component, for example to establish communication with the SAP system for individual online (web) applications.
To Know more go through,
SAP Java Connectors
II) ALE Concept:-
ALE is not restricted to communication between SAP systems, it can also be used for connecting SAP Systems to non-SAP systems.
By using IDocs as universal information containers, ALE can reduce the number of different application interfaces to one single interface that can either send IDocs from an SAP system or receive IDocs in an SAP system.
SAP certified Translator Programs can convert IDoc structures into customer-defined structures.
Alternatively, the RFC interface for sending and receiving IDocs can be used in non-SAP systems.
In both cases you need the RFC Library of the RFC Software Development Kit (RFC-SDK).
This link gives a great insight into landscape for Connectivity to Non-SAP systems:-
SAP to Non-SAP systems
III) Communication Between SAP Systems and External (Non-SAP) Systems using RFC:-
When you use RFC for communication with an external (non-SAP) system, you can also implement the SAP Java Connector or the SAP .Net Connector for the conversion of data. However, there are no specific security requirements for these components, since they only perform internal system conversion functions.
The additional security recommendations for communication with external systems in this section make particular reference to cases where an external system is used as a server (SAP calls the external system). If you use an external system as a client (the external system calls SAP), the appropriate SAP-specific security mechanisms are implemented on the SAP side.
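To illustrate the client case (an external system calling SAP over RFC), the pattern on the external side is: open an RFC connection, invoke a remote-enabled function module such as a BAPI, and read the result tables. Below is a minimal sketch, assuming the pyrfc library (a Python binding for the SAP NW RFC SDK); the BAPI shown and all connection parameters are illustrative placeholders for your own landscape:

```python
# Sketch only: calling a BAPI over RFC from an external (non-SAP) client.
# Assumes the pyrfc library; hosts and credentials below are placeholders.

def read_company_codes(conn):
    """List company codes via BAPI_COMPANYCODE_GETLIST.

    `conn` is anything exposing .call(function_name, **parameters),
    such as a pyrfc.Connection; passing it in keeps the logic testable
    without a live SAP system.
    """
    result = conn.call("BAPI_COMPANYCODE_GETLIST")
    return [row["COMP_CODE"] for row in result.get("COMPANYCODE_LIST", [])]

# Usage against a real system (requires pyrfc and the SAP NW RFC SDK):
#   from pyrfc import Connection
#   conn = Connection(ashost="sap.example.com", sysnr="00",
#                     client="100", user="RFCUSER", passwd="secret")
#   print(read_company_codes(conn))
```

The same pattern applies to any remote-enabled function module; only the function name and parameter/table names change.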
This link explains in detail all the security considerations you need to take for connecting to an External Non SAP system like, User administration, Network Security etc.
Communication Between SAP Systems and External (Non-SAP) Systems using RFC
Hope this helps,
Regards,
Rudradev Devulapalli
-
I receive an error message (code -17600) while loading my test sequence after switching from the LabVIEW Development System (2009 f3) to the LabVIEW Run-Time Engine using the Adapter Configuration.
ErrorCode: -17600,
Failed to load a required step's associated module.
LabVIEW Run-Time Engine version 9.0.1f3.
When I switch back to the LV development system, everything is OK, and the sequence loads and runs perfectly.
My TestStand Engine Version is 2012 f1 (5.0.0.262).
I'd appreciate any help on this issue.
Roman
Hi Roman,
There are a couple of things you can try:
1) Determine whether the LabVIEW Run-Time Engine is corrupted in some way. Create a new simple VI with no sub-VIs, using the same LabVIEW development system you used for mass-compiling the VIs. Create a TestStand step that calls this VI and ensure it runs correctly. Now switch your LabVIEW adapter to use the Run-Time Engine and choose the "Auto detect using VI version" option.
Check if the simple VI is loadable and runs without errors in TestStand.
If the step generates the same error, you should try a re-install of the LabVIEW development system.
If not, it's most likely that there is some VI you are using that is not loadable in the LabVIEW Run-Time Engine because:
1) Some sub-VI is still not saved in the right version or bitness. Open the VI hierarchy of the top-level VI that you are calling from TestStand and examine the paths of all the sub-VIs to check whether they are in the folder you mass-compiled; re-save any that are outside this directory.
Also, when you try to close the top-level VI, do you get a prompt to save any unsaved files? If so, they could be the sub-VIs that are not saved in the right version. Save all of them.
Check whether you are loading any VIs programmatically and whether these are compiled and saved in the right version as well.
2) There is some feature you are using in your LabVIEW code that is not supported in the LabVIEW Run-Time Engine. To check this, add your top-level VI to a LabVIEW project and create a new build specification that builds an executable from this VI.
Right-click "Build Specifications" and choose "New->Application(EXE)".
In the Application Properties window, select Source Files and choose the top level VI as the start-up VI.
Save the properties.
Right-click on the newly created build specification and choose Build.
Run this executable (it will be run using the LabVIEW RunTime) and check if the VI has a broken arrow indicating that it cannot be loaded and run in the LabVIEW Runtime Engine.
You might need to examine your code and find the feature which is not supported in the LabVIEW RunTime and find an alternative.
Another thing I forgot to mention last time: if you are using 64-bit LabVIEW with 32-bit TestStand, then executing code using the LabVIEW RTE from TestStand will not work, since the 64-bit LabVIEW RTE DLL cannot be loaded by the 32-bit TestStand process.
If none of the above steps resolve the issue, consider sharing your LabVIEW code so I can take a look.
Regards,
TRJ -
I submitted this support request to NI on 8/12/2010.
When I compile my LV 8.6 app in LV2010 I get this error:
"LabVIEW 10.0 Development System has encountered a problem and needs to close. We are sorry for the inconvenience."
I was told to do a "Mass Compile" of my LV 8.6 app in LV2010...this failed too.
I was then told to go to each and every VI and "Mass Compile" individually... after about the 50th VI this got old quickly, and it still didn't compile. I then sent NI tech support my code. The good news: my LV 8.6 app didn't compile with LV2010 at NI either.
My LV 8.6 app compiles and runs great in LV 8.6. I don't want to be left behind with the newer upgrades and I want to move to LV2010. I have lots of LV8.6 code to maintain and I really don't have the time to debug all of my apps.
I was told this will be looked @ in LV2010 SP1.
One note...back up your LV8.6 data before you move to LV2010. Once your LV8.6 code is compiled in LV2010 you will not be able to go back to LV8.6.
I restored all of my LV8.6 code and I'm back working with LV8.6.
It's a tough call, do I stay in LV8.6 and get left behind?
Do I bite the bullet and try to debug this mess in LV2010?
I was told the compiler is completely different in LV2010. That's great, but one reason I have an NI maintenance agreement is to keep updated with the latest software. I can't afford to re-compile LV code every few years. Like most people, maintaining my apps with customers' revisions and modifications is enough work. I don't want more work!
I was told LV2010 SP1 would likely appear in May or June of 2011. I'd hate to break out my old Turbo Pascal apps again...but hey...they still work! My NI maintenance agreement is due this month too, I guess I'll pay NI one more year, and see if they come up with a solution. But if NI doesn't fix this LV8.6 compile in LV2010 problem...I don't see any value staying current with LV software.
I found another Bug with LV2010...you are going to love this one!
There is a new "LV Update Service". Perfect! I like updating my LV software when new patches are available. But when I click "update", the updater spins over and over, "Checking for New Version". I have let it run ALL day with no results... it just sits and spins.
OK, I know, give NI a break! Yes, LV2010 has a new compiler... and yes, I will renew my NI maintenance agreement. I just want NI to know that failing to compile even one LV8.6 app in LV2010 is not good for customer relations.
Thanks,
Doug
For your update service problem:
Unable to Update Current Version of NI Update Service
Why am I Unable to Update My Version of NI Update Service in Windows Vista or Windows 7? -
SAP to External Non SAP Systems C++ Connections
Hi guys,
I need to develop a C++ application that transfers/receives data from an external non-SAP system to an SAP system (MM, SD, FI modules) and vice versa.
This bidirectional integration should be synchronous and asynchronous, depending on the data flow type.
I was thinking of using IDocs for this communication.
I need to be able to send purchase and sales order requests from the non-SAP system to the SAP system and receive the result of the SAP processes once the SAP transactions have finished.
So I'll have a C++ process that sends IDocs (created with the SDK and sent through RFC to SAP) for SD or MM operations, and an SAP ABAP module that receives these IDocs and starts the internal operations.
After this I need an internal SAP ABAP module that sends IDocs with the results of the internal operations to my external application.
Are IDocs the common way to transfer data (low volume) in this scenario, or is there a better way?
Note: my SAP system version predates the NetWeaver release, so I can't use service-oriented communication...
Thanks in advance!
Hi,
Try sending the data through a BAPI function module,
and call the FM from your C++ program.
Hope this works.
Best of luck!
Thanks,
Ravi Aswani
ORA-28500: connection from ORACLE to a non-Oracle system
Hi, I need to connect to an OWB MySQL database, but when I run a query in SQL*Plus it returns this error.
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Generic Connectivity Using ODBC][H006] The init parameter
<HS_FDS_CONNECT_INFO> is not set. Please set it in init<orasid>.ora file.
ORA-02063: preceding 2 lines from MYSQLINK
listener.ora
# listener.ora Network Configuration File: C:\oraclebi\db\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\oraclebi\db)
(PROGRAM = extproc)
(SID_DESC =
(SID_NAME = MYSQL)
(ORACLE_HOME = C:\oraclebi\db)
(PROGRAM = hsodbc)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = bi.oratechla.com)(PORT = 1521))
tnsnames.ora
# tnsnames.ora Network Configuration File: C:\oraclebi\db\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
BISE1DB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = bi.oratechla.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = bise1db)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
OTCL_MORDOR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.31.210)(PORT = 1620))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = TEST)
MYSQL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = bi.oratechla.com)(PORT = 1521))
(CONNECT_DATA =
(SID = MYSQL))
(HS = OK)
inithMYSQL.ora
# This is a sample agent init file that contains the HS parameters that are
# needed for an ODBC Agent.
# HS init parameters
HS_FDS_CONNECT_INFO = MYSQL
HS_FDS_TRACE_LEVEL = off
# Environment variables required for the non-Oracle system
#set <envvar>=<value>
system dsn --> MYSQL
databaselink
CREATE PUBLIC DATABASE LINK mysqlink CONNECT TO "oracle" IDENTIFIED BY "oracle" using 'ejemplo';
Database link created.
select * from empleado@mysqlink;
ERROR at line 1:
ORA-12154: TNS:could not resolve the connect identifier specified
or
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Generic Connectivity Using ODBC][H006] The init parameter
<HS_FDS_CONNECT_INFO> is not set. Please set it in init<orasid>.ora file.
ORA-02063: preceding 2 lines from MYSQLINK
tnsping
C:\Documents and Settings\Administrator>tnsping MYSQL
TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 06-AUG-2
010 06:31:57
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
C:\oraclebi\db\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = bi.orate
chla.com)(PORT = 1521)) (CONNECT_DATA = (SID = MYSQL)) (HS = OK))
OK (30 msec)
Your setup is failing with:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Generic Connectivity Using ODBC][H006] The init parameter
<HS_FDS_CONNECT_INFO> is not set. Please set it in init<orasid>.ora file.
ORA-02063: preceding 2 lines from MYSQLINK
There is a typo in the name of your gateway init file. You posted the content of inithMYSQL.ora, but the naming convention is init<SID>.ora, which in your case is initMYSQL.ora.
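To make the naming convention concrete: the gateway agent looks for a file named init<SID>.ora, where <SID> is the SID_NAME configured in listener.ora (here, MYSQL). A hedged sketch of the fix, simulated in a scratch directory (on a real installation the file sits in the gateway's admin directory, whose exact path depends on your install):

```python
# Simulate the misnamed HS init file in a scratch directory; on a real
# system the file lives in the gateway admin directory instead.
import os
import tempfile

workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "inithMYSQL.ora"), "w").close()  # misnamed file

# The SID is "MYSQL" (SID_NAME in listener.ora), so the expected file
# name is init<SID>.ora, i.e. initMYSQL.ora
os.rename(os.path.join(workdir, "inithMYSQL.ora"),
          os.path.join(workdir, "initMYSQL.ora"))

print(os.listdir(workdir))   # -> ['initMYSQL.ora']
```

The agent reads the init file when a new gateway session starts, so renaming the file and reconnecting should be enough to retest.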
In addition, please keep in mind that HSODBC has been desupported since 2008; when starting a new configuration you should use the follow-up product DG4ODBC (Database Gateway for ODBC) V11. -
Conversion Error in IDOC : Unicode to Non-Unicode System
An EBP system (Unicode) is posting goods movements to 4.6C (non-Unicode) using message type MBGMCR (function module IDOC_INPUT_MBGMCR).
**In the non-Unicode system**
IDoc inbound error: status 51.
An error has occurred assigning data (E1BP2017_GM_ITEM_CREATE).
In debugging I found that the value assigned from EDIDD (E1BP2017_GM_ITEM_CREATE = IDOC_DATA-SDATA) for field AMOUNT_LC is "0000 0.00",
and the system catches the exception CONVERSION_ERRORS.
**In the Unicode system**
The value for AMOUNT_LC is 0.00.
Can anybody help me solve this issue?
Regards
You can do something like this:
TABLES E1BP2017_GM_ITEM_CREATE .
E1BP2017_GM_ITEM_CREATE-AMOUNT_LC = <field value> .
MOVE E1BP2017_GM_ITEM_CREATE TO IDOC_DATA-SDATA .
Can you tell me the type and value of the variable whose value you are assigning to AMOUNT_LC?
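For background, the reason a value like "0000 0.00" raises CONVERSION_ERRORS can be shown outside ABAP: the flat IDOC_DATA-SDATA string is mapped onto the typed segment structure by fixed offsets, so if a field is shifted or padded wrongly, the amount substring no longer parses as a number. A small Python sketch of the idea (the offset and width are invented for illustration, not the real E1BP2017_GM_ITEM_CREATE layout):

```python
# Illustration only: unpacking an amount field from a fixed-width segment
# string. Offset and width are invented; the real segment layout differs.
AMOUNT_OFFSET = 10
AMOUNT_WIDTH = 9

def unpack_amount(sdata):
    """Slice the amount field out of the flat segment data and convert it."""
    raw = sdata[AMOUNT_OFFSET:AMOUNT_OFFSET + AMOUNT_WIDTH].strip()
    return float(raw)  # raises ValueError on a garbled field

# A correctly filled segment: the amount field holds "     0.00"
good = "MAT0000001     0.00"
# A shifted/padded segment: the amount field holds "0000 0.00",
# the exact value the poster saw in the debugger
bad = "MAT00000010000 0.00"

print(unpack_amount(good))   # -> 0.0
```

The same slicing happens implicitly in ABAP when SDATA is moved onto the typed segment structure; if the Unicode sender pads or shifts the field differently, the non-Unicode receiver's conversion fails with CONVERSION_ERRORS.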
Thanks. -
Hello ,
I am using a shared variable from an OPC client in LabVIEW. When I run the exe file on the development system it works fine, but when I run it on the deployment system it does not work. I am using the same configuration file in the OPC server on both the development and deployment systems. Error: -1950679034 (0x8BBB0006) (Warning).
First, the root cause needs to be identified before taking any action.
I would suggest first checking whether you can access the shared variable hosted on the PC from the RT target in other ways, e.g. using the SVE API (Logos and PSP protocols, DataSocket, etc.).
Check whether an antivirus or firewall is interfering.
Try the same experiment with another PC if you can.
You can also try creating another shared variable on the RT target, binding it to the one on the PC, and trying to access it,
since you have already done all the reinstallations.
Best Regards,
Vijay. -
Generate Change Pointers in non Original System
I have developed, in the inbound BAdIs, a trigger to generate the change pointers in a non-original system.
Do you know if there is a standard way to generate change pointers in a non-original system?
Landscape is as follow:
Original System => Intermediate System => Final System
I would like to generate change pointers in the Intermediate System to transfer master data to the Final System.
Hi,
Thanks for your reply.
The configuration for change pointers is already done. The problem is that when changes are sent from the original system, no change pointer is generated in the intermediate system.
In the end, the intermediate system doesn't send the information to the final system, as no change pointers are generated by the changes received from the original system.
What I'm doing at the moment is generating those change pointers in the inbound BAdI in the intermediate system; I'm wondering whether there is a standard solution for this kind of implementation.
Cheers,
Lucien. -
Carry out repairs in non-original system only if they are urgent
Hi Experts,
I am getting the message "Carry out repairs in non-original system only if they are urgent" when I try to edit a Z function module which I created in the development system. Because of this I can't change the code; I have to use the insert, delete... buttons.
Please help, it's urgent.
Thanks & Regards,
Soumya.
Hello Soumya,
Obviously the Z function module was originally created on another development system; therefore you get this message.
If you have transferred your function group to another development system, then simply change the source system of the function group. You can do this using transaction SE03 (function "Change Object Directory Entries...") or function module TRINT_TADIR_MODIFY.
Regards,
Uwe -
ORA-28500: connection from ORACLE to a non-Oracle system returned this
Hi All,
Please assist: I am doing heterogeneous connectivity from Oracle to SQL Server, and the error below was encountered after creating the DB link.
SQL> select * from SqlTest@jadoo;
select * from SqlTest@jadoo
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ORA-28541: Error in HS init file on line 20.
ORA-02063: preceding 2 lines from JADOO
Hi,
My dg4odbc init file is as below:
$ cat initdg4odbc.ora
# This is a sample agent init file that contains the HS parameters that are
# needed for the Database Gateway for ODBC
# HS init parameters
HS_FDS_CONNECT_INFO = mssqlserver
HS_FDS_TRACE_LEVEL = off
HS_FDS_SHAREABLE_NAME = /usr/opt/ibm/WSII/odbc/ddl/lib/FRmsss21.so
# ODBC specific environment variables
set ODBCINI=/usr/opt/ibm/WSII/odbc/ddl/odbc.ini
# Environment variables required for the non-Oracle system
set <envvar>=<value>
$
My odbc.ini file is:
$ cat odbc.ini
[ODBC Data Sources]
DB2 Wire Protocol=DataDirect 5.1 DB2 Wire Protocol
Informix Wire Protocol=DataDirect 5.1 Informix Wire Protocol
Oracle Wire Protocol=DataDirect 5.0 Oracle Wire Protocol
Oracle=DataDirect 5.1 Oracle
SQLServer Wire Protocol=DataDirect 5.1 SQL Server Wire Protocol
Sybase Wire Protocol=DataDirect 5.1 Sybase Wire Protocol
mssqlserver=MS SQL Server 2008
[DB2 Wire Protocol]
Driver=/home2/alesio/odbc64v51/lib/dddb221.so
Description=DataDirect 5.1 DB2 Wire Protocol
AddStringToCreateTable=
AlternateID=
AlternateServers=
ApplicationUsingThreads=1
CatalogSchema=
CharsetFor65535=0
#Collection applies to OS/390 and AS/400 only
Collection=
ConnectionRetryCount=0
ConnectionRetryDelay=3
#Database applies to DB2 UDB only
Database=<database_name>
DynamicSections=200
GrantAuthid=PUBLIC
GrantExecute=1
IpAddress=<DB2_server_host>
LoadBalancing=0
#Location applies to OS/390 and AS/400 only
Location=<location_name>
LogonID=
Password=
PackageOwner=
ReportCodePageConversionErrors=0
SecurityMechanism=0
TcpPort=<DB2_server_port>
UseCurrentSchema=1
WithHold=1
[Informix Wire Protocol]
Driver=/home2/alesio/odbc64v51/lib/ddifcl21.so
Description=DataDirect 5.1 Informix Wire Protocol
AlternateServers=
ApplicationUsingThreads=1
CancelDetectInterval=0
ConnectionRetryCount=0
ConnectionRetryDelay=3
Database=<database_name>
HostName=<Informix_host>
LoadBalancing=0
LogonID=
Password=
PortNumber=<Informix_server_port>
ReportCodePageConversionErrors=0
ServerName=<Informix_server>
TrimBlankFromIndexName=1
[Oracle Wire Protocol]
Driver=/home2/alesio/odbc64v51/lib/ddora21.so
Description=DataDirect 5.1 Oracle Wire Protocol
AlternateServers=
ApplicationUsingThreads=1
ArraySize=60000
CachedCursorLimit=32
CachedDescLimit=0
CatalogIncludesSynonyms=1
CatalogOptions=0
ConnectionRetryCount=0
ConnectionRetryDelay=3
DefaultLongDataBuffLen=1024
DescribeAtPrepare=0
EnableDescribeParam=0
EnableNcharSupport=0
EnableScrollableCursors=1
EnableStaticCursorsForLongData=0
EnableTimestampWithTimeZone=0
HostName=<Oracle_server>
LoadBalancing=0
LocalTimeZoneOffset=
LockTimeOut=-1
LogonID=
Password=
PortNumber=1521
ProcedureRetResults=0
ReportCodePageConversionErrors=0
ServiceType=0
ServiceName=
SID=<Oracle_SID>
TimeEscapeMapping=0
UseCurrentSchema=1
[Oracle]
Driver=/home2/alesio/odbc64v51/lib/ddor821.so
Description=DataDirect 5.1 Oracle
AlternateServers=
ApplicationUsingThreads=1
ArraySize=60000
CatalogIncludesSynonyms=1
CatalogOptions=0
ClientVersion=9iR2
ConnectionRetryCount=0
ConnectionRetryDelay=3
DefaultLongDataBuffLen=1024
DescribeAtPrepare=0
EnableDescribeParam=0
EnableNcharSupport=0
EnableScrollableCursors=1
EnableStaticCursorsForLongData=0
EnableTimestampWithTimeZone=0
LoadBalancing=0
LocalTimeZoneOffset=
LockTimeOut=-1
LogonID=
OptimizeLongPerformance=0
Password=
ProcedureRetResults=0
ReportCodePageConversionErrors=0
ServerName=<Oracle_server>
TimestampEscapeMapping=0
UseCurrentSchema=1
[SQLServer Wire Protocol]
Driver=/home2/alesio/odbc64v51/lib/ddmsss21.so
Description=DataDirect 5.1 SQL Server Wire Protocol
Address=<SQLServer_host, SQLServer_server_port>
AlternateServers=
AnsiNPW=Yes
ConnectionRetryCount=0
ConnectionRetryDelay=3
Database=<database_name>
LoadBalancing=0
LogonID=
Password=
QuotedId=No
ReportCodePageConversionErrors=0
[Sybase Wire Protocol]
Driver=/home2/alesio/odbc64v51/lib/ddase21.so
Description=DataDirect 5.1 Sybase Wire Protocol
AlternateServers=
ApplicationName=
ApplicationUsingThreads=1
ArraySize=50
Charset=
ConnectionRetryCount=0
ConnectionRetryDelay=3
CursorCacheSize=1
Database=<database_name>
DefaultLongDataBuffLen=1024
EnableDescribeParam=0
EnableQuotedIdentifiers=0
InitializationString=
Language=
LoadBalancing=0
LogonID=
NetworkAddress=<Sybase_host, Sybase_server_port>
OptimizePrepare=1
PacketSize=0
Password=
RaiseErrorPositionBehavior=0
ReportCodePageConversionErrors=0
SelectMethod=0
TruncateTimeTypeFractions=0
WorkStationID=
[mssqlserver]
Driver=/usr/opt/ibm/WSII/odbc/ddl/lib/FRmsss21.so
Description=MS SQL Server Driver for AIX
Database=TraceTest
LogonID=TraceAdmin
Password=sqltest@123$
Address=10.10.1.92\MSSQL2008
QuotedId=YES
QEWSD=41143
AnsiNPW=YES
[ODBC]
IANAAppCodePage=4
InstallDir=/usr/opt/ibm/WSII/odbc/ddl
Trace=0
TraceDll=/usr/opt/ibm/WSII/odbc/dd/lib/odbctrac.so
TraceFile=odbctrace.out
UseCursorLib=0
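Large DataDirect odbc.ini files like the one above are easy to break with a typo in a Driver path. As an illustration only (the helper name and the trimmed sample DSN are mine, not from the thread), here is a small Python sketch that parses an odbc.ini and flags DSNs whose driver library does not exist on disk:

```python
# Minimal sketch: sanity-check DSN entries in an odbc.ini-style file.
# Assumes the INI layout shown above; section/driver names are examples.
import configparser
import os

def check_odbc_ini(text):
    """Return {dsn: driver_path} for DSNs whose Driver library is missing."""
    cp = configparser.ConfigParser(interpolation=None)  # passwords may contain % etc.
    cp.optionxform = str  # preserve key case (Driver, LogonID, ...)
    cp.read_string(text)
    problems = {}
    for dsn in cp.sections():
        if dsn == "ODBC":  # [ODBC] holds global options, it is not a DSN
            continue
        driver = cp.get(dsn, "Driver", fallback=None)
        if driver and not os.path.isfile(driver):
            problems[dsn] = driver
    return problems

sample = """\
[Oracle]
Driver=/no/such/dir/ddor821.so
Description=DataDirect 5.1 Oracle

[ODBC]
IANAAppCodePage=4
Trace=0
"""
print(check_odbc_ini(sample))
# → {'Oracle': '/no/such/dir/ddor821.so'}
```

Running something like this against the file before restarting the application catches the common case where InstallDir or a Driver path points at a directory that was renamed.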
Please advise.
Best Regards,
-
Building Installer Crashes Developement System LV2014SP1
Dear Community
I have run into another major problem that I have not been able to pin down so far.
Hopefully somebody has an idea.
I have a large project (including some LabVIEW classes, DLL calls (HDF5), and Network Shared Variables) that was fine until about one month ago.
Now I did some work on the project. Building the application still works fine, but
when I want to create the installer for the application (independent of the additional installers I use), the whole development system crashes completely.
The last line of the application builder progress window shows:
Adding file:
Labview Elemental IO-error.txt
It seems that when it tries to add this file the system crashes, and I am not able to build the installer.
This is a major problem because we need to roll out the new version to the customer.
I hope somebody has an idea what to test next (I already did intensive testing, even on a different computer with the same project, **so the development system installation itself may not be corrupt**).
Possibly there is a problem with my 'always include' files, but I don't know where.
Hope you have some idea
Thanks
Nottilie

I think I found the problem, and unfortunately it seems to be related to the Viewpoint TSVN Toolkit.
See the following log from the crash report:
<DEBUG_OUTPUT>
6/24/2015 5:48:18.059 PM
DWarn 0x50CBD7C1: Got corruption with error 1097 calling library mxLvProvider.mxx function mxLvApi_SetIconOverlaysBatch
e:\builds\penguin\labview\branches\2014patch\dev\source\execsupp\ExtFuncRunTime.cpp(247) : DWarn 0x50CBD7C1: Got corruption with error 1097 calling library mxLvProvider.mxx function mxLvApi_SetIconOverlaysBatch
minidump id: acdc1a8d-51cf-450c-8d63-fbc10cdecd70
$Id: //labview/branches/2014patch/dev/source/execsupp/ExtFuncRunTime.cpp#1 $
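When a crash log like this names the offending library directly, it can help to pull those DWarn lines out programmatically rather than eyeballing a long log. This is only a sketch based on the log format in the excerpt above; the regex and helper name are my own assumptions:

```python
# Hypothetical helper: extract (library, function) pairs from LabVIEW
# "DWarn ... calling library X function Y" lines, as seen in the log above.
import re

DWARN_RE = re.compile(
    r"DWarn\s+0x[0-9A-Fa-f]+:.*?calling library (\S+) function (\S+)"
)

def suspect_libraries(log_text):
    """Return unique (library, function) pairs from DWarn corruption lines."""
    return sorted(set(DWARN_RE.findall(log_text)))

log = """\
DWarn 0x50CBD7C1: Got corruption with error 1097 calling library mxLvProvider.mxx function mxLvApi_SetIconOverlaysBatch
"""
print(suspect_libraries(log))
# → [('mxLvProvider.mxx', 'mxLvApi_SetIconOverlaysBatch')]
```

Here the extracted library, mxLvProvider.mxx, is what pointed to the TSVN Toolkit as the suspect.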
What does creating a LabVIEW installer have to do with icon overlays? I have no idea, but I know something that is LabVIEW related that uses icon overlays in the project – the Viewpoint TSVN Toolkit! I promptly uninstalled the toolkit and I was able to build all 4 of my installers without a hitch multiple times. Additionally, I’ve noticed that LabVIEW is much more responsive and launch time has been cut from ~60sec to ~20sec.
Although this seems to have fixed the problem (I tested on two machines, both exhibiting the same behavior and both having the toolkit installed), I am disappointed that I no longer have the TSVN Toolkit, because it was extremely useful.
I recently upgraded to the latest 1.8.2.23 version of the TSVN Toolkit; instead, I'm going to install the previous version(s) until I see the problem go away (hopefully).
Does anybody here use the latest TSVN toolkit and have zero issues building an installer that has an app that uses shared variables? I'm not sure if the shared variables part is relevant but it might be.