Isolate OMBPlus environment

Hello,
I was wondering if there was any way of isolating the OMBplus environment from the total installation package of Warehouse Builder (9.2).
I'm asking because I've written a Tcl script to deploy objects saved from the design repository through "Deploy To File" (XML format) to their respective runtime repositories.
For example, in MY_PROJECT I have a table TAB01 in the module MODULE01. It's no problem to deploy TAB01 to my development environment and its corresponding runtime repository through Warehouse Builder, but I also have Test (T), Acceptance (A) and Production (P) environments into which I cannot deploy through Warehouse Builder; that has to be done by the DBA.
Enter my Tcl script. I deploy TAB01 to TAB01.XML, and the script helps the DBA deploy TAB01 to the T, A and P runtime environments. That way the only thing the DBA needs is the OMBPlus environment, without the hassle of installing all of OWB.
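A minimal sketch of the kind of driver the DBA would run, assuming a stand-alone OMB+ is available. OMBCONNECT/OMBDISCONNECT are real OMB+ commands, but the connection strings, the RUNTIME_REPOS name, and the deploy_from_file proc are all illustrative placeholders for the script's actual deployment logic, not actual OWB API:
    # Illustrative Tcl driver: visit each runtime environment in turn.
    # Connection details are placeholders; deploy_from_file stands in
    # for whatever import/deploy logic the real script implements.
    set environments {
        {T owb_t/pwd_t@thost:1521:tdb}
        {A owb_a/pwd_a@ahost:1521:adb}
        {P owb_p/pwd_p@phost:1521:pdb}
    }
    foreach e $environments {
        set name    [lindex $e 0]
        set connstr [lindex $e 1]
        puts "Deploying TAB01.XML to environment $name ..."
        OMBCONNECT $connstr USE REPOS 'RUNTIME_REPOS'
        deploy_from_file TAB01.XML
        OMBDISCONNECT
    }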
So, my question is: "Is it possible to only install OMBplus for this DBA?"
Regards,
Rob

I have not seen anything like what you are attempting in any current production version of OWB. In the Paris beta, the capability exists to do much of what you are looking for. The repository (and there is only one) does not need to be where your target is.
Some info from the class:
Multi-Config – Why?
- Have a single design for objects
- Change the generated code per database
- Without much work
- In a controlled way
Multi-Config – How?
- Separate logical from physical
- Substitute physical properties into the generated code per database
- Put code on its targeted database
The OWB Paris Experts can be extended. OWB is based on Jacl, a Java implementation of Tcl, which:
- Allows for command-line execution without a design connection
- Is a lightweight implementation when run on the server (Linux and Windows)
Paris only has the client on Linux and Windows (as of the Train the Trainer class I attended), so you will not have complete installations on every platform.
As for 9.2 installations, I recommend you look at upgrading soon to at least 10.1.0.4. The base code for 10.1.0.4 is 9.2.0.2.8, but many patches and infrastructure fixes are what required a complete install for 10.2 database certification. That is, 10.1.0.4 is 9.2.0.2.8 plus patches (9.2.0.4), plus the 10.1 infrastructure and additional bug fixes to make 10.1.0.2, plus additional bug fixes and the infrastructure for 10.2 databases to make 10.1.0.4.
OWB 10.1.0.4.0 for Windows is released and available on Metalink as patchset 4703215.
Use patchset 4703215 for all platforms EXCEPT Windows.
4703215 – Warehouse Builder: Patchset
Description: ORACLE 10G WAREHOUSE BUILDER 10.1.0.4 FULL RELEASE FOR WINDOWS 10.1.0.4
Date: 23-NOV-2005
Size: 509M
Patchset 4703215
Description: ORACLE 10G WAREHOUSE BUILDER PATCH 10.1.0.4 FOR WINDOWS
Product: Warehouse Builder
Release: Oracle 10.1.0.2
Platform: Microsoft Windows (32-bit)
Size: 528M (554,599,541 bytes)
From OTN, 10.1.0.4 is available at:
http://www.oracle.com/technology/software/products/warehouse/index.html
Barry

Similar Messages

  • Is there any way to stop a process execution (all instances)

    Hello,
    I'd like to know if there is any way to stop a specific process from executing on the engine without needing to undeploy it, since we don't want to lose process instances when we need to start the process again later.
    We have a PRD environment with a lot of processes from different departments (developed by different teams and external suppliers), and a feature to stop a specific process and isolate the environment would be very useful for root cause analysis when issues occur in the environment.
    Sometimes stopping a specific process (or several of them) can help in investigating issues that cause the engine to malfunction (lots of auditing enabled, a badly controlled loop, lots of concurrent access), but I could not see this option in the webconsole.
    In version 5.7 a separate EAR was created for each deployed process, so this could be done by stopping the EAR created for that process. Does anyone know how to do this in version 6?
    Thanks

    Well, the bad news is you are right: there really isn't any way to do this in versions after 5.7.
    Starting with 6.0, all projects are deployed under the 'engine ear', so if you stop the engine, you stop all deployed projects.
    I'm a little concerned that you are first seeing these issues in a PRD environment. Is this something you could set up in a DEV, UAT, SIT, or any other similarly built environment to recreate the issues? Then undeploy each of the other projects... and isolate the problem...
    -Kevin

  • ITunes 9.0.3 plays music slow

    I recently upgraded my MBP from Tiger to Snow Leopard 10.6, then to the current security update, then to OS X 10.6.3 (not certain how I was able to get this version from Apple). I had to upgrade the OS to accommodate my new wireless keyboard and Magic Mouse, which also prompted me to upgrade the software for these new wireless peripherals. Up until this point, my laptop was working well.
    THEN I upgraded to iTunes 9.0.3. HUGE MISTAKE. iTunes began playing music slowly; the music wasn't garbled, but the songs sounded out of tune (playing from my hard drive). My wireless Magic Mouse also got jumpy and exhibited delayed reactions.
    I ended up reinstalling OS X 10.6 (from the CD) over the existing OS (10.6.3), and now iTunes works just fine. The reinstall did wipe out my software upgrades for the wireless peripherals, and I have yet to redo them, but I'm just glad iTunes is working well again.
    My questions are: Was the problem with iTunes playing slowly an iTunes issue, or was the root problem the upgrade to OS X 10.6.3, and therefore an OS problem? Why was my mouse also misbehaving? And how was I able to get OS X 10.6.3 from Apple several days ago when they don't have that version available for download on the site today? Confusing.

    Thanks for the link =)
    Tho... I'm a former Apple Product Specialist (and now an engineer for VMware), so I know how to uninstall iTunes and isolate user-environment settings.
    I didn't feel like creating a new user to test, and I posted here to see if I was alone with the issue.
    I was able to resolve the issue in the end by restarting the computer actually. (go fig...)
    The visualizer was also 'stuck': the various pieces of animation weren't moving, but smoke was billowing around... so I hit 'n' to remove the smoke, which sort of helped, and 'f', which unpaused it, but it was still acting funny and sluggish.
    After restarting, things came back as expected, and the visualizer is no longer stuck or choppy. Not sure what caused it, but it appears to be an isolated incident, so we're good =)
    Thanks for checking in, Jason =)

  • Apache php mysql mac 10.4.8 client

    Does anyone know the versions of PHP, MySQL, and Apache on 10.4.8 (if they're shipped with OS X), and how to upgrade them?
      Mac OS X (10.4.8)  

    MySQL is not shipped... unless you're talking about OS X Server, in which case it may be.
    PHP is 4.4.1, and the relevant paths are /usr/bin/php and /usr/lib/php.
    Apache is 1.3.33, and the relevant paths are /usr/libexec/httpd and /usr/sbin.
    I wouldn't recommend upgrading any of them, because it's possible they could be broken by future system updates, although I don't specifically know of that ever happening. For development purposes it's better to isolate your environment: instead of upgrading the system versions, I would recommend installing the versions you want into /usr/local and just setting things up to run from there. Marc Liyanage's PHP packages do this for you by default (www.entropy.ch), and MySQL also comes as a Mac binary that installs into /usr/local. Apache will require some configuration: what I would do is install it into /usr/local, then rename /usr/sbin/httpd to /usr/sbin/httpd-1.3.33 and /usr/sbin/apachectl to /usr/sbin/apachectl-1.3.33, and then symlink /usr/sbin/httpd and /usr/sbin/apachectl to the httpd and apachectl in the /usr/local Apache directory, as sketched below.
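    Expressed in Tcl (to match the scripting used elsewhere in these threads, Tcl 8.4+ syntax), the rename-and-symlink shuffle above would look roughly like this; /usr/local/apache is an assumed install prefix, not a path from the original post:
        # Keep the stock 1.3.33 binaries around under versioned names...
        file rename /usr/sbin/httpd /usr/sbin/httpd-1.3.33
        file rename /usr/sbin/apachectl /usr/sbin/apachectl-1.3.33
        # ...then point the old paths at the /usr/local install
        # (/usr/local/apache is an assumed install prefix).
        file link -symbolic /usr/sbin/httpd /usr/local/apache/bin/httpd
        file link -symbolic /usr/sbin/apachectl /usr/local/apache/bin/apachectl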
    Of course, you could always change the appropriate plists to just run your version of Apache, but I don't know what those are offhand. For what it's worth, unless you need a particular module that's not supported on 1.3.33, I wouldn't even mess with Apache 2.0; it's just not worth the effort in my opinion.

  • Settings...what is the secret

    i have had a heck of a time generating HD clips that will play in QT on my own computer without "staggering."
    i can download an HD movie trailer from apple (armored) that is bigger (1920x912 vs 1780x720), in millions of colors and at 24 fps (mine is 30). the trailer plays, as far as i can tell, flawlessly. both are in H.264.
    what is the secret? what are they doing that i am not?

    tom, that is what i think too, but darned if i can figure out what. spent about two hours on the phone with FCE product specialists at Apple over the last two days, and they weren't ready to guarantee clean running. one suspected underperforming hard drives. what really distresses me is that i can see some of this staggering inside of FCE where it shouldn't happen.
    i took an exported clip to another machine at a local connecting point store and the staggering still happened, suggesting that it was embedded in the clip somehow. i can scrub the clip in QT using the right arrow key held down and all the frames appear to be there, but when running, every approximately four seconds, it staggers.
    i created a separate boot drive with a fresh install of leopard and put only my video card drivers and FCE on it. disconnected everything except the firewire to the camcorder and the USB for the keyboard to try to isolate the environment. it still happened. before i even put the video drivers in, i played the exported clip. every four seconds i got a horizontal distortion that moved from the bottom of the clip to the top. after i installed the video drivers, this became the stagger.
    tomorrow i am going to a retail store with my camcorder to capture the same clip there and see if there is any difference. i don't believe it is the camera as i can hook it up with HDMI to a HDTV and it plays fine. i suppose there could be a firewire problem in the camcorder.
    i am stumped and frustrated.

  • Confusing Performance Results--64-Bit and 32-Bit Geekbench Results

    I don't understand all of my results for the performance tests mentioned in my subject. Can anyone help me understand what is going on?
    I posted the results given below in an earlier thread. I don't think "we" had quite as good an understanding of the 32-bit kernel mode, the 64-bit kernel mode, and related topics back then. So, I am revisiting my results and looking for explanations. My results contradict some of the things we tend to say.
    I have both Geekbench 32-bit and 64-bit, so I made four quick benchmark passes. The Geekbench scores from my four passes are given below. Note that the Geekbench programs are Mac universal binaries; that is, they are made to run on both Intel and PowerPC machines, started up under either the 32-bit or the 64-bit kernel. In general, such programs are not designed specifically to run as 32-bit or 64-bit; being built as one or the other is not the same thing as simply being run after starting up under either the 32-bit or the 64-bit kernel.
    64-bit SL/64-bit Geekbench--4,263
    64-bit SL/32-bit Geekbench--3,910
    32-bit SL/64-bit Geekbench--4,159
    32-bit SL/32-bit Geekbench--3,809
    These results suggest that:
    1. 64-bit programs are faster than 32-bit programs whether run in 64-bit or 32-bit mode (sort of understandable--Incremental RAM is available under 64-bit mode, but should the difference be this large?);
    2. There is a performance penalty for running 64-bit programs in 32-bit mode (not understandable--64 bit programs are supposed to run essentially the same whether run under the 32-bit kernel or the 64-bit kernel); and
    3. 32-bit programs run faster in 64-bit mode than in 32-bit mode (not understandable--32 bit programs are supposed to run essentially the same whether run under the 32-bit kernel or the 64-bit kernel).

    R C-R wrote:
    donv (The Ghost) wrote:
    There does appear to be sensitivity to the kernel mode environment. Thus, there is some chance that we are overstating the case for starting in 32-bit mode not making a performance difference.
    Who is "we" in this?
    I didn't take names, ranks, and serial numbers.
    but it isn't as simple as one kernel outperforming the other. There are (fairly) well-known tradeoffs in using the 64-bit kernel ("K64" in Apple jargon), the most significant of which for most users is the loss of 32-bit driver support. There are also some concerns about the stability of the combination of available 64-bit drivers (including Apple's) & K64 on some of the Macs that Apple "artificially" excludes from booting with K64.
    Right, but these things are not pertinent to my tests. But, "we" also say those things.
    I don't think anybody understands it all (I certainly don't!) but that's the point: there really is no "we" in this.
    As said, I didn't take names, ranks, and serial numbers of the "we" I referred to--those maintaining that there should not be performance differences of the types I have found and emphasized in 2 and 3 above.
    Regarding your GeekBench results in particular, I'm still not sure what you find contradictory about them. 64-bit apps are more efficient than 32-bit ones. The 64-bit CPU instructions available with K64 are much more efficient at some things than their 32-bit counterparts.
    OK, so then we should expect the sorts of performance differences I found, right? And, you agree that the performances differences I am emphasizing make sense, right? And, you would also agree that, barring need to use apps that won't run in 64-bit mode, those seeking maximum performance should start in 64-bit mode, right? You must not have been part of the original "we." But, now, you, I, and apparently others belong to a different "we"--the "we" that knows that such performance differences should be expected in the context of certain tasks (i.e., the important class of memory and CPU intensive tasks emphasized by Geekbench).
    Because GeekBench isolates CPU & memory performance, it is going to be very sensitive to this. But that doesn't mean anyone should expect to see similar results for a practical application, or base the choice (if they have it) of using K64 with their Macs on this or any other benchmark results.
    Right on the first sentence, but, importantly, my test isolates the bit environment. Otherwise, can't we just stick with the issues raised in my post? I have never claimed we should expect or do the things you mention at this time. Further testing of apps and tasks, however, might imply the opposite. But, in general, you are battling mightily against a strawman that doesn't resemble me. I have acknowledged the issues you raise, in effect, as far back as my first post--not in your exact words, but at least implied.
    IOW, one size does not fit all.
    Why preach to the choir?

  • Best deployment practices?

    I've been working with LiveCycle for about 4 weeks, and now I'm wondering what the best practices for deployment are.
    Normally, in any software development effort, there are at least two different environments (Dev - Production) or three (Dev - Test - Production), and each environment has its own configuration so the software runs smoothly in the environment it runs on.
    For example, communications with other machines (web services, database connections, etc.) are highly configurable, because each environment has its own set of copies to isolate one environment from another. Unfortunately, I haven't found a way to achieve this with LiveCycle.
    Say I have a process that's fully tested in the development environment and I want to promote it to the test environment. How can I change the database connections or web service connections for the test environment? And say I have a fully functioning PDF (including web service calls to the development environment) and want to promote it to the test environment. How can I promote it easily, without modifying the PDF to interact with the test environment's web services? The only way I've found is to modify the PDF to talk to the machines in the test environment, which is not feasible if you have to handle a few hundred PDF files, each with tens of web service calls - well, even if it's only one file, it shouldn't have to be this way.
    I think this is a very, very common scenario, but I haven't found any documents or help about it. It seems that LiveCycle is not designed to cover this common development scenario.
    Maybe I'm struggling because I do not know LiveCycle very well. But still, it shouldn't be this hard to figure out.
    So, what are the best deployment practices in LiveCycle?
    Any comments or workarounds are highly appreciated.

    Look into using Configuration Parameters - essentially variables that can be set when you import an LCA from one system to another.
    http://livedocs.adobe.com/livecycle/8.2/wb_help/001237.html

  • Running OMBPlus and EXP/IMP in mixed version environment

    OWB Mixed Environment Gurus,
    Current environment:
    OWB Client: 10.1.0.2.0 on Windows XP Professional
    OWB Server side: 10.1.0.2.0 on UNIX (AIX 5.2)
    Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    UNIX Listener: 9.2.0.4 on UNIX (AIX 5.2)
    Runtime Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    I call this a mixed environment since my OWB stuff is 10g and my database stuff is 9.2.
    Issues:
    1- I can't get the command line exp.sh script to connect to the repository; it returns the famous 'ORA-12154, TNS:listener does not currently know of service requested in connect descriptor'. It looks like the 'owbsetenv.sh' script is changing the value of $ORACLE_HOME to point to the 10g areas. Could that then be causing the system to look for a 10g listener, which doesn't exist since all my databases are 9.2.0.4?
    2- I have the same issue trying to run OMBPlus.sh.
    I am ultimately trying to set up a promotion process using the UNIX command line programs (exp/imp and OMBPlus) to get objects from the TEST environment into the PRODUCTION environment which is a separate repository and target schema on a different machine.
    Any advice on how to successfully operate in this 'mixed' environment is most welcomed.
    Many thanks!
    Gary

    Well it looks like I did it again!
    Total brain fart.
    The problem turned out to be that I wasn't specifying the entire SERVICE_NAME for the repository database; I had been leaving off the domain information. Must be a habit from not having to use it in the TNSNAMES.ORA files.
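    In other words, something like this; all names here are hypothetical, and the point is the fully qualified service name (including the domain) after the last colon:
        # Hypothetical connect with the fully qualified SERVICE_NAME:
        OMBCONNECT owb_user/owb_pass@myhost:1521:owbrep.mydomain.com USE REPOS 'OWB_REPOS'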
    I was able to complete my test export and connect to OMBPlus, and will now try my test import.
    Sorry to clutter the forum but if it helps anyone else with the same affliction I seem to have frequently, I guess that's a small reward.
    Until next time.
    Gary

  • Environment in OMBPlus

    Does anybody know how I can use environment variables or command line parameters in OMBPlus when I run a script?
    Thanks in advance

    You can modify the ombinit.tcl file so that OMBPlus automatically loads a file with custom procs. Just put "source <your tcl proc file>" at the bottom of the file.
    For instance, I created these "connect" and "disconnect" procs to make it easier to move between repositories.
    proc connect {repository} {
      set username <your username>
      set password <your password>
      set hostname <your host>
      set tnsname <your db>
      set port 1521
      set repository [string toupper ${repository}]
      OMBCONNECT $username/$password@$hostname:$port:$tnsname USE REPOS '$repository'
      global CURRENT_REPOSITORY
      set CURRENT_REPOSITORY $repository
      puts "Connected to ${repository}."
    proc disconnect {} {
      OMBDISCONNECT
      global CURRENT_REPOSITORY
      puts "Disconnected from ${CURRENT_REPOSITORY}."
      unset CURRENT_REPOSITORY
    }
    Now, when I connect, it sets up a global variable called $CURRENT_REPOSITORY that I use in other procs (mainly when creating a snapshot or exporting to MDL).
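    For example, a downstream proc can consult that global before doing any work; a minimal sketch (the proc name is illustrative, not from the original post):
        # Hypothetical helper: refuse to run when connect has not
        # set CURRENT_REPOSITORY.
        proc require_connection {} {
            global CURRENT_REPOSITORY
            if {![info exists CURRENT_REPOSITORY]} {
                error "Not connected; run: connect <repository>"
            }
            puts "Working against ${CURRENT_REPOSITORY}."
        }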
    The scripting SDK is a good place to go for more help....
    http://www.oracle.com/technology/products/warehouse/sdk/Scripting%20Pages/Scripting_home.htm

  • Accessing shell environment variables inside OMBPlus script

    Hi,
    I have a problem accessing an environment variable inside an OMBPlus script. I can 'see' the env var values when using tclsh, but when I use OMBPlus.sh it doesn't work.
    Anybody has an example of how to do this?
    For example: puts $::env(MYVAR)
    Thanks,
    Ed

    Hello!
    Issue the command:
    puts [array names env]
    This will print the contents of the env array, which is definitely not the same as your shell's environment variables.
    If you want your environment variable to appear in the env Tcl array, then you must edit the ombplus.bat file and pass it through as a Java option, like this:
    -DMYENV="%MYENV%" (on Windows, assuming you declared MYENV).
    Now puts $env(MYENV) will work.
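    For example, a defensive check inside OMB+ (assuming MYENV was set in the shell and the -D option was added as above):
        # MYENV arrives via the -DMYENV="%MYENV%" option added to ombplus.bat
        if {[info exists env(MYENV)]} {
            puts "MYENV = $env(MYENV)"
        } else {
            puts "MYENV was not propagated; check the -D option in ombplus.bat"
        }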
    Regards,
    Robert

  • OMB PLUS - Problem passing Unix environment variables to OMBPlus

    Due to a requirement to encapsulate deployment of OWB applications, we currently start OMBPlus.sh from our own wrapper ksh script (deploy.ksh) in order to get the new / changed application into the target control center etc.
    We now have a new requirement that means we need to pass the content of the Unix environment across to OMBPlus.sh (and from thence into our deployment tcl scripts).
    No problem, if you believe the Tcl documentation. The entire Unix environment gets dumped into a hash array called 'env', so you can get a variable's value out just by saying $env(unix_variable).
    Sounds great, it should work a treat.
    Except OMBPlus only slightly resembles tclsh.
    The 'env' that gets into OMBPlus bears practically no resemblance to the 'env' that existed before OMBPlus.sh got invoked.
    Does anyone have:
    a decent explanation for why the env gets scrambled (and how to avoid it)?
    or an alternative method of getting the Unix environment variable values into OMBPlus?
    Please do not propose passing them all on the command line, because (would you believe it) the values are database passwords!

    Unfortunately, the Java implementation of Tcl that Oracle used as the basis for OMB+ is NOT a fully-featured implementation. Just try using packages...
    However, understanding why you don't want to hard-code passwords into a file, you can always edit the setowbenv.sh file in your owb/bin/unix directory to grab your specific shell environment variables and propagate them to the Java session.
    Towards the bottom of this env file you will see a section that looks something like this:
    JDK_HOME=../../../jdk
    OWB_HOME=/owb
    ORA_HOME=/owb
    OEM_HOME=/owb
    IAS_HOME=/owb
    ORACLE_HOME=/owb
    CLASSPATH=Personalties.jar:../admin:$MIMB_JAR:
    CLASSPATH_LAUNCHER="-classpath ../admin:../admin/launcher.jar:$CLASSPATH: -DOWB_HOME=$OWB_HOME -DJDK_HOME=$JDK_HOME -DORA_HOME=$ORA_HOME -DOEM_HOME=$OEM_HOME -DIAS_HOME=$IAS_HOME -Doracle.net.tns_admin=$ORA_HOME/network/admin Launcher ../admin/owb.classpath"
    export ORA_HOME
    export OWB_HOME
    export JDK_HOME
    export OEM_HOME
    export IAS_HOME
    export ORACLE_HOME
    You could add the environment variables that you want propagated, include them in the CLASSPATH_LAUNCHER, and then they will turn up in your OMB+ session env array.
    E.g., to propagate an environment variable called MY_DATABASE_PASSWORD you would change this to:
    JDK_HOME=../../../jdk
    OWB_HOME=/owb
    ORA_HOME=/owb
    OEM_HOME=/owb
    IAS_HOME=/owb
    ORACLE_HOME=/owb
    CLASSPATH=Personalties.jar:../admin:$MIMB_JAR:
    CLASSPATH_LAUNCHER="-classpath ../admin:../admin/launcher.jar:$CLASSPATH: -DOWB_HOME=$OWB_HOME -DMY_DATABASE_PASSWORD=${MY_DATABASE_PASSWORD} -DJDK_HOME=$JDK_HOME -DORA_HOME=$ORA_HOME -DOEM_HOME=$OEM_HOME -DIAS_HOME=$IAS_HOME -Doracle.net.tns_admin=$ORA_HOME/network/admin Launcher ../admin/owb.classpath"
    export ORA_HOME
    export OWB_HOME
    export JDK_HOME
    export OEM_HOME
    export IAS_HOME
    export ORACLE_HOME
    So now you have no protected data hardcoded, it will pick up your specific environment variables at runtime, and when you start OMB+ you will be able to do:
    array get env MY_DATABASE_PASSWORD
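    From there, a sketch of how the propagated value might be used; the user/host/service names are illustrative, not from the original post:
        # env(MY_DATABASE_PASSWORD) was populated by the -D option added
        # to CLASSPATH_LAUNCHER above; connection details are placeholders.
        set passwd $env(MY_DATABASE_PASSWORD)
        OMBCONNECT owb_user/$passwd@myhost:1521:mydb USE REPOS 'OWB_REPOS'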
    cheers,
    Mike

  • Printing issue in a 2-node clustered environment

    Hi,
    We have a 2-node Apps Tier (PCP enabled) with print queues running on each node. When users run jobs that print using the same program, the print queue to the printer gets messed up.
    For example: user 1 prints 10 items to printer A, and user 2 prints the same set of 10 items but with different numbers. The final output on the printer comes out of sequence -- 20 items printed as a mix of user 1's and user 2's items.
    The question is: in a PCP environment, is there a way to isolate a print job, i.e. to allow the print job to complete on that printer and block it for other users/print jobs?
    Environment: 11.5.10.1, 10.2.0.3, RedHat 4.x, 2-node PCP enabled apps tiers.
    Thanks,
    Subroto

    Hi,
    The question is: in a PCP environment, is there a way to isolate a print job, i.e. to allow the print job to complete on that printer and block it for other users/print jobs?
    I do not think this can be controlled from the application (unless you use incompatibilities -- see the link below for details).
    http://forums.oracle.com/forums/search.jspa?threadID=&q=Concurrent+AND+Incompatibility&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    You may also check with your printer vendor and see if you can control this at the Printer/OS level.
    Thanks,
    Hussein

  • Install of HA MaxDB in a Sun Cluster 3.2 environment

    Hi. I am installing SAP Content Server using MaxDB in a Sun Cluster 3.2 environment. According to the Sun doc, I must install the x_server as a scalable resource/service. For a scalable resource, I have to make the disk available on both nodes of the cluster. I need to identify all the executables for the x_server so that I can make those mount points an NFS share. From looking at the content of previous non-clustered installs, I am thinking I need to make the directories /sapdb/data and /sapdb/program NFS to isolate the x_server components. Does anyone have experience with this setup? I've placed a log with OSS but wanted to see if anyone else in the forum has done this before. Thanks for any input.

    Hello Dan,
    Please review SAP Note No. 803452.
    The x_server tool is part of the independent database software installation (<independent_program_path>). Using NFS you could set up the <independent_program_path> and <independent_data_path> directories as shared directories.
    Thank you and best regards, Natalia Khlopina

  • SAP Licensing for a Test Environment

    One of our clients has created a test environment physically separate from the production environment (that is, on two separate machines). All their purchased SAP B1 licenses are being used in the production system. If you select additional licenses, they are only temporary and expire 14 days from creation.
    How do you get valid licenses into the Test Environment that will always be active without having to reduce the licenses that are allocated to the Production system?

    Thank you all for your helpful answers.
    I understand about creating a test database in the production system so that it uses the same License Manager.
    However, this does not work if you are trying to test patches or major release upgrades, because once you apply the upgrade / patch to the test database in the production environment, the code base is changed and the production database would then be updated as well.
    In order to isolate the testing of upgrades, patch levels, major functionality, etc., you need to build a separate environment on different hardware to do the testing. That is what we have done.
    Since the license server is not shared, the separate test environment needs to have a license file that reflects its hardware key.
    When I go to create that license file and specify "Test System" for the system, the license file is only created with temporary licenses which are inactivated in 14 days.
    I need to know if there is a way to create the license file without the licenses being inactivated and without reducing the number of licenses available to the production system.

  • OWB/OMBPLUS Set a variable externally.

    Hello everyone,
    Following scenario:
    I have created an expert in OWB. This expert does some actions with a mapping and publishes the output to a PDF file. To publish it, I use an executable somewhere in my filesystem.
    Now I want to create an installer for this expert. I can use OMBIMPORT to import the expert into my OWB workspace, but I also need a link to the executable.
    Two options I considered:
    1st: Set a Windows environment variable during installation and access it with Tcl. This doesn't seem to work because OMBPlus is started in a Java VM and I can't access the environment variables (though see the sketch after this post).
    2nd: Have a variable in my expert ("my_path"). But how do I set this variable with OMBPlus commands? I haven't found a proper command for it yet.
    Do you see a solution for either of my two approaches? Or maybe there is another solution I don't see yet.
    Thanks in advance!
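    For the first approach, a sketch of how the -D workaround from the environment-variable threads above could carry the path into the expert's Tcl; MY_PATH and the fallback path are hypothetical:
        # Assumes the OMB+ launcher script was edited to pass
        # -DMY_PATH="%MY_PATH%", as described in the ombplus.bat thread above.
        if {[info exists env(MY_PATH)]} {
            set my_path $env(MY_PATH)
        } else {
            set my_path {C:\my\default\path}  ;# hypothetical fallback
        }
        puts "Expert will launch: $my_path"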

    Okay, I've come a step further.
    I found OMBALTER EXPERT and took a deeper look into it.
    I tried now:
    OMBALTER EXPERT "MY_EXPERT" ADD VARIABLE "MY_PATH" SET PROPERTIES ( VALUE ) VALUES ( "C:\my path\to\executable" )
    It doesn't show an error, but somehow it doesn't work either.
    Hopefully I read the BNF properly...
    Does anyone of you have experience using OMBALTER EXPERT? Would be great!
