NetMask change at VSA environment

Hello all,
I'm planning to change the netmask in my VSA environment. Since I will only change the netmask and won't touch any IP addresses, the change should be transparent, as far as my research shows. To do this, I plan to make the changes in maintenance mode and then restart the hosts afterwards.
Should I take care about anything else?
By the way, does anyone know where I can download a trial of vSphere Storage Appliance? Is that possible? I'm unable to find one at the moment.
Thanks and regards!

Similar Messages

  • Advice on Promoting Master Data Services Changes from one environment to another, e.g. DEV to UAT

    Hi,
    Has anyone got experience of creating a script to promote Master Data Services changes from one environment to another, e.g. Development to Production please?
    The changes basically consist of adding several new MDS members which can be accessed via Excel.
    Thanks in advance,
    Kind Regards,
    Kieran.
    Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

    In MDS terminology an Entity is metadata, and Entity Members are the data. 
    Typically, changes to an Entity (i.e. model design changes) are promoted to different environments. Model design changes can be replicated manually in each environment, or you can export the model and import it into the target environment.
    Adding, editing and deleting entity members is performed directly in each environment.
    If you want to selectively import data from a Dev environment, you can use staging.
    David
    David http://blogs.msdn.com/b/dbrowne/
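    If you go the export/import route, a minimal sketch using the MDSModelDeploy command-line tool might look like this (the service, model, and package names are hypothetical examples):
    MDSModelDeploy createpackage -service MDS1 -model Product -version VERSION_1 -package product.pkg -includedata
    MDSModelDeploy deployclone -package product.pkg -service MDS1
    Here createpackage runs against the source environment and deployclone against the target; drop -includedata if you only want the model structure without its members.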

  • How to publish infopath list form changes from one environment to another

    How can we move customized InfoPath list form changes from one environment to another, for example from a Development environment to a UAT environment? We can save it as a source file and update the URL in the .xsn file, but is there any other way to do it?
    Rajasekar A.C

    Hi,
    You can save the list as a template in the source environment, download the template from List Templates, and upload it to the same location in the destination site. The customized InfoPath form should go with it. Let us know if that doesn't work.
    Regards, Kapil ***Please mark answer as Helpful or Answered after consideration***

  • Any change on "cgi-environment-list" causes 503 (11g XE Beta, EPG, CentOS)

    Hello APEX developers!
    I ran into the following problem, maybe you have a hint for me:
    When I try to configure the cgi-environment-list on our main-system (11g XE Beta, EPG, CentOS) I always receive a 503 (HTTP Server temporarily unavailable) on all clients (except via telnet on the XE server itself).
    Funny: On my test-system I can change the cgi-environment-list to whatever I want - it never causes a total 503. The test-system runs smoothly, even if I write "foobar" into the cgi-environment-list (well, charts&help stop working of course).
    With the test-system running, I thought I determined the correct settings for an Apache 2.2 SSL proxy (including proper charts&help).
    I already checked SELinux and iptables on the XE server; no problem there. However, it appears that APEX stops listening on port 8080 over the network. The Apache settings of the main-system and test-system are identical (except ServerName/IP, of course).
    Does anybody have a hint?
    Greetings from Berlin
    -Enrico

  • Netmask Changes in Solaris 10

    Hi Team,
    We tried to change the netmask and broadcast address in Solaris 10 with the steps below:
    ifconfig ipge0 down
    ifconfig ipge0 <ipaddress> netmask <subnet mask> broadcast <address>
    ifconfig ipge0 up
    But the interface did not come up; the command (ifconfig ipge0 up) hung for a long time without giving any error.
    So we changed the new netmask in /etc/inet/netmasks and rebooted the server. The change took effect only after the reboot.
    Could you please confirm whether Solaris 10 requires a reboot for netmask changes?
    Regards,
    R. Rajesh Kannan.

    No, it doesn't need a reboot.
    In fact, you don't even have to take the interface down....
    I would edit /etc/netmasks so that the correct setting was in there for the next time I booted. Then:
    'ifconfig <interface> netmask + broadcast +' (Yes, with the '+' symbols).
    That should do it. No need to down or up.
    Darren
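    A minimal sketch of that no-reboot approach, assuming the interface is ipge0 and a hypothetical 192.168.0.0 network with a 255.255.254.0 mask:
    echo "192.168.0.0 255.255.254.0" >> /etc/netmasks   # persist the mask for future boots
    ifconfig ipge0 netmask + broadcast +                # '+' tells ifconfig to read the mask from /etc/netmasks
    ifconfig ipge0                                      # verify the new mask and broadcast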

  • Change in Development environment brings Production down !!!!!!!!!

    Hello Everyone,
    I am stuck here in a strange issue. Our planning application went live about a month ago. Now we are trying to migrate the Essbase reporting application to Production, but we are unable to do so for following reasons -
    1. As soon as we try to expand the user list in development EAS, production Essbase crashes; doing the same thing in production EAS also makes Essbase crash, and it generates an .xcp file. We contacted Oracle Support, and they advised a patch which fixed this issue.
    2. But here is another problem: when we try to migrate an application from dev to prod, the production Essbase crashes every time.
    We are using the same Active Directory for users in dev and prod. Is that part of the problem?
    A few days ago, one of the admins changed the admin password in Shared Services but didn't change it in Planning (RDBMS and Essbase connection). That broke the Planning forms, so we restored the old password and all the applications and forms are working in Dev again. Does that ring any bells?
    Oracle Support is slow and isn't of much help. Please help me out if you have faced the same issue and were able to resolve it.

    "I am not sure where this repository is. Is there a way to check if it was cloned."
    ^^^ Based on your original post, I am guessing this is an 11.1.2.x install. Is that correct? 11.1.2.0 or 11.1.2.1? Or even 11.1.1.3? If so, did you use Lifecycle Management (LCM) to migrate the application from development to production? Or did you use some other approach?
    Or did you use EAS' Migration Wizard?
    Pre LCM, migration was a nightmare for anything other than Essbase with databases/schemas being migrated across environments, hacks to the migrated tables, etc., etc. In Planning there was a CopyApp that I personally never had a lot of luck with. I just rebuilt the apps in the new environment -- not exactly a migration, but it worked. Hopefully today no one would do anything like this -- I have migrated Essbase and Planning apps across environments without issue (it needs to be sequenced in Planning, but is otherwise easy). Hopefully no one would use the migrate schema approach today.
    Do you know if LCM was used? Or even the olde-fashioned Migration Wizard? Were it my app I'd have all of the security assignments tied up in MaxL statements that I could easily port from one environment to the other, but that's just me.
    Regards,
    Cameron Lackpour
    P.S. I now fully expect John Goodwin/someone else smarter than me to jump in and tell me where I'm wrong on the schema migration and how it actually isn't all that bad but really, for us mere mortals, LCM does everything we need. :)
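    For reference, the MaxL-style security scripting Cameron describes might look something like this (the user, group, and application/database names are hypothetical examples, run from the MaxL shell in each environment):
    create user 'planviewer' identified by 'Passw0rd1';
    alter user 'planviewer' add to group 'PlanningViewers';
    grant read on database PlanApp.Plan1 to 'planviewer';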

  • Directly change in production environment.....

    Hi Experts,
         Hope you are well.
    Could you please share which changes we can make directly in the production environment at the configuration level, without creating a transport request?
    Please share the list of such changes for the production environment.
    Thank you for your understanding.
    Thanks & Regards
    Rajesh

    Hello,
    As per best practice, any configuration change should go through the transport process, while all master data changes are done directly in production.
    Some examples of changes that can be made directly are:
    1. Exchange rates
    2. Sets used in validation/substitution (GS01/GS02)
    3. All condition records
    4. Customer/vendor/GL/Asset/cost center/cost element master data
    5. Tax percentages
    There are many more. Tell us your exact requirement, and then we can guide you better.
    Thanks,
    V V

  • Integrate new MA without changing the existing environment

    Hi,
    Can anyone please advise on the scenario below?
    In the current environment we have deployed a FIM sync server and configured three management agents of type ADDS. There are some rules extensions deployed for Exchange and Lync. The FIM Portal and MS BHOLD are not installed or used. The total number of users is 15k.
    Now we have a new requirement to integrate a new AD and a new LDAP management agent and provision 500 users.
    For these new 500 users, we also need the FIM Portal and MS BHOLD.
    We don't want to make any change to the existing configured MAs. Can we implement it like this?
    1. Install the FIM Portal and MS BHOLD on the existing server on which the FIM sync server is already installed.
    2. Configure the new management agents for the new AD and LDAP.
    3. Configure the FIM MA to allow only the new 500 users to be provisioned into the FIM Service, based on some criteria obtained from the new AD and LDAP.
    4. Do the required configuration of MPRs, sync rules etc. in the FIM Portal.
    5. Do the configuration, then export and import only the new 500 users into BHOLD for approval, group management etc.
    OR
    Install the FIM Portal and BHOLD and then reconfigure all the existing management agents using the FIM Portal GUI.
    Please suggest which is the best way: do we need to reconfigure all the existing MAs using the FIM Portal, or just install the FIM Portal and BHOLD and configure the required MAs for the new 500 users only?
    Thanks
    Harry

    Thanks Dominik for your response,
    So that means there will be no performance issue if we keep the existing MAs on the synchronization server (without the FIM Portal) and install the FIM Portal and BHOLD to manage and create the new AD and LDAP MAs for the new 500 users using the FIM Portal GUI.
    Also, as per my understanding, only 500 CALs will be required, because only the new 500 users will be managed by the FIM Portal (FIMService) and MS BHOLD. CALs will not be required for the remaining 15K existing users, because they will be managed only by the synchronization server (FIMSynchronizationService), not the FIMService.
    Please correct me if I am wrong.
    Thanks
    Harry

  • How can I change the Xcelsius environment language??

    Hi, I downloaded the Xcelsius Engage 2008 trial version and it is installed in Spanish. How can I change it to English?
    P.S. I also installed the English Language Pack.
    Thanks
    Edited by: Bere11y12 on Aug 12, 2010 9:05 AM

    Hola,
    I guess you'll find the relevant property in the file menu...
    To be precise:
    Menu "File" --> "Preferences..." --> "Languages" --> "Current Language"
    I know that you currently only see Spanish expressions...
    I do not talk Spanish but my dictionary says:
    Menu "Fichero" --> "Configuración..." --> "Lengua" --> "Actual Lengua"
    Hope this helps!
    Micha
    P.S.: A restart is needed after changing to the right language... the popup will look like this: "El programa necesita reiniciarse para cambiar el idioma de los productos. ¿Quieres salir?" ("The program needs to restart to change the product language. Do you want to exit?")

  • How it is possible to reflect workbench changes on clustered environment

    Hi All,
    I am running Endeca on MachineA and MachineB, each with its own MDEX engine.
    A dgraph cluster is implemented across MachineA and MachineB, so data is updated on MachineB when I run a baseline update on MachineA.
    I have installed Experience Manager on MachineA and created some pages using Workbench.
    The rules are fired for MachineA without a baseline update, but I noticed that the same rules are not working for MachineB even though both machines are in the cluster.
    When I run a baseline update on MachineA, the rules work on MachineB.
    How can Workbench changes be reflected on both clustered MDEX engines without running a baseline update?
    Please share your suggestions.
    Thanks in Advance,
    SunilN

    Hi Guys,
    I have tried both of the approaches you suggested.
    But the rules are still not fired for MachineB.
    I tested with endeca_jspref on MachineB and the rules are not reflected for MachineB.
    Below is my MachineA AppConfig.xml file :
    <?xml version="1.0" encoding="UTF-8"?>
    <!--
    # This file contains settings for an EAC application.
    -->
    <spr:beans xmlns:spr="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns="http://www.endeca.com/schema/eacToolkit"
    xsi:schemaLocation="
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
    http://www.endeca.com/schema/eacToolkit http://www.endeca.com/schema/eacToolkit/eacToolkit.xsd">
    <app appName="WineStore" eacHost="MachineA" eacPort="8888"
    dataPrefix="WineStore" sslEnabled="false" lockManager="LockManager">
    <working-dir>${ENDECA_PROJECT_DIR}</working-dir>
    <log-dir>./logs</log-dir>
    </app>
    <host id="ITLHost" hostName="MachineA" port="8888" />
    <host id="MDEXHost" hostName="MachineA" port="8888" />
    <host id="MDEXHost2" hostName="MachineB" port="8888" />
    <host id="webstudio" hostName="MachineA" port="8888" >
    <directories>
    <directory name="webstudio-report-dir">./reports</directory>
    </directories>
    </host>
    <lock-manager id="LockManager" releaseLocksOnFailure="true" />
    <script id="InitialSetup">
    <bean-shell-script>
    <![CDATA[
        if (ConfigManager.isWebStudioEnabled()) {
          log.info("Updating Oracle Endeca Workbench configuration...");
          ConfigManager.updateWsConfig();
          log.info("Finished updating Oracle Endeca Workbench.");
        }
    ]]>
    </bean-shell-script>
    </script>
    <script id="BaselineUpdate">
    <log-dir>./logs/provisioned_scripts</log-dir>
    <provisioned-script-command>./control/baseline_update.bat</provisioned-script-command>
    <bean-shell-script>
    <![CDATA[
        log.info("Starting baseline update script.");
        // obtain lock
        if (LockManager.acquireLock("update_lock")) {
          // test if data is ready for processing
          if (Forge.isDataReady()) {
            if (ConfigManager.isWebStudioEnabled()) {
              // get Web Studio config, merge with Dev Studio config
              ConfigManager.downloadWsConfig();
              ConfigManager.fetchMergedConfig();
            } else {
              ConfigManager.fetchDsConfig();
            }
            // clean directories
            Forge.cleanDirs();
            PartialForge.cleanCumulativePartials();
            Dgidx.cleanDirs();
            // fetch extracted data files to forge input
            Forge.getIncomingData();
            LockManager.removeFlag("baseline_data_ready");
            // fetch config files to forge input
            Forge.getConfig();
            // archive logs and run ITL
            Forge.archiveLogDir();
            Forge.run();
            Dgidx.archiveLogDir();
            Dgidx.run();
            // distributed index, update Dgraphs
            DistributeIndexAndApply.run();
            // if Web Studio is integrated, update Web Studio with latest
            // dimension values
            if (ConfigManager.isWebStudioEnabled()) {
              ConfigManager.cleanDirs();
              Forge.getPostForgeDimensions();
              ConfigManager.updateWsDimensions();
            }
            // archive state files, index
            Forge.archiveState();
            Dgidx.archiveIndex();
            // (start or) cycle the LogServer
            LogServer.cycle();
          } else {
            log.warning("Baseline data not ready for processing.");
          }
          // release lock
          LockManager.releaseLock("update_lock");
          log.info("Baseline update script finished.");
        } else {
          log.warning("Failed to obtain lock.");
        }
    ]]>
    </bean-shell-script>
    </script>
    <script id="DistributeIndexAndApply">
    <bean-shell-script>
    <![CDATA[
        DgraphCluster.cleanDirs();
        DgraphCluster.copyIndexToDgraphServers();
        DgraphCluster.applyIndex();
          ]]>
    </bean-shell-script>
    </script>
    <script id="LoadXQueryModules">
    <bean-shell-script>
    <![CDATA[
        DgraphCluster.cleanLocalXQueryDirs();
        DgraphCluster.copyXQueryToDgraphServers();
        DgraphCluster.reloadXqueryModules();
          ]]>
    </bean-shell-script>
    </script>
    <script id="ConfigUpdate">
    <log-dir>./logs/provisioned_scripts</log-dir>
    <provisioned-script-command>./control/runcommand.bat ConfigUpdate run</provisioned-script-command>
    <bean-shell-script>
    <![CDATA[
        log.info("Starting dgraph config update script.");
        if (ConfigManager.isWebStudioEnabled()) {
          ConfigManager.downloadWsDgraphConfig();
          DgraphCluster.cleanLocalDgraphConfigDirs();
          DgraphCluster.copyDgraphConfigToDgraphServers();
          DgraphCluster.applyConfigUpdate();
        } else {
          log.warning("Web Studio integration is disabled. No action will be taken.");
        }
        log.info("Finished updating dgraph config.");
    ]]>
    </bean-shell-script>
    </script>
    <custom-component id="ConfigManager" host-id="ITLHost" class="com.endeca.soleng.eac.toolkit.component.ConfigManagerComponent">
    <properties>
    <property name="webStudioEnabled" value="true" />
    <property name="webStudioHost" value="MachineA" />
    <property name="webStudioPort" value="8006" />
    <property name="webStudioMaintainedFile1" value="thesaurus.xml" />
    <property name="webStudioMaintainedFile2" value="merch_rule_group_default.xml" />
    <property name="webStudioMaintainedFile3" value="merch_rule_group_default_redirects.xml" />
         <property name="webStudioMaintainedFile4" value="merch_rule_group_MobilePages.xml"/>
         <property name="webStudioMaintainedFile5" value="merch_rule_group_NavigationPages.xml"/>
         <property name="webStudioMaintainedFile6" value="merch_rule_group_SearchPages.xml"/>
    </properties>
    <directories>
    <directory name="devStudioConfigDir">./config/pipeline</directory>
    <directory name="webStudioConfigDir">./data/web_studio/config</directory>
    <directory name="webStudioDgraphConfigDir">./data/web_studio/dgraph_config</directory>
    <directory name="mergedConfigDir">./data/complete_index_config</directory>
    <directory name="webStudioTempDir">./data/web_studio/temp</directory>
    </directories>
    </custom-component>
    <forge id="Forge" host-id="ITLHost">
    <properties>
    <property name="numStateBackups" value="10" />
    <property name="numLogBackups" value="10" />
    </properties>
    <directories>
    <directory name="incomingDataDir">./data/incoming</directory>
    <directory name="configDir">./data/complete_index_config</directory>
    <directory name="wsTempDir">./data/web_studio/temp</directory>
    </directories>
    <args>
    <arg>-vw</arg>
    </args>
    <log-dir>./logs/forges/Forge</log-dir>
    <input-dir>./data/processing</input-dir>
    <output-dir>./data/forge_output</output-dir>
    <state-dir>./data/state</state-dir>
    <temp-dir>./data/temp</temp-dir>
    <num-partitions>1</num-partitions>
    <pipeline-file>./data/processing/pipeline.epx</pipeline-file>
    </forge>
    <dgidx id="Dgidx" host-id="ITLHost">
    <properties>
    <property name="numLogBackups" value="10" />
    <property name="numIndexBackups" value="3" />
    </properties>
    <args>
    <arg>-v</arg>
    </args>
    <log-dir>./logs/dgidxs/Dgidx</log-dir>
    <input-dir>./data/forge_output</input-dir>
    <output-dir>./data/dgidx_output</output-dir>
    <temp-dir>./data/temp</temp-dir>
    <run-aspell>true</run-aspell>
    </dgidx>
    <dgraph-cluster id="DgraphCluster" getDataInParallel="true">
    <dgraph ref="Dgraph1" />
    <dgraph ref="Dgraph2" />
         <dgraph ref="Dgraph3" />
    </dgraph-cluster>
    <dgraph-defaults>
    <properties>
    <property name="srcIndexDir" value="./data/dgidx_output" />
    <property name="srcIndexHostId" value="ITLHost" />
    <property name="srcPartialsDir" value="./data/partials/forge_output" />
    <property name="srcPartialsHostId" value="ITLHost" />
    <property name="srcCumulativePartialsDir" value="./data/partials/cumulative_partials" />
    <property name="srcCumulativePartialsHostId" value="ITLHost" />
    <property name="srcDgraphConfigDir" value="./data/web_studio/dgraph_config" />
    <property name="srcDgraphConfigHostId" value="ITLHost" />
    <property name="srcXQueryHostId" value="ITLHost" />
    <property name="srcXQueryDir" value="./config/lib/xquery" />
    <property name="numLogBackups" value="10" />
    <property name="shutdownTimeout" value="30" />
    <property name="numIdleSecondsAfterStop" value="0" />
    </properties>
    <directories>
    <directory name="localIndexDir">./data/dgraphs/local_dgraph_input</directory>
    <directory name="localCumulativePartialsDir">./data/dgraphs/local_cumulative_partials</directory>
    <directory name="localDgraphConfigDir">./data/dgraphs/local_dgraph_config</directory>
    <directory name="localXQueryDir">./data/dgraphs/local_xquery</directory>
    </directories>
    <args>
    <arg>--threads</arg>
    <arg>2</arg>
    <arg>--spl</arg>
    <arg>--dym</arg>
    <arg>--xquery_path</arg>
    <arg>./data/dgraphs/local_xquery</arg>
    </args>
    <startup-timeout>120</startup-timeout>
    </dgraph-defaults>
    <dgraph id="Dgraph1" host-id="MDEXHost" port="15000">
    <properties>
    <property name="restartGroup" value="A" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph1</log-dir>
    <input-dir>./data/dgraphs/Dgraph1/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph1/dgraph_input/updates</update-dir>
    </dgraph>
    <dgraph id="Dgraph2" host-id="MDEXHost" port="15001">
    <properties>
    <property name="restartGroup" value="B" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph2</log-dir>
    <input-dir>./data/dgraphs/Dgraph2/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph2/dgraph_input/updates</update-dir>
    </dgraph>
    <dgraph id="Dgraph3" host-id="MDEXHost2" port="15000">
    <properties>
    <property name="restartGroup" value="B" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph3</log-dir>
    <input-dir>./data/dgraphs/Dgraph3/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph3/dgraph_input/updates</update-dir>
    </dgraph>
    </spr:beans>
    Do I need to change anything else?
    Please advise.
    Thanks
    SunilN
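    One thing worth trying (an educated guess based on the ConfigUpdate script already defined in the AppConfig.xml above, not a confirmed fix): Workbench rule changes are normally pushed to the dgraphs by the ConfigUpdate script rather than by a full baseline, and since DgraphCluster includes Dgraph3 on MachineB it should distribute the rules there as well:
    ./control/runcommand.bat ConfigUpdate run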

  • User Master Data Changes in CUA Environment

    Hello,
    When a new user gets created, we have a requirement for the Indian systems to set the decimal notation in the user master data by default to 1,234,567.89, instead of users setting it manually via SU3. We are using a workflow tool for user creation. Globally our systems are connected to a CUA system: the user gets created in the CUA system and the master data is then distributed through IDoc to the child systems.
    Currently the decimal notation defaults to 1.234.567,89.

    Dear Sagar,
    Try to see which tables change when you modify that field (with the help of ABAP), and then incorporate the same logic in your workflow for Indian users. I think you will need to add one more field to your workflow, and that should do it.
    But make sure you put logic in your workflow so that the new field is only set when it is an Indian user.
    Regards
    Shailesh Mamidwar

  • Advice on steps to take when changing from BES environment to BIS environment?

    I have seen posts on switching to the BIS environment, but I am leaving a company with BES and would like to convert my BB for use with BIS. I think there are BES rules, etc. that I would like a clean slate on, but I wish to keep my address book, calendar, etc. Also, the company backed up over the network rather than using Desktop Manager; I probably need to go back to that. Is there anything I should know?

    Well, if you want to retain your calendar and contacts, back up your device. Typically you will wipe it to clean it of the previous BES settings, but it won't conflict with BIS, so you can use it with BIS without doing anything. Then you have to create a BIS account and add your personal or company email account.
    Click on KUDOS to appreciate our efforts and mark the thread RESOLVED if your issue is resolved.

  • Changing UNIX environment variables?

    Is there an easy way to change the environment from inside Java-code?
    I have tried using the compile version and this works fine.
    Compile version:
    java -DmyVar="$PATH" myClass
    and then used System.setProperty("myVar", "/home/lala");
    The above works fine and changes the PATH variable in UNIX, but I don't want to set these things at compile time. Is there another way?
    Thank you for any hints
    // adde

    I want to change the system's environment variables when I run my Java application; what happens when I close the program does not concern me. And I don't just want to fetch what's currently there, I want to change them and then use them in my program.
    But can you somehow do this without telling the program what variables you are interested in at compile time?
    Thank you for your answers
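    For what it's worth, a small sketch of the two launch-time options (the variable names are just examples). The parent shell's own environment cannot be modified from inside a running JVM; only JVM system properties and the environment passed to child processes can be set:
    java -DmyVar="$PATH" myClass      # JVM system property, read with System.getProperty("myVar")
    MYVAR="/home/lala" java myClass   # environment variable, read with System.getenv("MYVAR")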

  • [SOLVED] Change environment variables with Shell scripts

    How can I change the "BROWSER" environment variable with a shell script; so I can change it on the fly?
    Last edited by oldtimeyjunk (2012-10-31 12:57:42)

    If you just want to do it for BROWSER so that you can change your default web browser on the fly, you could set BROWSER to e.g. ~/bin/mybrowser and create a symlink to the browser you want at ~/bin/mybrowser. Then you could change the symlink at will.
    EDIT: man xdg-settings
    Last edited by cfr (2012-10-31 02:20:16)
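    A minimal sketch of cfr's symlink approach (the paths and browsers are just examples):
    export BROWSER=~/bin/mybrowser               # set once, e.g. in ~/.profile
    ln -sf /usr/bin/firefox ~/bin/mybrowser      # current choice
    ln -sf /usr/bin/chromium ~/bin/mybrowser     # switch on the fly later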

  • How to change the connection for a recurring report

    I have a group of reports running in production on recurring schedules. I need to update the database connection for these as we are changing our server environment.
    I opened a report in Crystal Reports 11, changed the database connection, and saved the report back to the Enterprise Server location (I also have a backup of the original). When I preview the report using the Central Management Console, it uses the updated database connection info, but when I test the recurring report I get a logon failure. When I go to the instance and look at the database logon for the recurring instance, it still shows the old database connection.
    How can this database connection be updated for the current group of instances of these recurring reports?
    Thanks
    BOBJCMC

    Hi Stratos,
    I tried to use the .NET SDK to update the logon info properties as below, but it does not seem to be working. Is this the correct way to update logon info for recurring instances?
    infoObject.ProcessingInfo.Properties["SI_LOGON_INFO"].Properties["SI_LOGON1"].Properties["SI_USER"].Value  = <userid>
    infoObject.ProcessingInfo.Properties["SI_LOGON_INFO"].Properties["SI_LOGON1"].Properties["SI_PASSWORD"].Value  = "pwd"
    Thanks
    Ajith
